The JWST—identified by the National Research Council as the top priority new initiative for astronomy and physics for the current decade—is a large deployable space-based observatory being developed to study and answer fundamental questions ranging from the formation and structure of the universe to the origin of planetary systems and the origins of life. Often referred to as the replacement for Hubble, the JWST is more of a next-generation telescope—one that scientists believe will be capable of seeing back to the origins of the universe (the Big Bang). The JWST will have a large, segmented primary mirror—6.5 meters (about 21 feet) in diameter—which is a leap ahead in technology over the last generation of mirrors. The observatory requires a sunshield approximately the size of a tennis court to allow it to cool to the extremely cold temperature (around 40 degrees Kelvin, or minus 388 degrees Fahrenheit) necessary for the telescope and science instruments to work. The mirror and the sunshield—both critical components—must fold up to fit inside the launch vehicle and open to their operational configuration once the JWST is in orbit. In addition, the observatory will house science instruments—such as a near-infrared camera, a near-infrared spectrograph, a mid-infrared instrument, and a fine guidance sensor—to enable scientists to conduct various research activities. The JWST is an international collaboration among the United States, the European Space Agency (ESA), and the Canadian Space Agency (CSA). ESA will provide the near-infrared spectrograph science instrument, the optical bench assembly of the mid-infrared instrument, and the launch of the JWST by means of an Ariane 5 expendable launch vehicle. CSA’s contribution will be the fine guidance sensor to enable stable pointing. Recently, the JWST program recognized significant cost growth and schedule slippage. 
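The temperature figures above are a straight unit conversion, which can be checked directly (an illustrative calculation only, not part of the report):

```python
# Convert the observatory's operating temperature from kelvins to Fahrenheit.
kelvin = 40.0
celsius = kelvin - 273.15
fahrenheit = celsius * 9 / 5 + 32
print(round(fahrenheit))  # about -388, matching the figure cited above
```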
In March 2005, NASA identified about $1 billion in cost growth, which increased the JWST’s life-cycle cost estimate from $3.5 billion to $4.5 billion. In addition, the program’s schedule slipped nearly 2 years. As a result, the program began a series of re-baselining efforts to revise its acquisition strategy. In summer 2005, NASA Headquarters chartered two independent review teams—an Independent Review Team from NASA’s Independent Program Assessment Office and a Science Assessment Team—to evaluate the program. The Independent Review Team was charged with examining the program’s new cost/schedule/technical baseline and reported in mid-April 2006 that (1) the JWST’s scientific performance met the expectations of the science community, (2) the technical content was complete and sound, and (3) the Goddard Space Flight Center and contractor teams were effective. However, the team was concerned about the program’s early year funding constraints. The Science Assessment Team, an international team of outside experts, was established to evaluate scientific capabilities of the JWST in the 2015 time frame in light of other astronomical facilities that would be available. The team concluded that the financial savings gained from the reduction in the size of the primary mirror area would not be worth the resultant loss of scientific capabilities. The team recommended relaxing some science requirements and simplifying other aspects of the mission, such as integration and testing, to reduce the program’s cost risk. For example, the team recommended relaxing the contamination requirements, allowing the project to test the mirrors using an innovative approach that will reduce costs. The team also recommended that the JWST de-emphasize the shorter wavelengths, since other astronomical facilities would be available to cover that range. 
The JWST program recently revised its acquisition strategy to conform to NASA’s acquisition policies; however, the program still faces considerable challenges. GAO best practices work has found that using a knowledge-based approach is a key factor in program success. When we initiated our work and before the program’s recently revised acquisition strategy, program officials intended to have NASA commit to the program and start implementation with technologies that were immature according to best practices and without a preliminary design. During our review, we discussed these shortfalls with NASA officials, and they revised their acquisition strategy to align their decision milestones in accordance with NASA acquisition policy. While this is a good step, the current strategy does not fully incorporate a knowledge-based approach that could reduce the program’s risks by ensuring that resources match requirements at program start. By closely following a knowledge-based approach, the JWST program will increase its chances for success and better inform NASA’s decision making. The JWST contains several innovations, including lightweight optics, a deployable sunshield, and a folding segmented mirror. Although the program began risk reduction activities early to develop and mature some technologies, such as the lightweight segmented folding mirror, the program is challenged with maturing some of its other critical technologies. For example, the sunshield, which consists of five layers of membranes, must be folded for launch but then unfurled to its operational configuration—with enough tension to prevent wrinkle patterns that could interfere with the telescope’s mirrors, but not so much tension as to cause tears in the fabric. 
The sunshield must also be aligned with the rest of the observatory so that only the top layer of the sunshield is visible to the primary mirror and a correct angle between the observatory and the sun and other heat-radiating bodies is maintained to enable the telescope and science instruments to preserve the very cold temperature—about 40 degrees Kelvin—critical for achieving the JWST’s mission. In addition, using passive cooling devices, such as heat switches, to allow specific areas of the telescope to cool down presents additional challenges, since these items will be used in new configurations. NASA also recently substituted the cryo-cooler used for the mid-infrared instrument for a lower technology component to save mass. According to JWST officials, the program recently awarded the development contract for the cryo-cooler. In addition, the micro shutter array, which will allow the JWST to program specific patterns of the electromagnetic spectrum for viewing, is a new technology being developed by the Goddard Space Flight Center and is still at a relatively low level of maturity. JWST officials acknowledge that they are concerned about maturing the cryo-cooler and the micro shutter array. In addition, the program also faces design challenges related to the launch vehicle and the observatory’s stability. For example, program officials told us that they may need to request a waiver because the telescope will not fit within the criteria limits of the launch vehicle’s envelope without making design modifications. Furthermore, due to the late selection of the launch vehicle, the project office and prime contractor are just beginning to discuss interfaces, transportation at the launch site, and the additional space issue with Ariane 5 officials. Also, the project faces the unresolved problem of finding the best way to keep the observatory stable. 
The large sunshield, observatory attitude changes, and other effects conspire to produce unbalanced torques, which can make the observatory unstable. The project continues to look at ways to resolve this problem, including using thrusters to rebalance the observatory, but project officials say this will continue to be a challenge. Another overriding concern is NASA’s inability to test the entire observatory in its operational environment, since there is no test facility in the United States large enough to perform this test. The plan is to incrementally test components and subsystems on the ground in laboratories simulating the observatory’s operational environment and to make extensive use of modeling and simulation. According to the memorandum summarizing the January 2006 System Definition Review, a key concern is that the JWST is pushing the limits of ground test facilities and cannot be tested at the observatory level, thus requiring complicated integration and testing with a series of subsystem tests and analyses. In its April 2006 assessment of the JWST program, the Independent Review Team reported that there are several exceptions to the “test as you fly” guideline and that mitigation strategies need to be developed before the end of the preliminary design phase. In March 2005, the JWST program recognized that its cost had grown by about $1 billion, increasing the JWST’s life-cycle cost estimate from $3.5 billion to $4.5 billion. About half of the cost growth was due to schedule slippage—a 1-year schedule slip because of a delay in the decision to use an ESA-supplied Ariane 5 launch vehicle and an additional 10-month slip caused by budget profile limitations in fiscal years 2006 and 2007. More than a third of the cost increase was caused by requirements and other changes. An increase in the program’s contingency funding accounted for the remainder—about 12 percent—of the growth. 
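The approximate shares of the $1 billion growth described above can be tallied as a simple consistency check (the share values are taken from the text; this arithmetic is illustrative, not from the report):

```python
growth = 1.0               # total cost growth, in billions of dollars (approximate)
schedule_share = 0.50      # "about half" from the two schedule slips
contingency_share = 0.12   # "about 12 percent" from added contingency funding
requirements_share = growth - schedule_share - contingency_share
# The remainder, about 0.38, is indeed "more than a third" of the growth,
# consistent with the requirements-and-other-changes figure.
assert requirements_share > 1 / 3
print(round(requirements_share, 2))  # 0.38
```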
Despite an increase in the program’s contingency funding, the Independent Review Team found that the contingency funding is still inadequate. In its April 2006 assessment of the JWST program’s re-baselining, the Independent Review Team expressed concern over the program’s contingency funding, stating that it is too low and phased in too late. According to the team, the program’s contingency from 2006 through 2010 of only $29 million, or about 1.5 percent, after “liens” and “threats” is inadequate. The team also stated that a 25 percent to 30 percent total contingency is appropriate for a program of this complexity. The program’s total contingency is only about 19 percent. The team warned that because of the inadequate contingency, the program’s ability to resolve issues, address program risk areas, and accommodate unknown problems is very limited. Therefore, the team concluded that from a budget perspective, the re-baselined program is not viable for a 2013 launch. The team recommended that before the Non-Advocate Review (NAR) leading to program start, steps should be taken by the Science Mission Directorate to assure that the JWST program contains an adequate time-phased funding contingency to secure a stable launch date. The JWST program remains at risk of incurring additional cost growth and schedule slippage because of the technical challenges that must be resolved—immature technologies, design challenges, and testing restrictions. Our best practices work indicates that immature technology increases the risk of cost increases and schedule slips. Unresolved technology challenges can cascade through a product development cycle often resulting in an unstable design that will require more testing and thus more time and money to fix the problems. Subsequently, it will be difficult to prepare a reliable cost estimate until these challenges are resolved. 
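As a back-of-the-envelope illustration, a $29 million contingency that amounts to about 1.5 percent implies the approximate size of the fiscal year 2006 through 2010 budget it was measured against (the implied budget figure is our inference, not stated in the report):

```python
contingency = 29e6   # dollars, per the Independent Review Team
share = 0.015        # "about 1.5 percent" after liens and threats
implied_budget = contingency / share
print(round(implied_budget / 1e9, 1))  # roughly 1.9, i.e., about $1.9 billion
```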
Our past work on the best practices of product developers in government and industry has found that the use of a knowledge-based approach is a key factor in successfully addressing challenges such as those faced by the JWST program. Over the last several years, we have undertaken a body of work on how leading developers in industry and government use a knowledge-based approach to deliver high quality products on time and within budget. A knowledge-based approach to product development efforts enables developers to be reasonably certain that, at critical junctures or “knowledge points” in the acquisition life cycle, their products are more likely to meet established cost, schedule, and performance baselines and therefore provides them with information needed to make sound investment decisions. The marker for the first juncture—knowledge point 1 (KP1)—occurs just prior to program start. At KP1, the customer’s requirements match the product developer’s resources in terms of knowledge, time, and money. At KP2, the product design is stable; at KP3, production processes are mature. Product development efforts that have not followed a knowledge-based approach can frequently be characterized by poor cost, schedule, and performance outcomes. We recently reported that NASA’s revised acquisition policy for developing flight systems and ground support projects incorporates some aspects of the best practices used by successful developers. For example, NASA policy requires projects to conduct a major decision review—NAR—before moving from formulation to implementation. Further, before moving from formulation to implementation, projects must validate requirements and develop realistic cost and schedule estimates, human capital plans, a preliminary design, and a technology plan—all key elements for matching needs to resources before commitment to a major investment is made at project start. Figure 2 compares NASA’s life cycle with a knowledge-based acquisition life cycle. 
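The knowledge-point sequence described above can be sketched as a simple gating check (an illustrative model only, not NASA's actual review process; the criteria strings paraphrase the text):

```python
# Knowledge-based gating: a program advances past a knowledge point only
# when every exit criterion has demonstrated evidence behind it.
KNOWLEDGE_POINTS = {
    "KP1": [  # just prior to program start: requirements match resources
        "critical technologies mature",
        "requirements finalized",
        "cost and schedule estimates informed by preliminary design",
    ],
    "KP2": ["product design stable"],
    "KP3": ["production processes mature"],
}

def may_proceed(point: str, evidence: dict) -> bool:
    """Return True only if every exit criterion at this point is satisfied."""
    return all(evidence.get(criterion, False)
               for criterion in KNOWLEDGE_POINTS[point])

# A program with one immature critical technology should not pass KP1.
evidence = {
    "critical technologies mature": False,
    "requirements finalized": True,
    "cost and schedule estimates informed by preliminary design": True,
}
print(may_proceed("KP1", evidence))  # False
```

The point of the sketch is the decision rule: if the knowledge attained at a juncture does not justify the investment, the program does not move forward and additional resources are not committed.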
While the policy incorporates elements of a knowledge-based approach, we also reported that NASA’s acquisition policies lack the necessary requirements to ensure that programs proceed and are funded only after demonstrating an adequate level of knowledge at key junctures. For example, NASA policy does not require that programs demonstrate technologies at high levels of maturity at program start. Further, although NASA policy does require project managers to establish a continuum of technical and management reviews, the policy does not specify what these reviews should be nor does it require major decision reviews at other key points in a product’s development. These best practices could be used to further reduce program risks. In order to close the gaps between NASA’s current acquisition environment and best practices on knowledge-based acquisition, we recommended that NASA take steps to ensure that NASA projects follow a knowledge-based approach for product development. Specifically, we recommended that NASA (1) in drafting its systems engineering policy, incorporate requirements for flight systems and ground support projects to capture specific product knowledge by key junctures in project development and use demonstration of this knowledge as exit criteria for decision making at key milestones and (2) revise NASA Procedural Requirements 7120.5C to institute additional major decision reviews following the NAR for flight systems and ground support projects, which result in recommendations to the appropriate decision authority at key milestones. NASA concurred with our recommendations and agreed to revise its policies. One of the resources needed at program start is mature technology. Our best practices work has shown that technology readiness levels (TRL)—a concept developed by NASA—can be used to gauge the maturity of individual technologies. 
Specifically, TRL 6—demonstrating a technology as a fully integrated prototype in a realistic environment—is the level of maturity needed to minimize risks for space systems entering product development. To achieve TRL 6, technology maturity must be demonstrated in a relevant environment using a prototype or model. (See app. II for a detailed description and definition of TRLs and test environments.) A knowledge-based approach also involves the use of incremental markers to ensure that the required knowledge has been attained at each critical juncture. For example, exit criteria at KP1 should include demonstrated maturity of critical technologies, completed trade-offs and finalized requirements, and initial cost and schedule estimates using results from the preliminary design review. The approach ensures that managers will (1) conduct activities to capture relevant product development knowledge, (2) provide evidence that knowledge was captured, and (3) hold decision reviews to determine that appropriate knowledge was captured to allow a move to the next phase. If the knowledge attained at each juncture does not justify the initial investment, the project should not go forward and additional resources should not be committed. Prior to the program’s recent acquisition strategy revision, program officials were not following NASA acquisition policy and were set to commit to the program and start implementation with immature technologies, according to best practices, and without a preliminary design. For instance, the schedule called for convening the NAR before the end of preliminary design. NASA policy indicates that the NAR and Preliminary Design Review (PDR) should be aligned. Even at the pre-NAR in July 2003, the plan had been to have the NAR before the PDR, although the two reviews were closer together than the more recent plan. During our review, we discussed these shortfalls with NASA officials. 
To their credit, they revised their acquisition strategy to conform to NASA policy. Currently, the mission NAR—upon which the program start decision will be based—will be aligned with the mission PDR (scheduled for March 2008). We believe this is a positive step, since it will ensure that a preliminary design—a key element for matching needs to resources—is established before program start. The revised strategy also splits the NAR into two parts—a technical NAR and a mission NAR. The purpose of the technical NAR (scheduled for January 2007) will be to determine whether the project has successfully retired its invention risk, i.e., critical technologies have achieved TRL 6, according to a NASA official. Technology issues will not be revisited after the technical NAR unless problems arise. However, it is unclear if the critical technologies will be demonstrated to a level of fidelity required by best practices at the technical NAR. Furthermore, the strategy does not fully incorporate a knowledge-based approach that could address the program’s risks by ensuring—through the use of exit criteria—that resources match requirements in terms of knowledge, time, and money before program start. For example, under a knowledge-based approach, adequate testing is required to demonstrate that key technologies are mature—at TRL 6—prior to program start. This is particularly important for the JWST, given the program’s challenges with testing restrictions and the fact that the observatory cannot be serviced in space. In some cases, such as the sunshield, backup technologies do not exist, thus increasing the importance of adequately maturing and testing critical technologies. If key components—like the sunshield—fail, then the entire observatory will be lost. This requires greater fidelity in the testing, even as early as demonstrating the maturity of key technologies prior to program start. 
To achieve TRL 6 (the maturity level required by best practices for program start), technology maturity must be demonstrated as a representative model or prototype—which is very close to the actual system in form, fit, and function—in a relevant environment. However, there is risk that the current JWST technology development plan will not result in the appropriate demonstration of technology maturity. For example, the half-scale thermal vacuum test of the entire observatory at Johnson Space Center is currently planned for September 2008, and so the knowledge gained regarding the maturity of the sunshield’s thermal and dynamic performance is pushed out 6 months beyond the PDR/NAR/program start date of March 2008. When JWST program officials briefed us in August 2005, the thermal and dynamic performance of the sunshield were both assessed to be at TRL 4, and the plan to get to TRL 6 was to test these subsystems during this half-scale thermal vacuum test. However, in fall 2005 program officials reviewed the technology development plan and concluded that only the materials for the sunshield’s membrane are technology development items, while other items affecting the configuration and deployment of the sunshield—such as thermal and dynamic performance—are considered engineering challenges. JWST officials stated that earlier testing of sample materials demonstrated the sunshield’s thermal performance and a demonstration using a 1/10th scale model demonstrated dynamic performance and satisfied TRL 6 requirements. However, we have found in our best practices work that demonstrating a technology to TRL 6 typically requires that a prototype—close to the form, fit, and functionality intended for the product—be demonstrated in an environment that closely represents the anticipated operational environment. 
In our past reviews of development programs, we have found that if this level of maturity is not demonstrated before a product development effort is launched, a program increases the likelihood of cost growth and schedule delays as it tries to close the knowledge gap between the technologies’ maturity level and the product’s design requirements. The JWST program’s inadequate contingency runs contrary to another premise of a knowledge-based approach—having sufficient resources in terms of funding available to ensure a program’s success. As discussed in an earlier section, the Independent Review Team stated that the program’s contingency from 2006 through 2010 of only about 1.5 percent after “liens” and “threats” is inadequate. The team warned that, because of the inadequate contingency, the program’s ability to resolve issues, address program risk areas, and accommodate unknown problems is very limited. The team concluded that, from a budget perspective, the re-baselined program is not viable for a 2013 launch. A good basis for making informed investment decisions is essential in the fiscally constrained environment that now exists across the federal government. Our nation faces large, growing, and structural long-term fiscal imbalances. Given the severity of those fiscal challenges and the wide range of federal programs, hard choices need to be considered across the government, and NASA is no exception. NASA must compete with other departments and agencies for part of a constricted discretionary spending budget. In the near future, NASA will need to determine the resources necessary to develop the systems and supporting technologies to achieve the President’s Vision for Space Exploration—while simultaneously financing its other priority programs—and structure its investment strategy accordingly. 
Initial implementation of the Vision as explained in NASA’s Exploration Systems Architecture Study calls for completing the International Space Station, developing a new crew exploration vehicle, and returning to the moon no later than 2020. NASA estimates that it will cost approximately $104 billion over the next 13 years to accomplish these initial goals. These priorities, along with NASA’s other missions, will be competing within NASA for funding. It will likely be difficult for decision makers to agree on which projects to invest in and which projects, if any, to terminate. The NASA Administrator has acknowledged that NASA faces difficult choices about its missions in the future—for example, between human space flight, science, and aeronautics missions. In the President’s fiscal year 2007 budget request for NASA, the JWST has the largest budget allocation of all programs in the Science Mission Directorate’s Astrophysics Division for the 5-year budget horizon from fiscal year 2007 through fiscal year 2011—nearly $2 billion of the division’s $6.9 billion total budget, or about 29 percent. An inadequately informed decision to commit to the estimated $4.5 billion total funding for the JWST would significantly impact NASA’s science portfolio, since funding given to the JWST will not be available for other programs. Early in the planning for how to handle the JWST program’s cost growth, NASA officials recognized the impact that the JWST’s cost growth could have on other programs. In a July 2005 briefing to the Agency Program Management Council soon after the cost growth was identified, NASA officials stated that “something must give if JWST stays in the portfolio.” The choices discussed were (1) relaxing requirements or (2) adding budget and schedule, which would mean that other missions would be deferred or deleted from the portfolio. 
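The budget share cited above follows directly from the two dollar figures (again, a quick illustrative check rather than material from the report):

```python
jwst_allocation = 2.0   # billions of dollars, FY2007-FY2011 ("nearly $2 billion")
division_total = 6.9    # billions of dollars, Astrophysics Division total
share = jwst_allocation / division_total
print(round(share * 100))  # about 29 percent, as stated
```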
In addition, committing to the JWST program obligates the government contractually, since it allows the prime contractor to begin implementation tasks on the very long prime contract extending from October 2002 through launch—currently planned for June 2013—plus one year. The contract states that until the project achieves the implementation milestone, contract spending is limited to formulation activities, except for long-lead items and other activities approved in writing. After the implementation milestone is achieved at program start, the contracting officer will notify the contractor by letter to proceed to implementation. According to the contracting officer, the assumption is that this is the go-ahead for the whole program. To make well-informed decisions, NASA needs the knowledge to assess the value of its programs—like the JWST program—in relationship to each other. In May 2004, we reported that, of 27 NASA programs we examined, 17 had cost increases averaging about 31 percent. One of the programs in our sample was another infrared telescope program—the Spitzer Space Telescope—and it was plagued by schedule slippages caused by delays in the delivery of components, flight software, the mission operation system, and launch delays, all contributing to a 29.3 percent increase in program costs. In general, we found the programs in the sample lacked sufficient knowledge needed to make informed acquisition decisions. Insufficient knowledge to make informed investment decisions can further complicate the already-difficult choices that NASA faces. Conversely, sufficient knowledge at key junctures can facilitate well-informed investment decisions and protect the government from incurring contractual liabilities before it is appropriate. A knowledge-based approach ensures that comprehensive and comparable programmatic data are obtained. 
Within the JWST program, NASA officials have accomplished a great deal, such as the development of the large, segmented mirror that is a leap ahead in technology. Moreover, the program has support from the larger scientific community. To enhance the program’s chances for success, program officials have chosen a path forward which follows NASA’s policies for ensuring readiness to proceed into implementation/product development. However, the JWST program’s revised strategy does not fully address the risks associated with the many challenges that the program still faces—including maturing technology, mitigating testing restrictions, and ensuring that adequate funding is available for contingencies. This puts the program at risk of further cost growth and schedule slippage. The program needs to have sufficient knowledge at key junctures to successfully address its challenges and use incremental markers to make certain that resources in terms of knowledge, time, workforce, and money match the requirements. Given the severity of the fiscal challenges our nation faces and the wide range of competing federal programs, hard choices need to be considered across the government, and NASA is no exception. Using a knowledge-based approach for NASA’s new development programs such as the JWST could help the agency make the difficult choices about how to allocate its limited budget resources among competing priorities by utilizing common and consistent criteria in program evaluations. To increase the JWST program’s chances of successful product development, we recommend that the NASA Administrator take the following actions: Direct the JWST program to fully apply a knowledge-based acquisition approach—to include incremental markers—that will not only ensure that adequate knowledge is attained at key decision points, but also hold the program accountable. 
These markers should include, but not be limited to, schedules that demonstrate the maturity of all critical technologies prior to program start; criteria to ensure the validity of test articles; criteria to demonstrate that mature component designs being used in new configurations meet form, fit, and function standards; and criteria to ensure that sufficient contingency funding can be provided and phased appropriately. Instruct the JWST program to continue to adhere to NASA acquisition policy and base the program’s go/no-go review (NAR) decision not only on adherence to that policy, but also on (1) the program’s ability to demonstrate whether it is meeting the knowledge markers outlined earlier and (2) whether adequate funds are available to execute the program. In written comments on a draft of this report, NASA concurred with our two recommendations and outlined actions that the agency plans to take to implement such recommendations. NASA said that it endorses the knowledge-based approach recommended and that it believes the current JWST program plan is consistent with that approach. NASA’s recognition of the value of obtaining knowledge prior to moving to subsequent acquisition phases and acknowledgment that it plans to use exit criteria as knowledge markers for other JWST mission-level reviews are welcome steps toward establishing an agency-wide risk reduction culture. Now, it will be critical for NASA decision makers to enforce adherence to the discipline of the knowledge-based approach and ensure that critical product knowledge is indeed demonstrated before allowing the JWST program to proceed. In the years ahead, NASA decision makers will likely face pressures to grant waivers for going forward with immature technologies, allow programs to be restructured, and thus marginalize accountability. 
For a program such as the JWST, whose investment is already substantial and successful outcome eagerly anticipated by the science community, adherence to such knowledge-based principles will need to be strictly enforced. As identified in this report, NASA would be well served by applying its own technology readiness standards (reprinted in appendix II) as part of its exit criteria, and demonstrating that critical technologies are at the TRL 6 level prior to program start using a representative model or prototype—which is very close to the actual system in form, fit, and function—in a relevant environment. Emphasis by decision makers on the application of “form, fit, and function standards” and “validity of test articles” as exit criteria for the JWST program start and entry into Phase C will help address our concern that the current JWST technology development plan may not result in the appropriate demonstration of technology maturity prior to program start. NASA’s comments are reprinted in appendix III. We are sending copies of this report to interested congressional committees and to the NASA Administrator. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or lia@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are acknowledged in appendix IV. To assess the extent to which the JWST acquisition strategy follows NASA policy and GAO best practices for ensuring readiness to proceed into implementation, we reviewed NASA policy on program management and compared the JWST project office’s management approach to NASA policy. Additionally, we analyzed the JWST acquisition strategy and benchmarked it to best practices. 
We interviewed NASA and contractor officials to clarify our understanding of the JWST management approach and technology development plan in relation to NASA policy and guidelines and best practices. To deepen our understanding of JWST technical issues, we attended the 3-day Sunshield Subsystem Concept Design Review as well as the 4-day JWST System Definition Review. To evaluate the impact of the JWST acquisition strategy on NASA's ability to assess the program and make informed investment decisions in the context of its other priorities, we analyzed available JWST cost and schedule data and conducted interviews with program officials to clarify our understanding of the information. Furthermore, we requested and reviewed documentary support breaking out the components of the cost increases and schedule slippage. We also interviewed program officials to clarify our understanding of the potential impact that investment in the JWST will have on other NASA programs. In addition, we reviewed statements of the NASA Administrator, budget documents, GAO's High-Risk Series, and GAO's 21st Century Challenges to better evaluate the JWST's significance in the larger NASA and federal government context. To accomplish our work, we visited NASA Headquarters, Washington, D.C.; Goddard Space Flight Center, Greenbelt, Maryland; Marshall Space Flight Center, Huntsville, Alabama; Northrop Grumman Space Technology, Redondo Beach, California; and Ball Aerospace and Technologies Corporation, Boulder, Colorado. We performed our review from August 2005 through May 2006 in accordance with generally accepted government auditing standards.

Technology readiness levels (TRL):

TRL 1. Basic principles observed and reported. Demonstration: None. (Paper studies and analysis.)

TRL 2. Technology concept and/or application formulated. Invention begins. Once basic principles are observed, practical applications can be invented. The application is speculative, and there is no proof or detailed analysis to support the assumption. Examples are still limited to paper studies. Demonstration: None. (Paper studies and analysis.)

TRL 3. Analytical and experimental critical function and/or characteristic proof of concept. Active research and development is initiated. This includes analytical studies and laboratory studies to physically validate analytical predictions of separate elements of the technology. Examples include components that are not yet integrated or representative. Demonstration: Analytical studies and demonstration of nonscale individual components (pieces of subsystem).

TRL 4. Component and/or breadboard validation in laboratory environment. Basic technological components are integrated to establish that the pieces will work together. This is relatively "low fidelity" compared to the eventual system. Examples include integration of "ad hoc" hardware in a laboratory. Demonstration: Low-fidelity breadboard. Integration of nonscale components to show pieces will work together. Not fully functional or form or fit, but representative of a technically feasible approach suitable for flight articles.

TRL 5. Component and/or breadboard validation in relevant environment. Fidelity of breadboard technology increases significantly. The basic technological components are integrated with reasonably realistic supporting elements so that the technology can be tested in a simulated environment. Examples include "high fidelity" laboratory integration of components. Demonstration: High-fidelity breadboard. Functionally equivalent but not necessarily form and/or fit (size, weight, materials, etc.). Should be approaching appropriate scale. May include integration of several components with reasonably realistic support elements/subsystems to demonstrate functionality. Lab demonstrating functionality but not form and fit. May include flight demonstration of breadboard in surrogate aircraft. Technology ready for detailed design studies.

TRL 6. System/subsystem model or prototype demonstration in a relevant environment. Representative model or prototype system, which is well beyond the breadboard tested for TRL 5, is tested in a relevant environment. Represents a major step up in a technology's demonstrated readiness. Examples include testing a prototype in a high-fidelity laboratory environment or in a simulated operational environment. Demonstration: Prototype. Should be very close to form, fit, and function. Probably includes the integration of many new components and realistic supporting elements/subsystems if needed to demonstrate full functionality of the subsystem. High-fidelity lab demonstration or limited/restricted flight demonstration for a relevant environment. Integration of technology is well defined.

TRL 7. System prototype demonstration in an operational environment. Prototype near or at planned operational system. Represents a major step up from TRL 6, requiring the demonstration of an actual system prototype in an operational environment, such as in an aircraft, vehicle, or space. Examples include testing the prototype in a test bed aircraft. Demonstration: Prototype. Should be form, fit, and function integrated with other key supporting elements/subsystems to demonstrate full functionality of the subsystem. Flight demonstration in a representative operational environment such as a flying test bed or demonstrator aircraft. Technology is well substantiated with test data.

TRL 8. Actual system completed and "flight qualified" through test and demonstration. Technology has been proven to work in its final form and under expected conditions. In almost all cases, this TRL represents the end of true system development. Examples include developmental test and evaluation of the system in its intended weapon system to determine if it meets design specifications. Demonstration: Developmental test and evaluation in the actual system application.

TRL 9. Actual system "flight proven" through successful mission operations. Actual application of the technology in its final form and under mission conditions, such as those encountered in operational test and evaluation. In almost all cases, this is the end of the last "bug fixing" aspects of true system development. Examples include using the system under operational mission conditions. Demonstration: Operational test and evaluation in operational mission conditions.

In addition to the individual named above, Jim Morrison, Assistant Director; Greg Campbell; Keith Rhodes; Sylvia Schatz; Erin Schoening; Hai Tran; and Ruthie Williamson made key contributions to this report.

NASA: Implementing a Knowledge-Based Acquisition Framework Could Lead to Better Investment Decisions and Project Outcomes.
GAO-06-218. Washington, D.C.: December 21, 2005. NASA’s Space Vision: Business Case for Prometheus 1 Needed to Ensure Requirements Match Available Resources. GAO-05-242. Washington, D.C.: February 28, 2005. Space Acquisitions: Stronger Development Practices and Investment Planning Need to Address Continuing Problems. GAO-05-891T. Washington, D.C.: July 12, 2005. Defense Acquisitions: Incentives and Pressures That Drive Problems Affecting Satellite and Related Acquisitions. GAO-05-570R. Washington, D.C.: June 23, 2005. Defense Acquisitions: Space-Based Radar Effort Needs Additional Knowledge before Starting Development. GAO-04-759. Washington, D.C.: July 23, 2004. Defense Acquisitions: Risks Posed by DOD’s New Space Systems Acquisition Policy. GAO-04-379R. Washington, D.C.: January 29, 2004. Space Acquisitions: Committing Prematurely to the Transformational Satellite Program Elevates Risks for Poor Cost, Schedule, and Performance Outcomes. GAO-04-71R. Washington, D.C.: December 4, 2003. Defense Acquisitions: Improvements Needed in Space Systems Acquisition Policy to Optimize Growing Investment in Space. GAO-04-253T. Washington, D.C.: November 18, 2003. Defense Acquisitions: Despite Restructuring, SBIRS High Program Remains at Risk of Cost and Schedule Overruns. GAO-04-48. Washington, D.C.: October 31, 2003. Defense Acquisitions: Improvements Needed in Space Systems Acquisition Management Policy. GAO-03-1073. Washington, D.C.: September 15, 2003. Military Space Operations: Common Problems and Their Effects on Satellite and Related Acquisitions. GAO-03-825R. Washington, D.C.: June 2, 2003. Military Space Operations: Planning, Funding, and Acquisition Challenges Facing Efforts to Strengthen Space Control. GAO-02-738. Washington, D.C.: September 23, 2002. Polar-Orbiting Environmental Satellites: Status, Plans, and Future Data Management Challenges. GAO-02-684T. Washington, D.C.: July 24, 2002. 
Defense Acquisitions: Space-Based Infrared System-Low at Risk of Missing Initial Deployment Date. GAO-01-6. Washington, D.C.: February 28, 2001. Defense Acquisitions: Assessments of Selected Major Weapon Programs. GAO-05-301. Washington, D.C.: March 31, 2005. Defense Acquisitions: Stronger Management Practices Are Needed to Improve DOD’s Software-Intensive Weapon Acquisitions. GAO-04-393. Washington, D.C.: March 1, 2004. Defense Acquisitions: Assessments of Selected Major Weapon Programs. GAO-04-248. Washington, D.C.: March 31, 2004. Defense Acquisitions: DOD’s Revised Policy Emphasizes Best Practices, but More Controls Are Needed. GAO-04-53. Washington, D.C.: November 10, 2003. Defense Acquisitions: Assessments of Selected Major Weapon Programs. GAO-03-476. Washington, D.C.: May 15, 2003. Best Practices: Setting Requirements Differently Could Reduce Weapon Systems’ Total Ownership Costs. GAO-03-57. Washington, D.C.: February 11, 2003. Best Practices: Capturing Design and Manufacturing Knowledge Early Improves Acquisition Outcomes. GAO-02-701. Washington, D.C.: July 15, 2002. Defense Acquisitions: DOD Faces Challenges in Implementing Best Practices. GAO-02-469T. Washington, D.C.: February 27, 2002. Best Practices: Better Matching of Needs and Resources Will Lead to Better Weapon System Outcomes. GAO-01-288. Washington, D.C.: March 8, 2001. Best Practices: A More Constructive Test Approach Is Key to Better Weapon System Outcomes. GAO/NSIAD-00-199. Washington, D.C.: July 31, 2000. Defense Acquisition: Employing Best Practices Can Shape Better Weapon System Decisions. GAO/T-NSIAD-00-137. Washington, D.C.: April 26, 2000. Best Practices: DOD Training Can Do More to Help Weapon System Program Implement Best Practices. GAO/NSIAD-99-206. Washington, D.C.: August 16, 1999. Best Practices: Better Management of Technology Development Can Improve Weapon System Outcomes. GAO/NSIAD-99-162. Washington, D.C.: July 30, 1999. 
Defense Acquisitions: Best Commercial Practices Can Improve Program Outcomes. GAO/T-NSIAD-99-116. Washington, D.C.: March 17, 1999. Defense Acquisition: Improved Program Outcomes Are Possible. GAO/T-NSIAD-98-123. Washington, D.C.: March 18, 1998. Best Practices: Successful Application to Weapon Acquisition Requires Changes in DOD's Environment. GAO/NSIAD-98-56. Washington, D.C.: February 24, 1998. Major Acquisitions: Significant Changes Underway in DOD's Earned Value Management Process. GAO/NSIAD-97-108. Washington, D.C.: May 5, 1997. Best Practices: Commercial Quality Assurance Practices Offer Improvements for DOD. GAO/NSIAD-96-162. Washington, D.C.: August 26, 1996.

The National Aeronautics and Space Administration's (NASA) James Webb Space Telescope (JWST) is being designed to explore the origins and nature of the universe. It should allow scientists to look deeper into space--and thus farther back in time--than ever before. The program, however, has experienced cost growth of more than $1 billion, and its schedule has slipped nearly 2 years. NASA recently restructured the program and now anticipates a launch no sooner than June 2013. Because of the cost and schedule problems, under the Comptroller General's authority, we reviewed the JWST program to determine the extent to which this procurement follows NASA acquisition policy and GAO best practices for ensuring that adequate product knowledge is used to make informed investment decisions.

Although the JWST program recently revised its acquisition strategy to conform to NASA's acquisition policies, the program still faces considerable challenges because it has not fully implemented a knowledge-based approach, which our past work has shown is often a key factor in program success. In a recent report, we recommended that NASA take steps to ensure that projects follow a knowledge-based approach for product development. NASA concurred and revised its acquisition policy.
When we initiated our work, before the JWST program revised its acquisition strategy, program officials intended to have NASA commit to program start (the end of the formulation phase and the beginning of the implementation phase) with technologies that were immature according to best practices and without a preliminary design. During our review, we discussed these shortfalls with NASA officials, and they revised their acquisition strategy to conform to NASA policy. However, the current strategy still does not fully incorporate a knowledge-based approach, which ensures that resources match requirements in terms of knowledge, time, and money before program start. If program officials follow the current plan, the maturity of key technologies may not be adequately tested prior to program start. In addition, it appears the program will not have sufficient funding resources to ensure the program's success. In light of the fiscally constrained environment the federal government and NASA will face in the years ahead, adopting a knowledge-based approach will not only increase the JWST program's chances for success but also lay the foundation for comparison between competing programs.
Although estimates of the employment rate for individuals with disabilities vary, researchers and advocates agree that it is much lower than the employment rate for the U.S. workforce as a whole, particularly for individuals whose impairments are severe enough to affect their ability to work. The stated purpose of the provisions of section 14(c) of FLSA is to prevent the curtailment of employment opportunities for individuals with disabilities. An individual with a disability eligible to be paid special minimum wages is defined in the regulations as someone "whose earning or productive capacity is impaired by a physical or mental disability, including those relating to age or injury, for the work to be performed." It is difficult, however, to determine the legislation's impact on the employment opportunities of individuals whose disabilities are severe enough to make them eligible to be paid special minimum wages, including whether the legislation achieves its purpose of not curtailing employment opportunities for these individuals. Most individuals with disabilities who are employed are not paid special minimum wages under the provisions of section 14(c) of FLSA. Many workers with disabilities cannot be paid special minimum wages because their impairments are not severe enough to affect their ability to perform their jobs; others receive accommodations and special support services that allow them to earn at least the minimum wage. Individuals with disabilities work in many different types of employment settings, including jobs in businesses where they work mainly with individuals who do not have disabilities and in work centers where they often work primarily with other individuals with disabilities. Some individuals with disabilities work in jobs with no special support services.
In such cases, they generally are not paid special minimum wages under the provisions of section 14(c) because they are able to perform the work at a fully productive level. Individuals with impairments severe enough to affect their ability to perform the work, however, often require support services such as job coaches or special on-site supervision in order to obtain and retain their jobs and many are paid at special minimum wage rates. Many of the work centers that employ individuals with disabilities are nonprofit organizations established to provide support services and training as well as employment opportunities for these individuals. Many of these work centers were established by groups of parents of individuals with disabilities and by vocational rehabilitation specialists. Work centers receive much of their funding through state and county agencies from funds provided for support services and vocational training for individuals with disabilities. State and county agencies usually provide funds to work centers in the form of grants or reimbursements for services. To carry out Labor’s oversight of the provisions of section 14(c) of FLSA, the Secretary issued regulations that define the requirements of the law and delegated authority to WHD for the administration of the special minimum wage program. WHD staff review and approve employers’ applications for new 14(c) certificates and renewals that allow them to pay individuals with disabilities less than the federal minimum wage, which is currently $5.15 an hour. Work centers and hospitals are required to renew their 14(c) certificates every 2 years; businesses and schools must renew their 14(c) certificates annually. Because most 14(c) employers are work centers that have employed 14(c) workers for many years, most of the 14(c) certificate applications that WHD staff review are applications for renewal. 
In states that have a higher minimum wage rate than the federal minimum wage, the state rate takes precedence. Employers are required to establish the special minimum wage rate(s) for each worker they employ under the act and to show how they established these rates in the certificate application packages they submit to Labor. The process of establishing special minimum wage rates is complex. First, employers must identify the prevailing wage in their geographic area for experienced workers who do not have disabilities that affect their ability to perform the work and who perform the same or similar work. They must then measure the actual productivity of the workers for each job they perform as compared to the productivity of experienced workers who do not have disabilities. Finally, employers must calculate the special minimum wage rate by applying the worker’s productivity rate to the prevailing wage for the job and factoring in the quality of the work performed. For example, if a 14(c) worker’s productivity for a specific job is 50 percent of that of experienced workers who do not have disabilities that affect their work, and the prevailing wage paid to experienced workers for that job is $6.00 an hour, the special minimum wage rate for the 14(c) worker in performing that job would be $3.00 an hour. Workers are paid either hourly rates of pay or piece rates for the number of pieces they produce. Most service jobs are paid at an hourly rate, while most assembly work is paid at a piece rate. Employers are also required to obtain a 14(c) certificate in order to pay workers with disabilities less than the hourly wage for contracts covered under the Service Contract Act and the Walsh-Healey Public Contracts Act. These rates are often higher than the federal minimum wage, particularly for work performed under the Service Contract Act. Therefore, 14(c) workers employed under these contracts may earn more than the federal minimum wage. 
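The rate-setting steps above reduce to a simple proportional calculation. The sketch below is ours, not Labor's; the function name is illustrative, and the quality-of-work adjustment required by the regulations is omitted for simplicity:

```python
def special_minimum_wage(prevailing_wage: float, productivity: float) -> float:
    """Scale the prevailing wage for the job by the 14(c) worker's measured
    productivity relative to experienced workers without disabilities.
    (The required quality-of-work adjustment is omitted in this sketch.)"""
    return round(prevailing_wage * productivity, 2)

# Worked example from the report: 50 percent productivity against a
# $6.00-an-hour prevailing wage yields a $3.00-an-hour special minimum wage.
print(special_minimum_wage(6.00, 0.50))
```

The same arithmetic applies when the prevailing rate comes from a Service Contract Act or Walsh-Healey contract, which is why a 14(c) worker on such a contract can earn more than the federal minimum wage.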
For example, a worker who has a disability might work under a contract covered by the Service Contract Act for which the contract rate is $15.00 an hour. If a 14(c) worker is only able to perform the work at 50 percent of the productivity level of workers who do not have disabilities that affect their ability to perform the work, he or she would be paid $7.50 an hour. Labor monitors and enforces employer compliance by reviewing employers’ 14(c) certificate application packages and by conducting investigations of employers. The 14(c) certification team in WHD’s Midwest Regional Office verifies that employers have correctly computed the special minimum wages of their workers by reviewing the documentation in their 14(c) certificate application packages. WHD selects employers for 14(c) investigations either through complaints filed on behalf of workers (by the workers themselves or by their parents or guardians) or by conducting self-initiated investigations of employers selected through criteria developed by WHD officials. The WHD Regional Administrators, with guidance from WHD’s national office, set the priorities for investigations of employers. Labor’s WHD investigators are responsible for enforcing compliance with all aspects of FLSA, including the provisions of section 14(c). When Labor determines through a review of an employer’s certificate renewal application package or an investigation that an employer has underpaid its 14(c) workers, it requires the employer to compute the amount of back wages owed to these workers for a period of 2 years prior to the date of Labor’s review. In addition to monitoring and enforcing compliance through its reviews of employer’s 14(c) certificate renewal applications and investigations of employers, Labor ensures employer compliance through its training and outreach efforts for its own staff and 14(c) employers. 
Because the process of establishing special minimum wage rates is complex, Labor considers these efforts to be an important aspect of its oversight responsibilities for the special minimum wage program. The majority of 14(c) employers are nonprofit work centers established to provide support services and employment to individuals with disabilities. The centers provide jobs for 14(c) workers most often in light assembly work done by hand or in service-oriented jobs such as grounds maintenance or janitorial work. Most of these jobs are carried out under contracts with government agencies and private companies. The major sources of funds for work centers are state and county agencies and contracts. While virtually all work centers offer one or more support services designed to help 14(c) workers perform their jobs, the range of services they are able to provide depends on the availability of funding and the eligibility criteria established by the funding agencies. About 5,600 employers were paying workers with disabilities special minimum wages under certificates issued under the provisions of section 14(c) at the time of our survey in 2001 (see table 1). About 4,700 (84 percent) of these employers were work centers. Work centers employed about 95 percent of all 14(c) workers. More than 80 percent of the work centers were private, nonprofit entities; 13 percent of the work centers were state or local government organizations. Businesses accounted for 9 percent of the employers, hospitals or other residential care facilities accounted for 5 percent, and less than 2 percent were schools. The work centers primarily employed workers who have disabilities, 90 percent of whom were 14(c) workers. On average, each work center employed 86 workers at special minimum wages. The work centers mainly employed workers with mental retardation or other developmental disabilities. 
Some work centers focused on employing workers who were blind or had other visual impairments, although they comprised a relatively small number (50 work centers). Work centers offer individuals with disabilities a variety of work, most of which involves assembly or is service-related. (See table 2.) Assembly jobs generally involve uncomplicated one- or two-step processes that are mainly performed by hand. For example, 14(c) workers at a work center in Illinois that we visited assembled small plastic automobile parts, while 14(c) workers at a New York work center snapped together plastic pieces to assemble a lint remover. The service-related jobs involved basic tasks, such as mopping floors and picking up trash. For example, 14(c) workers from a California work center maintained restrooms at public beaches under contracts with local city governments. Work center managers balanced their understanding of what work was feasible for their 14(c) workers with their knowledge of the work available in the area. Work center managers also considered where jobs were located. Most jobs in assembly, production, sorting, and collating could be easily performed in the work center. However, jobs such as grounds maintenance and janitorial work had to be performed off-site. If a work center was not located within a reasonable commuting distance, work center managers might decide that these jobs were not feasible for their 14(c) workers. For example, managers at a work center in Illinois did not pursue jobs in neighboring communities that posed a difficult commute for their workers. Most work centers provided jobs through contracts with government agencies and private companies. Work center managers at some of the sites we visited told us that they were most likely to contact local companies to find jobs that could be done by their 14(c) workers. 
According to the director of a work center in Virginia, contracts with private companies, particularly for products, were often for tasks in a production process that would not have been cost-effective for the company to automate. In addition, a manager at a California work center said that several of their contracts came from small companies that were marketing new products and did not yet have enough data to know how much their production costs would be. These companies found that work center contracts could be used to do test runs of a new product at a relatively low cost. Overall, we found that 22 percent of the work centers had preferential contracts with state or local governments, and 17 percent of the work centers had contracts with federal agencies. The photos in figure 1 depict examples of the types of jobs performed by 14(c) workers and the types of products they assemble. In addition to providing employment opportunities, work centers also offer a number of support services for 14(c) workers designed to enable them to obtain and perform their jobs. Depending on the type of support services needed by the 14(c) workers, work center staff either provide the services themselves or help to obtain these services from other sources. Essentially all (99 percent) of the work centers provided or helped obtain one or more of a wide range of support services that enabled 14(c) workers to obtain or perform their jobs. (See table 3.) For example, some workers with mental retardation could not drive and were unable to use public transportation without assistance. We found that almost all (97 percent) of the work centers provided or helped obtain transportation for their workers. They also provided support services such as psychological counseling and speech therapy to help 14(c) workers function more effectively both on and off the job. 
Most work centers also provided one or more accommodations consistent with the definition of reasonable accommodation in the Americans with Disabilities Act of 1990. Our survey showed that 95 percent of the work centers provided work schedule modifications, 85 percent provided job restructuring, and 72 percent provided specialized equipment not required by workers without disabilities. In many cases, work centers accomplished these accommodations through the support services they provided to their 14(c) workers. For example, job restructuring, an example of a reasonable accommodation, could be accomplished using task adaptation, such as breaking a complex task into several small tasks performed by more than one worker, or job station adaptation, such as lowering the height of a table to accommodate someone in a wheelchair. From our site visits, we found that the state and county agencies that provided funds to the work centers to pay for support services had different levels of funding available and different eligibility criteria for these services. State policies and criteria for reimbursements and grants varied across the states we visited. According to work center managers, the availability of state grants or reimbursements for services, and the centers’ or their workers’ ability to meet state eligibility criteria for these funds dictated the type and level of support services their centers provided. For example, for one program at the work center in California we visited, the state required workers to work at least 20 hours a week, have an attendance rate of 85 percent, and have a productivity level of at least 10 percent. Another program at the work center designed to prepare workers to move from the work center to jobs in the community was limited to 30 slots because of state funding limits. To pay for their operating costs, including the provision of support services, work centers obtain funds from two primary sources, government agencies and contracts. 
From the survey, we found that, on average, nearly half (46 percent) of the funds received by work centers were grants and reimbursements from third parties—mostly state and county government agencies—primarily for the provision of support services. The other major source of funding for work centers was contracts for the production of goods and services, which accounted for about 35 percent of their funding. Figure 2 shows the sources of the funding for all work centers. Each site we visited had several sources of funding; the proportion of funds from each source varied for each location. (See table 4.) For example, while the Virginia work center received most of its funds from its production contracts, the center in Georgia received almost all of its funds from state and county agencies. According to some of the work center managers who responded to our survey and managers at the sites we visited, without the provisions of section 14(c), work centers would need to obtain additional funds to continue to operate at their current levels. Work centers' payroll expenses would increase, significantly in some cases, if they were required to pay their 14(c) workers the federal minimum wage of $5.15 an hour. For example, at one of the work centers we visited in New York, the total wages of the 14(c) workers would increase from about $77,000 to about $289,000 if the work center paid all of its current 14(c) workers at the federal minimum wage rate. (See table 5.) We also found from our site visits that the work centers' funding from production of products and services did not cover all costs associated with production. For example, one of the New York work centers obtained about $275,000 from its production contracts. This amount did not cover its costs, which, in addition to the wages of its 14(c) workers, included additional direct expenses of about $690,000 for the salaries of its supervisors and support staff and other expenses, such as the cost of materials.
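The payroll effect the work center managers described can be illustrated with a short comparison. The roster below is hypothetical (worker-level hours and rates are not in the survey data); the $5.15 rate is the federal minimum wage cited in this report:

```python
FEDERAL_MINIMUM = 5.15  # federal minimum wage at the time of the report

# Hypothetical roster: (weekly hours, current special minimum wage rate).
workers = [(20, 1.10), (25, 2.40), (15, 0.85), (30, 3.20)]

# Weekly payroll at the special minimum wage rates actually paid.
current = sum(hours * rate for hours, rate in workers)

# Weekly payroll if every worker instead earned the federal minimum wage.
at_minimum = sum(hours * FEDERAL_MINIMUM for hours, _ in workers)

print(f"Payroll at special minimum wages: ${current:.2f}")
print(f"Payroll at the federal minimum:   ${at_minimum:.2f}")
print(f"Added weekly cost:                ${at_minimum - current:.2f}")
```

Scaled over a year and a full roster, a gap of this kind is the sort of increase the New York center described (roughly $77,000 to $289,000), which its contract revenue alone would not cover.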
From the survey, we estimate that most 14(c) workers have mental retardation or another developmental disability as their primary impairment and earn very low wages. For more than half of the 14(c) workers, low productivity results in an hourly wage rate that is less than half the federal minimum wage. In addition, the majority of 14(c) workers work less than full-time. The 14(c) workers are primarily from 25 to 54 years of age and have been employed in the work centers for several years. At the sites we visited, we found that many 14(c) workers received federal cash disability benefits in addition to their earnings. We also found that the workers at the sites received nonmonetary benefits from being in a work environment, such as training designed to help them become more independent in their interactions with individuals in the community. Based on our survey and data obtained from Labor, we estimate that currently about 424,000 workers with disabilities earn special minimum wages. Over 400,000 of these 14(c) workers are employed by work centers. The remainder are employed by businesses, hospitals or other residential care facilities, or schools. Workers paid special minimum wages by work centers have a wide range of physical and mental conditions that impair their ability to be fully productive at their jobs. (See figure 3.) From the survey, we estimate that the primary impairment of nearly three-quarters of all 14(c) workers employed by work centers was mental retardation or some other developmental disability. About 46 percent of the workers had more than one disability. These impairments can affect a 14(c) worker’s productivity in several ways. Some workers with mental retardation, for example, need additional supervision to complete their tasks, something that, according to our survey, virtually all work centers provided. The level of supervision needed by workers with mental retardation, however, varies depending on the needs of the individuals. 
For example, the supervisor-to-worker ratio for most 14(c) workers at the California work center we visited was 1 to 15. However, this work center also had a special unit that provided an even higher level of supervision to those who needed it—1 supervisor to every 4 workers. Some 14(c) workers with mental retardation also require special devices that enable them to perform tasks involving measuring and counting—activities that may be difficult for those with mental retardation to perform. The work center we visited in Texas, for example, devised a wooden jig with holes drilled to a specific depth so that 14(c) workers could automatically attach brass couplings to refrigerator coils at the correct position without having to measure them. The California work center devised a counting board for some of its 14(c) workers who packaged materials. The board was divided into 12 squares so that, by placing one item in each square and putting all of the items in a package after filling up the board, 14(c) workers would know the correct number of items to put in each package without having to count them. Certain physical impairments, such as reduced visual acuity or cerebral palsy, also restrict 14(c) workers' ability to perform the tasks their jobs require. Workers with these types of impairments also receive special supports that enable them to work. For example, because some workers at an Illinois work center found it difficult to clip plastic automobile parts together using only their hands, supervisors built a lever that helped workers with less strength, or reduced manipulative ability, complete the task. At a work center in New York that primarily employed the blind, workers with limited or no vision used a wooden block as a form for folding visual testing equipment at the proper locations. The photos in figure 4 depict some of the special devices and support services that enable 14(c) workers to perform their jobs. 
The wages of 14(c) workers employed by work centers nationwide were very low. From the survey, we found that more than half of the 14(c) workers (54 percent) earned less than $2.50 an hour because the productivity levels of the workers, as reported by the work centers, were so low. In the survey, work center managers also reported that most of their 14(c) workers (70 percent) had a productivity level of less than half of that of workers without disabilities performing the same jobs. (See table 8.) The low productivity levels of 14(c) workers result in the low hourly wages they are paid. For example, the average productivity level of 14(c) workers at the sites we visited ranged from a low of 11 percent at the work center in Georgia to a high of 42 percent at the work center in Texas. The average hourly wage rates for 14(c) workers at these sites, which ranged from a low of $0.63 per hour to a high of $3.74 per hour, mirrored the productivity levels at the sites. (See figure 5.) Most 14(c) workers (86 percent) worked part-time (less than 32 hours a week). Nearly half of these individuals worked less than 20 hours per week. (See table 9.) Three-fourths of the 14(c) workers employed by work centers (75 percent) as of the date of our survey were from 25 to 54 years of age. The remaining one-fourth of the workers was evenly divided between those younger than 25 and those 55 or older. A slightly higher percentage of the workers were male (55 percent) than female (45 percent). From the survey, we estimate that more than half of 14(c) workers (55 percent) employed by work centers had worked there for 5 years or more, while we found that some 14(c) workers at the sites we visited had worked there for more than 20 years. From employers' responses to our survey, we also estimate that 13 percent of the 14(c) workers employed by work centers left the center during calendar year 2000. About 5 percent of the workers who left the center moved into jobs in the community. 
We do not, however, know whether these jobs were at special minimum wages or at or above the minimum wage. An additional 4 percent of the 14(c) workers remained at the work centers but moved from jobs that paid them special minimum wages to jobs that paid them the federal minimum wage or more. At six of the work centers we visited, at least half of their 14(c) workers received federal cash disability benefits. Depending on the site, anywhere from about half to almost all of their 14(c) workers received Social Security Disability Insurance benefits or Supplemental Security Income benefits for severe impairments that affected their ability to work. At all of these sites, the average monthly earnings of their 14(c) workers were lower than the average monthly Social Security Disability Insurance benefit amount of $787 and, at all but one site, lower than the average monthly Supplemental Security Income benefit of $412. In addition, although federal disability benefits are reduced or eliminated when beneficiaries earn more than certain amounts, most of the 14(c) workers’ earnings at each of the sites we visited were too low to significantly reduce their disability benefits. Most of these workers also qualified for health insurance (Medicaid or Medicare) linked to their disability benefits. Some 14(c) workers also received food stamps and housing subsidies. According to the work center managers and our discussions with a few 14(c) workers and parents or guardians of 14(c) workers at the sites we visited, the workers benefit from opportunities to develop self-esteem, exercise self-determination, and develop socialization skills that being in a work environment can provide. Many of the support services provided by the work centers give workers the opportunity to develop more than their vocational skills. At each of the work centers we visited, staff worked with the 14(c) workers to develop formal plans with both employment and nonwork goals. 
Employment-related goals usually involved strategies to improve productivity on the current job and included plans to achieve the next step in a career path, such as transition from the work center to work in the community. Nonwork goals involved a variety of activities. For example, the work centers in California and Texas offered classroom training in personal and social adjustment. Training focused on basic topics such as appropriate communication and social behaviors and continued through more advanced topics such as management of finances. Several of the work centers we visited also offered training designed to help 14(c) workers become more independent in their interactions with individuals in the community. For example, the work center in New York that primarily employed the blind offered training to its workers on the development of new skills and behaviors, such as problem-solving and assertiveness skills designed to help them, especially those who also had mental retardation, interact more effectively in the community. In addition, the center offered training to workers on how to shop, use banks, and eat in restaurants. The work centers we visited also focused on enabling 14(c) workers to make their own decisions about their lives, that is, to exercise self-determination. For example, the California work center competed with at least two other training and employment providers for every new client. In most cases, the potential 14(c) worker made the final choice of provider, often with the help of family members. At the California work center, the 14(c) worker was an active participant in the development of his or her individualized plan and participated in all meetings to decide the next step in the plan. The decisions about whether to move from a job at the work center to work in the community or whether to work at all were also left to many of the 14(c) workers at the work centers we visited. 
For example, at the work center in Georgia, staff helped 14(c) workers plan alternative activities when they no longer desired to work. In addition, the work center in California offered a variety of social activities to 14(c) workers who had retired. Labor’s management of the special minimum wage program is ineffective. Until recently, Labor gave the program low priority, including providing little training or guidance to its own staff or employers and conducting few self-initiated investigations of employers. Although Labor began to place more attention on the program in fiscal year 2000, the agency does not have the data it needs to manage the program and does not adequately ensure employer compliance with the requirements of the program. Labor does not have accurate data on the number of 14(c) employers and workers needed to assess the appropriate level of resources it should devote to the program, does not track the resources it devotes to overseeing the program, and does not compile information on the results of its efforts to ensure employer compliance. Labor also does not adequately ensure employer compliance with the program’s requirements because it does not systematically conduct self-initiated investigations of 14(c) employers and does not follow up when employers do not renew their 14(c) certificates. Finally, Labor does not ensure employer compliance by routinely providing guidance and training on the requirements of the special minimum wage program to its staff and 14(c) employers. Labor officials told us that they have given low priority to the special minimum wage program in past years because WHD’s resources were focused on other enforcement responsibilities, such as detecting violations of child labor laws and protecting low-wage workers in the garment industry. Enforcement was primarily limited to WHD’s reviews of employers’ 14(c) certificate applications. 
Although WHD reviewed all complaints about employers filed on behalf of 14(c) workers, the agency conducted few self-initiated investigations and there was no mandate from WHD headquarters to conduct self-initiated 14(c) investigations. In fiscal year 2000, according to WHD headquarters and regional officials, Labor began to place renewed emphasis on the program, including reinstating the 14(c) specialist positions in its regional offices, increasing training of its own staff and employers, updating the written guidance provided to its investigators, and selecting employers for self-initiated 14(c) investigations. However, despite this renewed emphasis, Labor's performance plan for fiscal year 2000 contained no mention of the special minimum wage program, although the plan contained specific goals for other WHD special enforcement programs, such as child labor and agricultural workers. In addition, Labor has not systematically reviewed the results of its increased emphasis on the program, including obtaining the data it needs to effectively manage the program or reviewing the results of its increased enforcement efforts. Labor cannot properly manage the program because it does not have accurate information on the number of employers or workers that participate in the special minimum wage program, the resources it devotes to overseeing the program, or the results of its oversight efforts, including its reviews of employers' 14(c) certificate applications and its investigations of employers. Labor is not able to provide accurate counts of the number of employers and workers participating in the special minimum wage program—the starting point for determining what resources it should allocate to the program. When asked to provide this information on employers, Labor gave us three different lists. The number of employers on these lists ranged from 4,795 to 8,493, and the number of workers ranged from 242,470 to 417,002. 
Although Labor officials were unable to explain these discrepancies, when we reviewed the information in its databases on 14(c) employers, we discovered that they contained out-of-date and duplicate information and that Labor overstated the number of 14(c) employers. For example, the databases contained information on 261 employers whose 14(c) certificates had expired between January 1, 2000, and August 31, 2000, but the database contained no indication that these certificates had been renewed. We followed up with some of these employers and found that, according to the employers, some of their certificates had actually been renewed, but Labor had not updated the information in the database. A few of the employers, however, told us they no longer employed workers at special minimum wages although Labor still counted them as current 14(c) employers. We also found, through our attempts to mail out our survey, that some employers had gone out of business and should have been deleted from Labor's list of current 14(c) employers. In addition, from the survey we found that about 8 percent of work centers and businesses with 14(c) certificates did not employ any workers at special minimum wages. Nonetheless, Labor included these employers in its count of current 14(c) employers. We also found that Labor's data on the number of 14(c) workers are inaccurate. When we compared the number of workers listed by employers on their 14(c) certificate applications to the number of workers on supplemental forms in their application packages, these numbers did not always match. These inconsistencies may have been caused by language in Labor's application form that may be confusing to employers, as we reported in a previous correspondence to Labor. For example, the form requires employers to report the number of 14(c) workers they employ in two different places on the form. 
The instructions for both items, however, are confusing and, as a result, employers may report the wrong number of workers in one or both items on the form. WHD officials told us they are in the process of revising the 14(c) certificate application. They also indicated that, to improve the accuracy of the information in their database on 14(c) employers, they are in the process of verifying its accuracy by comparing the numbers of 14(c) employers and workers in the database to the numbers in employers’ 14(c) certificate application packages (the paper files maintained by WHD’s Midwest region). WHD officials told us they planned to complete this verification process in fiscal year 2001. In addition to the lack of data on the size of the special minimum wage program, Labor officials told us they do not compile the number of staff hours devoted to it. As a result, Labor cannot determine whether it is devoting an adequate amount of staff resources to the program. During 2001, there were about 15 WHD headquarters and regional staff members assigned to the program, but about half of them worked on the program only part-time. Because WHD officials do not routinely obtain reports from WHD’s investigations database on the amount of time spent on 14(c) investigations, they were unable to tell us how much time WHD investigators responsible for conducting various types of investigations spend on 14(c) cases, even though the investigators enter the number of hours they spend on 14(c) investigations into the database. Labor does not have accurate data on the number, timeliness, or results of its reviews of employers’ 14(c) certificate applications, its primary method of ensuring employer compliance with the requirements of the special minimum wage program. Employers submit applications to WHD’s 14(c) certification team for new 14(c) certificates and to renew existing certificates. 
WHD’s 14(c) certification team reviews the paperwork submitted by employers to make sure it is complete and checks for and corrects errors in employers’ calculations of special minimum wage rates. If the 14(c) certification team detects errors in the computation of workers’ special minimum wages in employers’ renewal applications, it assesses back wages for a period of 2 years prior to the date of the application. WHD officials told us that they do not collect information on the number of reviews of 14(c) certificate applications performed by the 14(c) certification team, the number of 14(c) certificates issued, or the timeliness of the process. This is information Labor needs to properly manage the workload of the team and to ensure that all employers who are required to have a 14(c) certificate in order to pay workers special minimum wages have a current certificate. For example, during our site visits, we found that one work center had not received its new 14(c) certificate 3 months after it had applied for renewal and had not been contacted by a member of the 14(c) certification team. WHD does, however, record information on the results of its reviews of 14(c) employers’ certificate renewal applications when employers are assessed back wages, although we found some problems with the accuracy of this information as noted below. WHD staff told us that many of its reviews of 14(c) certificate renewal applications were not recorded promptly and, in our reviews of WHD’s databases, we found that information on some of these reviews had not been correctly entered into the system. Labor also does not compile information on the results of all of its reviews of employers’ 14(c) certificate renewal applications. 
According to data recorded by WHD's 14(c) certification team on its reviews of employers' 14(c) certificate renewal applications from fiscal years 1997 through 2000, WHD identified 811 instances in which employers had miscalculated the special minimum wage rates and, as a result, owed back wages to their 14(c) workers. We could not determine, however, what proportion of its reviews of employers' certificate renewal applications these 811 cases represented, because WHD does not track the total number of reviews performed by the certification team. In 42 instances, the 14(c) workers were underpaid by relatively large amounts: the back wages assessed for the 2-year period were over $200 per worker, on average. However, when we asked WHD officials for information from its investigative database on these reviews, data that they do not routinely compile, they provided us with information that indicated that many of these reviews were not recorded accurately in the database. Labor does not have accurate data on investigations conducted of 14(c) employers, another method it uses to ensure employer compliance with the requirements of the special minimum wage program. We found that WHD's database on investigations contains inaccurate information on its investigations of 14(c) employers. For example, when asked for the number of investigations conducted from fiscal years 1997 through 2000, data that Labor does not routinely compile, WHD officials gave us reports that showed that investigators completed a total of 234 14(c) investigations in that time period. However, after comparing the reports to records of the reviews of employers' 14(c) certification renewal applications, we found that 93 of the investigations listed in the database were actually reviews of 14(c) certificate renewal applications. 
In addition to not having accurate information on its compliance efforts, Labor does not track the rate at which employers incorrectly calculate special minimum wage rates and consequently underpay 14(c) workers. Labor needs this information to properly assess the level of resources it should devote to its efforts to ensure employer compliance and to evaluate the effectiveness of its oversight of the special minimum wage program. Labor does not effectively ensure employer compliance with the requirements of the special minimum wage program. Labor does not monitor employer compliance with program requirements by systematically conducting self-initiated investigations of 14(c) employers, and does not follow up with employers when they do not respond to its 14(c) certificate renewal notices. In addition, Labor provides little guidance and training to its staff and 14(c) employers on the requirements of the special minimum wage program. Despite the results of its reviews of employers' 14(c) certificate renewal applications that show that some 14(c) workers are underpaid because employers calculated their special minimum wage rates incorrectly, for several years WHD investigators only conducted 14(c) investigations when someone filed a complaint about an employer. WHD officials told us that, prior to 2000, they had not conducted self-initiated investigations of 14(c) employers for several years. In 2000, WHD began conducting self-initiated investigations as part of its renewed emphasis on the special minimum wage program. Unlike WHD's reviews of employers' 14(c) certificate renewal applications, self-initiated investigations are conducted at the employer's work site. During these investigations, Labor reviews employers' records for their 14(c) workers and verifies their measurements of workers' productivity on which the special minimum wages are based. 
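The productivity measurements Labor verifies feed directly into workers' pay. A minimal sketch of a commensurate-wage calculation of this kind follows; the prevailing wage and the simplified formula are illustrative assumptions, not Labor's prescribed method.

```python
def special_minimum_wage(prevailing_wage: float, productivity_ratio: float) -> float:
    """Hourly wage commensurate with a worker's measured productivity.

    prevailing_wage: hourly wage of experienced workers without
        disabilities doing the same work (hypothetical input).
    productivity_ratio: the worker's measured output as a fraction of
        that standard, e.g. 0.42 for 42 percent productivity.
    """
    return round(prevailing_wage * productivity_ratio, 2)

# With a hypothetical $6.00 prevailing wage, the 11-to-42 percent
# productivity range reported at the sites we visited maps to roughly:
low = special_minimum_wage(6.00, 0.11)   # $0.66 an hour
high = special_minimum_wage(6.00, 0.42)  # $2.52 an hour
```

Because the wage depends on both a prevailing-wage survey and a productivity measurement, an error in either input flows straight through to the worker's pay, which is why verifying these measurements on site matters.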
WHD officials told us they plan to conduct 70 self-initiated investigations of 14(c) employers in their Northeast Region during 2001 and use the results of these investigations to “discern compliance trends” in the special minimum wage program. According to these officials, they plan to investigate a nonrandom sample of five employers in each of the 14 districts in the region. In 2002, WHD plans to conduct 14(c) investigations in other regions, but it has developed no guidance for the regions on how to sample employers, how many to sample, how often these investigations will occur, or how to use the results of the investigations to calculate the employer compliance rate. Because Labor is not selecting employers for 14(c) investigations on a random basis, it will not be able to use the results of these investigations to estimate the rate of compliance for employers. In addition, because Labor currently has no plans to periodically measure employer compliance through self-initiated 14(c) investigations, it will not be able to examine trends in compliance over time. An indication that employers are not complying with the requirements of the special minimum wage program is their failure to renew their 14(c) certificates. WHD sends a renewal notice to employers about 2 months prior to the date their 14(c) certificates expire to remind them to submit an application for renewal. However, WHD does not follow up with employers when they do not respond to the renewal notices to make sure that they are not paying workers with disabilities special minimum wages without the authority to do so. WHD officials were not able to tell us how many employers fail to renew their 14(c) certificates because they do not track the number of employers that do not respond to the renewal notices. They told us they planned to develop this capability but had not done so at the time of our review. 
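The sampling point above can be made concrete. Assuming a simple random sample of employers, a compliance rate and an approximate confidence interval could be computed as follows; the sample counts are hypothetical.

```python
import math

def compliance_estimate(sample_size: int, compliant: int, z: float = 1.96):
    """Estimate an employer compliance rate from a simple random sample,
    with an approximate 95 percent confidence interval (normal
    approximation). The estimate is valid only when employers are
    selected at random; a judgmental sample of five employers per
    district cannot support a nationwide rate.
    """
    p = compliant / sample_size
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical: 52 of 70 randomly sampled employers found in compliance.
rate, lower, upper = compliance_estimate(70, 52)
```

Repeating such a randomly sampled investigation periodically would also let Labor examine trends in compliance over time, which a one-time nonrandom selection cannot.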
Until recently, Labor provided no formal training or up-to-date written guidance to its staff to prepare them to detect and prevent noncompliance, such as that caused by employer errors in the calculation of special minimum wage rates. The 14(c) certification team members who review employers’ application packages and issue the 14(c) certificates receive no formal training on the requirements of the special minimum wage program. Similarly, until fiscal year 2001, WHD investigators who conduct investigations of employer compliance with various provisions of FLSA received no formal training on 14(c) investigations. In addition, the 14(c) certification team and WHD investigators were working with guidance that had not been updated for many years. Until very recently, most of the sections of the Field Operations Handbook that described the requirements of the provisions of section 14(c) had not been revised since 1980, and portions of it were much older; the oldest section was last updated in 1963. Labor updated these sections of the handbook and officially issued the new version in June 2001. To address the lack of training for investigators, WHD’s regional staff recently began developing and scheduling training sessions on 14(c) investigations. For example, WHD’s Northeast Regional Office developed a 1-day course on 14(c) investigations that it plans to use to train investigators and had begun conducting training sessions at the time of our review. In addition, WHD headquarters officials told us they were considering incorporating training on 14(c) investigations into the basic training curriculum for investigators, but they had not done so as of the date of our review. Finally, several employers and consultants reported that employers had received inconsistent guidance from WHD staff on the provisions of section 14(c). 
For example, many employers received inconsistent guidance on the allowance factor for “personal, fatigue and delay” time used in computing piece rates for 14(c) workers. Although the employers had used one allowance factor for several years, many of them, starting in 2000, were told by WHD 14(c) certification team staff that the allowance factor they had been using was incorrect and, as a result, the employers owed back wages to their 14(c) workers. We asked staff in WHD’s Midwest Regional Office whether this was a change in policy and were told that this was not a change in policy but that its staff had not been properly applying the policy in previous years. In addition to not providing training to its own staff on the requirements of the special minimum wage program, Labor has provided little written guidance or outreach to 14(c) employers, although Labor considers this an important part of its efforts to ensure employer compliance. Although some guidance is available on WHD’s Web site—such as a fact sheet on the employment of workers with disabilities at special minimum wages— the guidance does not provide specific information, such as how to compute special minimum wages. Although WHD officials prepared a computer-based presentation for employers that contains specific guidance on how to compute special minimum wages and prepare 14(c) certificate application packages, only two of the employer groups that we contacted had received a copy of it. WHD officials also told us that they had stopped distributing copies of the computer presentation to employers because the presentation needs to be updated to match the information in the revised Field Operations Handbook. The guidance on the special minimum wage program Labor provides to employers is not sufficient. Reviews of employers’ 14(c) certificate application renewal packages by WHD’s 14(c) certification team showed patterns of errors. 
These errors included incorrect piece rate calculations, use of entry-level wages to determine prevailing wages, rounding errors, and failure to consider the quality of the work in computing special minimum wages. In addition, in the survey, 55 percent of the work center managers reported that they either received no guidance from Labor or considered the guidance they received on some requirements of the special wage program to be inadequate. Because Labor has provided little written guidance to 14(c) employers, several employer groups and consultants developed their own guidance on the requirements of the provisions of section 14(c). For example, we obtained copies of written guidance developed by NISH, the National Industries for the Blind, Goodwill Industries, and two consultants, including handbooks for employers on how to prepare their 14(c) certificate application packages. Recently, Labor developed plans to improve its written guidance to 14(c) employers. For example, WHD headquarters officials said that they plan to release the newly revised Field Operations Handbook to employers, possibly by posting it on WHD’s Web site, although they had not done so as of the date of our review. The officials also said that they have several other initiatives to increase technical assistance to employers, such as establishing a Web site for the special minimum wage program. In addition to providing little written guidance to employers, Labor has done little outreach to employers to inform them about the requirements of the special minimum wage program. For several years, Labor provided no outreach to 14(c) employers. Staff at one of the work centers we visited told us that the regional 14(c) specialist in Atlanta used to provide training at conferences for 14(c) employers on the requirements of the special minimum wage program. However, after WHD eliminated the regional 14(c) specialist positions in 1996, this outreach to 14(c) employers ended. 
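The piece-rate errors described above stem from a calculation that combines a time study with an allowance factor. The sketch below shows the general shape of such a computation; the allowance treatment and all figures are illustrative assumptions, not the formula Labor prescribes.

```python
def piece_rate(prevailing_hourly_wage: float,
               units_per_productive_hour: float,
               pfd_allowance: float) -> float:
    """Per-unit piece rate built from a time study.

    units_per_productive_hour: units an experienced worker without a
        disability completes in an hour of actual work (time study).
    pfd_allowance: fraction of each hour set aside for personal,
        fatigue, and delay time, e.g. 0.15 for 9 minutes per hour.

    Reducing the hourly standard by the allowance raises the piece
    rate, so applying the wrong allowance factor misprices every unit
    and under- or overpays every piece-rate worker.
    """
    standard_units_per_hour = units_per_productive_hour * (1 - pfd_allowance)
    return round(prevailing_hourly_wage / standard_units_per_hour, 4)

# Hypothetical: $6.00 prevailing wage, 50 units per productive hour.
rate_15 = piece_rate(6.00, 50, 0.15)  # with a 15 percent allowance
rate_10 = piece_rate(6.00, 50, 0.10)  # a smaller allowance lowers the rate
```

The sensitivity of the rate to the allowance factor illustrates why the inconsistent guidance employers reported receiving on that factor could result in back wage assessments.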
WHD officials told us that they have recently improved their efforts to provide outreach to employers, including reinstating the regional 14(c) specialist positions in 2000. Some of the regional 14(c) specialists have recently begun making presentations to employer groups in their regions. Some employers we spoke with confirmed that Labor had contacted them recently to offer technical assistance. Nationwide, the special minimum wage program provides employment opportunities to over 400,000 workers with disabilities who are not fully productive on the job. The vast majority of these workers are employed by work centers that also offer them a range of support services designed to help them perform their jobs and function more independently in the community. Virtually all work centers that employ 14(c) workers also are nonprofit. Consequently, if these centers were required to pay their 14(c) workers the federal minimum wage, it is likely that the funds they currently receive from contracts that generate jobs for these workers and from other sources would not cover the increase in their payroll costs. Despite the benefits 14(c) workers may receive from the program, calculating special minimum wage rates for workers with disabilities is a complicated process that is prone to error. As a result, Labor’s oversight of the special minimum wage program is important in ensuring that 14(c) workers are not underpaid. Labor is not doing all it can, however, to provide this oversight, and Labor officials acknowledge that the special minimum wage program was given low priority in the past. While the agency is beginning to increase the resources it devotes to the program, it is doing so without adequately monitoring the effectiveness of its efforts. Labor does not know the program’s precise size, the resources currently devoted to it, the rate at which employers comply with program requirements, or the timeliness or results of its oversight activities. 
Without this information, Labor cannot be sure that it is giving the program the appropriate priority or gauge the effectiveness of its efforts to ensure employer compliance. Labor's oversight of the special minimum wage program has consisted primarily of reviewing employers' 14(c) certificate renewal applications and investigating complaints rather than systematically selecting employers for investigation. Labor also has done little to ensure that employers whose 14(c) certificates have expired do not continue to pay workers special minimum wages or to prevent errors in calculating special minimum wage rates by routinely providing training and guidance to its staff and 14(c) employers.

In order to obtain the data needed to properly manage the special minimum wage program, we recommend that the Secretary of Labor implement the following:

Improve the accuracy of its data on the number of 14(c) employers and workers by (1) deleting out-of-date and duplicate records in its database, (2) continuing to verify the accuracy of its database by periodically comparing it to information in Labor's paper files and correcting any discrepancies, (3) identifying employers that indicate on their 14(c) certificate applications that they do not intend to employ any workers at special minimum wages and counting them separately from employers that do, and (4) implementing the suggestions in our April 6, 2001, letter to the Director of the Office of Enforcement Policy, Wage and Hour Division, for improving the 14(c) certificate application form. 
- Track the number of staff hours that WHD headquarters, 14(c) certification team members, 14(c) regional specialists, and investigators devote to managing the special minimum wage program, reviewing applications for new and renewed 14(c) certificates, investigating complaints related to special minimum wages, conducting self-initiated investigations of 14(c) employers, and performing other tasks aimed at ensuring compliance with the requirements of the special minimum wage program, and use this information to manage the program.
- Collect and compile data on the number, timeliness, and results of WHD’s reviews of employers’ 14(c) certificate applications, and use this information to set performance standards for the timeliness of this process and to determine the appropriate level of resources to allocate to the special minimum wage program.
- Using the results of its reviews of 14(c) certificate applications and its investigations of 14(c) employers conducted in response to complaints, estimate the rate at which employers miscalculate 14(c) workers’ special minimum wage rates, and use this information to determine the appropriate level of resources to allocate to oversight of the special minimum wage program.

In order to ensure employer compliance with the requirements of the special minimum wage program, we recommend that the Secretary of Labor carry out the following actions:

- Conduct self-initiated investigations of a randomly selected sample of 14(c) employers in all regions and use the results to estimate the rate of employer compliance nationwide. After initially estimating the employer compliance rate, Labor should continue to systematically conduct self-initiated investigations of employers as indicated by the results of its compliance efforts, including its reviews of employers’ 14(c) certificate applications and its investigations.
- Follow up with employers that do not respond to 14(c) certificate renewal notices to ensure that they do not pay special minimum wages to their workers with disabilities without authorization, and use the information obtained from its follow-up efforts on employers that no longer have 14(c) certificates to update the database on 14(c) employers.
- Train staff in all of its regions on the requirements of the special minimum wage program contained in the newly revised Field Operations Handbook, and incorporate this training into its standard curriculum for investigators.
- Post the revised sections of the Field Operations Handbook that relate to the special minimum wage program on Labor’s Web site so that they are available to employers.
- Regularly conduct outreach sessions for employers in each region on the requirements of the special minimum wage program, with special emphasis on correcting errors identified in WHD’s reviews of employers’ 14(c) certificate renewal applications and investigations of employers.

We provided a draft of this report to Labor for review and comment. Labor’s comments are contained in appendix V. Labor acknowledged that, in the past, it may not have given sufficient priority to enforcing the provisions of section 14(c) of FLSA, and it generally supported our recommendations, noting actions it is taking to implement them. While we commend Labor’s decision to begin placing a higher priority on the special minimum wage program and its efforts to improve the management of the program and ensure compliance with the requirements of the provisions of section 14(c), some of its actions fall short of our recommendations. Specifically, in its efforts to improve its information on 14(c) employers and workers, Labor stated that it intends to correct all database errors by September 30, 2001, and build safeguards into the system to maintain the accuracy and integrity of the data.
The agency did not, however, specify what these safeguards would be, or whether it would periodically compare the information in its database to the paper files on 14(c) employers as we recommended. Labor also noted that it had revised the 14(c) certificate application form to include the suggestions contained in our letter dated April 6, 2001. Although Labor made some changes to the form in response to a draft of the letter that we provided to the agency in February 2001, none of the suggestions contained in the final letter were incorporated into the revised 14(c) certificate application form that Labor sent to the Office of Management and Budget for approval. We understand Labor’s concern that excluding some employers with 14(c) certificates from its count of employers that participate in the special minimum wage program ignores the fact that there are legitimate reasons why employers may not continuously employ workers at special minimum wages. Including all employers with 14(c) certificates, however, particularly those that do not intend to employ workers at special minimum wages, is misleading because it overstates the number of employers that participate in the program and the accompanying resources needed to ensure that employers are correctly computing special minimum wages for their workers. Therefore, we revised our recommendation to state that Labor should distinguish employers that indicate on their 14(c) certificate applications that they do not intend to employ workers at special minimum wages from those that do, and count the two groups separately. In response to our recommendation that Labor track the number of hours that WHD staff devote to the special minimum wage program, Labor stated that it is instituting a process for reporting time spent on the program by non-investigative staff and noted that it records the number of staff hours spent by WHD investigators.
However, as noted in the report, WHD’s managers of the special minimum wage program do not use this information to manage the program. Therefore, we revised our recommendation to specify that, in addition to tracking the number of staff hours that WHD staff devote to the special minimum wage program, Labor should use this information to manage the program. In response to our recommendation that Labor use the results of its reviews of 14(c) certificate applications and investigations of complaints about 14(c) employers to estimate the rate at which employers miscalculate special minimum wage rates, and consider this rate in making resource allocation decisions, Labor stated that the 14(c) certification team shares information regarding compliance determinations made during the certification process with regional and district staff. Labor gave no indication, however, that it plans to use the results of either its reviews of employers’ certificate applications or its investigations of complaints about 14(c) employers to compute the rate at which employers miscalculate special minimum wages as we recommended. Moreover, Labor’s plan to use the results of its current self-initiated investigations of selected individual 14(c) employers to estimate the rate of employer compliance, rather than using a random sample of employers, as we recommended, is inadequate. Although Labor is conducting self-initiated investigations of 14(c) employers in one region and in each of several district offices, the employers investigated are not selected at random and, as a result, cannot be considered representative of 14(c) employers, in general, in any of those areas, much less the nation, or provide a credible estimate of the extent of noncompliance. We support the emphasis Labor has begun to place on preventing violations of special minimum wage program requirements through increased training of investigators and more concerted outreach efforts to employers.
However, we urge Labor to implement our recommendation to incorporate formal training on the requirements of the special minimum wage program into WHD’s standard training curriculum for investigators rather than simply “considering exposing Investigators to the Section 14(c) program during the Basic II Investigator Training Course” because it is unclear how this will provide investigators with the training needed to conduct investigations of 14(c) employers. Copies of this report are being sent to the Chairman, Subcommittee on Workforce Protections, House Committee on Education and the Workforce; the Secretary of Labor; appropriate congressional committees; and other interested parties. The report is also available on GAO’s home page at http://www.gao.gov. If you have any questions concerning this report, please call me at (202) 512-7215. Other contacts and staff acknowledgments are listed in appendix VI. After determining that information on 14(c) employers and workers needed to meet the objectives of our review was not readily available from Labor or any other source, we elected to survey employers nationwide and conduct site visits of a few employers. To identify employers that employ workers with disabilities at special minimum wage rates under the provisions of section 14(c) of the Fair Labor Standards Act (FLSA), we asked Labor to provide us with information on all employers with current 14(c) certificates. Labor provided us with two databases that contained information on employers with 14(c) certificates. We used information from both databases because neither database contained all of the information we needed. We combined the two databases and eliminated records with duplicate certificate numbers and records with certificates that expired prior to January 1, 2000.
We then verified the accuracy of this information by comparing the database records for employers in four states to the paper records maintained by the Midwest Regional Office of Labor’s Wage and Hour Division (WHD). Our analysis showed that the combined databases provided information that was sufficiently accurate to allow us to select a statistically valid sample of employers with current 14(c) certificates to receive our survey. We divided the combined data into four groups using the certificate number and another data element that identified the type of employer. These four groups represented the four types of employers: (1) work centers, (2) businesses, (3) hospitals and other residential care facilities, and (4) schools. We selected our samples only from the first two groups: work centers and businesses. We did not select any of the hospitals or schools because they employ only a small number of workers under the provisions of section 14(c) and because they are not typical of most 14(c) employers. There were a total of 5,351 work centers and 729 businesses. Because of the special interest of our congressional requesters in how the special minimum wage program applies to individuals with visual impairments, we divided the 5,351 work centers into two subgroups: one for work centers that primarily employed 14(c) workers who are blind (visually impaired) and one for all other work centers. We selected work centers for the first subgroup by identifying work centers that either had the words “blind” or “visual” in the name of the work center or were listed in Labor’s database as having mainly workers whose predominant impairment was a “visual impairment.” We found 77 work centers that met these criteria. After deleting these work centers in the first subgroup, the second subgroup (all other work centers) contained 5,274 work centers. We drew a random sample from the second subgroup of 5,274 work centers and from the entire group of 729 businesses.
Because there were so few work centers for the blind, we selected all 77 of them for our survey. Each sample represented the entire population of work centers and businesses with current 14(c) certificates in calendar year 2000 (all work centers and businesses authorized by Labor to employ workers with disabilities at special minimum wages). After selecting our sample, we found that Labor’s databases contained several duplicate records that had not been identified previously, and we deleted these records from our counts for each group of employers and from the sample. For the subgroup of all other work centers, we found that Labor’s databases contained 66 duplicate records (none that were included in the sample); for the subgroup of work centers for the blind, we found 1 duplicate record; and, for the businesses, we found 21 duplicates (12 of these records were included in the sample). Although we deleted the duplicate records, we did not redraw our sample because we determined that the adjusted totals did not affect the sample sizes. In addition, we found from our survey results that some of the businesses and one of the work centers had gone out of business and that Labor had incorrectly categorized some of the employers. We found that nine of the businesses and one of the work centers from the subgroup of all other work centers had gone out of business. We also found that one of the businesses and two of the work centers (one work center for the blind and one from the subgroup of all other work centers) had been incorrectly categorized by type of employer. These errors indicated that the size of the populations for both work centers and businesses was overstated.
Therefore, we adjusted our numbers by making the assumption that, if we had contacted all work centers and businesses to which Labor had issued 14(c) certificates, we would have found additional instances in which the employers had gone out of business or in which Labor had incorrectly categorized them. We used the proportion of the initial number of work centers and businesses found to be out of business or incorrectly categorized to estimate the total number of work centers and businesses on Labor’s list that were out of business or incorrectly categorized. We eliminated this estimated number from each group of employers. We adjusted the total numbers for each group and our samples to delete the duplicate records, work centers and businesses that were no longer in business, and those that were incorrectly categorized. After making these adjustments, the total number of all other work centers was 5,189 and the sample size for this group was 551. We received 443 responses for this group, a response rate of 80 percent. For the work centers for the blind, the adjusted total was 75 work centers. We sent out surveys to all of these work centers and received 63 responses, a response rate of 84 percent. For the businesses, the adjusted total was 690 and the sample size was 403. We received 284 responses from the businesses, a response rate of 71 percent. In the survey, we asked work center and business managers for information about their facility and the workers they employed under their 14(c) certificates. We asked them to fill out the survey only for the certificate selected (the certificate number shown on the mailing label), not for any other facilities they managed, if any. We mailed the survey to each work center and business selected to the address listed in Labor’s databases.
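The population adjustment described above amounts to deducting known duplicate records and then projecting the ineligibility rate observed among contacted employers onto the rest of the list. A minimal sketch, in which the function name and the sample figures are illustrative rather than the report’s actual computation:

```python
def adjusted_population(listed, duplicates, contacted, ineligible):
    """Estimate the eligible population: remove known duplicate records,
    then scale the rate of ineligible employers (out of business or
    incorrectly categorized) observed among those contacted up to the
    whole remaining list."""
    remaining = listed - duplicates
    ineligible_rate = ineligible / contacted  # proportion found ineligible
    return round(remaining * (1 - ineligible_rate))
```

For example, a list of 1,000 certificates with 10 known duplicates, in which 2 of 100 contacted employers prove ineligible, yields an adjusted population of 970.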
In some cases, the work center or business gave the survey to another organization, such as a parent organization or work center responsible for completing the employer’s 14(c) certificate application package. See appendix II for a copy of the survey sent to the work centers. One of the objectives of our survey was to obtain a variety of information on the characteristics of 14(c) workers. During our pre-tests of the survey, however, we found that some work center managers had difficulty providing the specific information we requested, such as the number of 14(c) workers they employed at various wage levels. Work center managers told us that, while this information was available in each individual worker’s record, extracting and summarizing it specifically for 14(c) workers would be very time-consuming and would discourage them from responding to the survey. As a result, rather than asking work center managers to provide the precise number of 14(c) workers with a specific characteristic, we asked them to estimate the percentage of their 14(c) workers with the characteristic. We asked them to provide their estimates by checking one of six boxes with the following labels: “None (0%),” “Few (1-19%),” “Some (20-39%),” “About half (40-59%),” “Most (60-79%),” or “All or nearly all (80-100%).” To provide an estimate of the total number of 14(c) workers at each work center with a specific characteristic, it was necessary for us to convert each percentage range estimate provided by the work center manager into a single percentage. To do so, we began by assigning a value to each estimate in the midpoint of the range. For example, for an estimate of “Some (20-39%),” we assigned a value of 30 percent. Then, for each of the eight questions asked in this manner, we summed the midpoint percentage values to determine whether they totaled 100 percent, thus accounting for all 14(c) workers employed at the center. 
If the values totaled more than 100 percent, we reduced each of the individual midpoint percentage values by the percentage by which the total would have to have been reduced in order to total 100 percent. For example, if the midpoint percentage values we assigned totaled 130 percent, this means that the total would have to be reduced by 30 percentage points, or 23 percent (30 divided by 130), in order to reach 100 percent. In this case, we reduced each midpoint percentage value by 23 percent. Conversely, if the midpoint percentage values we assigned totaled less than 100 percent, we increased each value by the percentage by which the total would have to have been increased in order to reach 100 percent. Thus, using this method, we arrived at a single percentage estimate for each of the eight questions on workers’ characteristics posed in this manner. For each question, we converted each percentage estimate into an estimate of the number of 14(c) workers with the specified characteristic by multiplying the percentage estimate by the total number of workers at the center. Because of the manner in which we estimated the number of 14(c) workers with various employment and personal characteristics, the estimates should not be viewed as highly precise. They are much less precise than estimates based on accurate counts of 14(c) workers with the specified characteristics at each work center sampled. As stated previously, however, based on our experience during the pre-tests of the survey, we believed that it would not have been feasible to obtain such counts for the work centers. The businesses, however, were able to provide this information because they had so few 14(c) workers (three workers, on average) that they did not have difficulty providing this information. 
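The rescaling procedure described above can be sketched as follows. The text confirms only the 30-percent midpoint for the “Some” category; the other midpoint values, and the function itself, are illustrative assumptions:

```python
# Midpoint assigned to each response category. The report states only the
# 30% midpoint for "Some"; the remaining values are assumed analogous midpoints.
MIDPOINTS = {
    "None (0%)": 0.00,
    "Few (1-19%)": 0.10,
    "Some (20-39%)": 0.30,
    "About half (40-59%)": 0.50,
    "Most (60-79%)": 0.70,
    "All or nearly all (80-100%)": 0.90,
}

def estimate_worker_counts(responses, total_workers):
    """Rescale the assigned midpoints so they sum to 100 percent, then
    convert each rescaled share into an estimated count of 14(c) workers."""
    mids = [MIDPOINTS[r] for r in responses]
    total = sum(mids)
    shares = [m / total for m in mids] if total else mids
    return [round(share * total_workers) for share in shares]
```

Dividing each midpoint by the total is equivalent to the proportional reduction (or increase) described in the text: midpoints summing to 130 percent are each reduced by 23 percent.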
Thus, our estimates of the number of 14(c) workers employed by businesses with various employment and personal characteristics should be considered much more precise than those related to 14(c) workers employed by work centers. We also compared employers’ responses to the survey by testing them to determine whether there were any significant differences. We compared the two subgroups of work centers—work centers for the blind and all other work centers—and compared all work centers and businesses. We calculated the estimates included in this report and in appendixes III and IV using only the number of cases in which there was a usable response to a question; we did not include nonresponses in our calculations. Because the estimates we reported from the survey were based on samples of 14(c) certificates, a margin of imprecision surrounds them. This imprecision is usually expressed as a sampling error at a given confidence level. We calculated sampling errors for estimates based on our survey at the 95-percent confidence level. The sampling errors for percentage estimates cited in this report varied but did not exceed plus or minus 5 percentage points, unless otherwise noted. The sampling errors for our estimate of the number of work centers and businesses that employ 14(c) workers did not exceed plus or minus 138 and 27, respectively. The sampling errors for our estimate of 14(c) workers employed by work centers and businesses did not exceed plus or minus 46,619 and 258, respectively. We used the data on 14(c) employers from which the survey samples were drawn to select the eight sites visited. We selected one work center each in California, Georgia, Illinois, Texas, and Virginia; two work centers in New York; and one business in California. We selected the sites on the basis of their geographic location, the predominant impairment of the facilities’ workers, and the number of workers paid special minimum wages.
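The report does not state the variance formulas behind the sampling errors cited above; a standard approximation for the 95-percent sampling error of a percentage estimate from a simple random sample, with a finite population correction, is sketched below. The function name is an illustrative assumption:

```python
import math

def sampling_error_95(p, n, N):
    """Approximate 95-percent-confidence sampling error, in percentage
    points, for a proportion p estimated from a simple random sample of
    size n drawn without replacement from a population of size N."""
    fpc = (N - n) / (N - 1)                # finite population correction
    se = math.sqrt(p * (1 - p) / n * fpc)  # standard error of the proportion
    return 1.96 * se * 100                 # half-width at 95% confidence
```

For the 443 work-center responses drawn from a population of 5,189, a 50-percent estimate carries a sampling error of roughly plus or minus 4.5 percentage points, consistent with the 5-point bound cited above.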
To ensure geographic diversity, we selected at least one site in each of Labor’s five regions. In each of the regions, we selected sites that were representative of states with the largest number of 14(c) employers. We also took into consideration the costs associated with visiting each potential site. Because our preliminary analysis of the data showed that the majority of 14(c) employers were work centers, we selected primarily work center sites. We visited seven work centers and one business. Our analysis also showed that most work centers employed 14(c) workers whose primary impairment was mental retardation or other developmental disability. Accordingly, five of the seven work centers we selected primarily employed workers with mental retardation. In addition, because of the interest of our congressional requesters in the special minimum wage program as it relates to workers with visual impairments, we selected one work center that primarily employed workers with visual impairments. Finally, we selected one work center that primarily employed 14(c) workers with mental illness because this group was the second most frequently found in work centers. Additional considerations for our site visit selection were the number of 14(c) workers and the type of work performed. From the information in Labor’s databases on the number of 14(c) workers employed by each work center and business, we determined that the median number of 14(c) workers at each work center was approximately 65 workers. We selected five of the seven work centers, in part, because the data showed that they employed about this number of 14(c) workers. The other two work centers employed a much larger number of workers. During our preliminary interviews, we learned that work centers provide jobs in both production and service-related work for their 14(c) workers. Therefore, we sought a mix of these types of jobs in selecting our sites. 
Because the Javits-Wagner-O’Day program is one of the sources of contracts for this work, we selected two sites that had contracts under this program, one site that produced products and one that provided service-related work. Because of the special interest of our congressional requesters in work centers that primarily employ 14(c) workers with visual impairments (work centers for the blind), we analyzed their responses separately and compared them with those of all other work centers that employ 14(c) workers. The following tables provide selected statistics that compare data on work centers for the blind with data on all other work centers. We determined whether there was a statistically significant difference between the responses for work centers for the blind and all other work centers by comparing each category with the rest of the categories in the table combined. All of the cited differences are statistically significant unless otherwise noted. The sampling errors for the data in this appendix do not exceed plus or minus 6 percentage points. Work centers for the blind represent about 1 percent of all work centers, and the centers employ less than 1 percent of all 14(c) workers in work centers. The proportion of 14(c) workers to total workers at work centers for the blind was much lower than at other work centers (table 10). Nearly 80 percent of the 14(c) workers at work centers for the blind had a visual impairment as their primary impairment, compared with less than 4 percent at all other work centers. A much higher percentage of 14(c) workers at work centers for the blind had more than one impairment that limited their productivity—70 percent, compared with about 46 percent at all other work centers (table 11). Work centers for the blind provided jobs more often in assembly work or production of a product and much less often in service jobs than other work centers. 
To provide these job opportunities, work centers for the blind were more likely to rely on contracts for products with federal agencies than other work centers and were much more likely to rely on preferential contracts with state or local agencies than other work centers (table 12). Although work centers for the blind provided a higher proportion of jobs that paid a prevailing wage of less than $7.00 per hour than other work centers, a much higher proportion of these 14(c) workers earned $2.50 or more per hour. The 14(c) workers in work centers for the blind also had higher productivity levels and worked a greater number of hours each week than 14(c) workers at other work centers. Moreover, 14(c) workers at work centers for the blind tended to be older, and a greater proportion of them had worked 5 years or more for their current employer than 14(c) workers at other work centers (table 13). Although the majority of 14(c) employers are work centers, we also surveyed a random sample of businesses that employ workers at special minimum wage rates under the provisions of section 14(c) of FLSA. The following tables provide selected statistics that compare data for work centers that employ 14(c) workers with data for businesses that employ 14(c) workers. We reported only the data elements for which the differences between work centers and businesses were statistically significant; all the differences in the tables were statistically significant unless otherwise noted. The sampling errors for work centers did not exceed plus or minus 5 percentage points, and for businesses they did not exceed plus or minus 10 percentage points, unless otherwise noted. Approximately 10 times as many work centers employed 14(c) workers as businesses. Businesses, on average, employed 3 workers at special minimum wage rates, while work centers employed 86 workers (table 14). Businesses were much less likely than work centers to provide work opportunities in assembly and production jobs.
The businesses were also less likely to provide accommodations to 14(c) workers to help them perform their jobs, although this difference may relate to differences in the types of work provided (table 15). A greater proportion of 14(c) workers employed by businesses than by work centers earned $2.50 or more per hour, and their productivity levels, in general, were higher than those of 14(c) workers employed by work centers. However, 14(c) workers in businesses tended to work fewer hours, with nearly 70 percent working less than 20 hours a week as compared with 45 percent of the 14(c) workers in work centers (table 16). A higher proportion of 14(c) workers employed by businesses than by work centers had mental retardation or another developmental disability as their primary impairment, while the primary impairment of a lower proportion of 14(c) workers employed by businesses was mental illness. Moreover, a lower percentage of 14(c) workers in businesses had more than one impairment that limited their productivity than workers employed by work centers. The 14(c) workers employed by businesses tended to be younger and had not worked as long for their current employer as workers employed by work centers (table 17). Other major contributors to this report are Beverly A. Crawford, Angela A. Miles, Katherine M. Raheb, Ellen L. Soltow, Linda W. Stokes, Ann T. Walker, Joel I. Grossman, Barbara W. Alsip, Corinna A. Nicolaou, and James P. Wright.
Based on data reported by employers on the productivity of their disabled workers, it is estimated that 70 percent of the workers are less than half as productive as workers without disabilities performing the same jobs. Labor has not effectively managed the special minimum wage program to ensure that disabled workers receive the correct wages because, according to WHD officials, the agency placed a low priority on the program in past years.
The Army plans to invest about $11 billion developing and procuring the Crusader, an automated, next generation field artillery system. To date, the program has spent about $1.7 billion in development costs. It plans to procure 482 Crusader systems—each system consisting of a self-propelled 155-millimeter howitzer and a resupply vehicle. The Army is developing two different resupply vehicles—one with tracks and one with wheels—and plans to procure 241 of each type. The purpose of the Crusader system is to overcome threats from enemy artillery and reconnaissance or surveillance systems as well as to have the mobility needed to keep up with Army tanks and fighting vehicles. Figure 1 shows the planned Crusader howitzer, figure 2 the planned tracked resupply vehicle, and figure 3 the planned wheeled resupply vehicle. The Army restructured the Crusader program in January 2000 to align Crusader’s design with the Army’s transformation to a lighter force. The Army’s transformation will affect all aspects of Army organization, training, doctrine, leadership, and strategic plans as well as the types of equipment and technology the Army acquires. The Army expects the transformation to be at least a 30-year process and has not estimated its full cost. The centerpiece of the lighter, more deployable future force is the Future Combat Systems. The Future Combat Systems concept is a system of ground and air, manned and unmanned weapon systems, each under 20 tons, that is planned to replace most, if not all, of the Army’s ground combat systems without a loss in lethality and survivability. Artillery systems are among those to be replaced. The Army expects the Crusader system to fill the existing gap in artillery capabilities until it is replaced by the Future Combat Systems.
In keeping with the transformation philosophy of lightweight vehicles and ease of deployability, the Army is redesigning Crusader to make it lighter and more deployable, with the goal of reducing the weight of the self-propelled howitzer and tracked resupply vehicle from about 60 tons to about 40 tons each. Program officials said that a lighter system would enhance operational flexibility in employing Crusader in support of any operation. The Crusader is currently in the program definition and risk reduction phase of its development program. In April 2003, the program is scheduled for a milestone B review to determine whether it is ready to enter its system development and demonstration phase. Milestone B is the point at which DOD decides whether to commit major resources to develop and design the system and to demonstrate its integration, interoperability, and utility. The milestone marks the start of the program’s product development. The Army plans to deliver the first full Crusader prototype system in October 2004, followed by a low-rate initial production decision in February 2006, and initial system fielding in April 2008. Based on current Army plans, the Army will begin the Crusader’s product development in April 2003, before maturing critical Crusader technologies to a level considered low risk relative to best practices. These risks relate less to whether these technologies can be matured than to how much time and money it will take to mature them. If, after starting product development, the Crusader technologies do not mature on schedule and instead cause delays, the Army may spend more and take longer to develop, produce, and field the Crusader system. Crusader performance goals may also be at risk. On the other hand, the Army has made improvements to the management of the Crusader software development process. The maturity of a program’s technologies at the start of product development is a good predictor of that program’s future performance.
Our past reviews of programs incorporating technologies into new products and weapon systems showed that they were more likely to meet product objectives when the technologies were matured before product development started. For example, the Ford Motor Company’s practice of demonstrating new technologies in driving conditions before they are included in a new product is essential to ensuring that the new product can be developed on time and within budget. Similarly, we have found that the early demonstration of propulsion and water-planing technologies, essential to the performance of the Marine Corps’ Advanced Amphibious Assault Vehicle, has been instrumental to that program’s staying within 15 percent of cost and schedule estimates. Conversely, cost, schedule, and performance problems were more likely to occur when programs started with technologies at lower readiness levels. For example, the enabling technologies for the Army’s Brilliant Anti-Armor Submunition program were very immature at the start of the program, and their delays became major contributors to the program’s subsequent 88-percent cost growth and 62-percent schedule slippage. Separating technology development from product development into two distinct program phases is a best practice of both successful commercial and defense programs. This entails demonstrating all critical technologies at the component or subsystem level in an operational environment during technology development, prior to committing major funding to product development. Under this practice, the critical technologies would be demonstrated in component or subsystem prototypes that are nearly the right size, weight, and configuration needed for the intended product. Such demonstrations need not require a full system prototype of a Crusader vehicle, but can be done using surrogate vehicles. Technology readiness levels (TRL) are a good way to gauge the maturity of technologies.
TRLs were pioneered by the National Aeronautics and Space Administration to determine the readiness of technologies to be incorporated into products such as weapon systems. Readiness levels are measured along a scale of one to nine, starting with paper studies of the basic concept, proceeding with laboratory demonstrations, and ending with a technology that has proven itself on the intended product. TRLs are based on actual demonstrations of how well specific technologies perform in the intended application. For example, a technology that has been demonstrated in an operational environment using subsystem prototype hardware (such as a complete cannon system) that is at or near the final system design would be rated as a TRL 7. The individual TRL descriptions can be found in appendix I. DOD has agreed that technology readiness assessments are important and necessary in assisting officials who decide when and where to insert new technologies into weapon system programs. In January 2001, DOD issued a new acquisition instruction that redefined the phases in the defense acquisition cycle and emphasized the role of technology development in the acquisition process. Under the instruction, programs use the concept and technology development phase, which precedes the system development and demonstration phase, for developing components and subsystems that must be demonstrated before integration into the system. The first portion of system development and demonstration phase is dedicated to integrating the components and subsystems into the system. The instruction states that DOD prefers that technology be demonstrated in an operational environment but must be demonstrated in a relevant environment to be considered mature enough for product development in the system development and demonstration phase. According to the TRL descriptions, technology demonstrated in an operational environment is TRL 7 and technology demonstrated in a relevant environment is TRL 6. 
Maturing technology from a TRL 6 to a TRL 7 represents a major step up in maturity. A technology at the TRL 6 maturity level needs only to be demonstrated as a subsystem prototype or model in a laboratory or simulated operational environment. A technology at the TRL 7 maturity level must be demonstrated as a subsystem prototype at or near the size of the required subsystem outside the laboratory in an actual operational environment. For example, operating a prototype engine on a laboratory test stand that simulates the effects of the vehicle’s weight on the engine would be a TRL 6 level demonstration, while operating an engine in a surrogate vehicle or actual prototype that weighed 50 tons, on roads and cross country, would be a TRL 7 demonstration. In June 2001, DOD issued a new acquisition regulation. It stated that technology maturity is a principal element of program risk and directed technology readiness assessments for critical technologies sufficiently prior to selected milestone decision points—including milestone B—to provide useful technology maturity information to the acquisition review process. Although the new regulation recognizes that TRLs enable consistent, uniform discussions of technical maturity across different types of technologies and provides the definitions of TRLs used in this report, it permits the use of TRLs or “some equivalent assessment” when performing a technology readiness assessment. In June 2001, Crusader program office engineers and we assessed the maturity of 16 critical Crusader technologies using TRLs. This joint assessment determined that 10 of the 16 critical Crusader technologies were below TRL 7. Since the Crusader program is not scheduled to commit to product development until April 2003, the Army still has time to mature the 10 critical technologies to a TRL 7 level—demonstrate them in a component or subsystem prototype in an operational environment.
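The TRL screen applied in the joint assessment can be illustrated with a short sketch. The individual technology names and ratings below are illustrative, not the actual assessment; only the totals (16 critical technologies assessed, 10 below TRL 7) come from the joint assessment described above.

```python
# Minimal sketch of a TRL-based maturity screen. TRL 7 (a component or
# subsystem prototype demonstrated in an operational environment) is the
# best-practice threshold for entering product development. Individual
# ratings below are illustrative; only the totals match the report.

TRL_THRESHOLD = 7  # demonstrated in an operational environment

def below_threshold(assessed, threshold=TRL_THRESHOLD):
    """Return the names of technologies rated below the TRL threshold."""
    return sorted(name for name, trl in assessed.items() if trl < threshold)

# Illustrative ratings consistent with the reported totals:
# 6 technologies at TRL 7, 8 at TRL 6, and 2 at TRL 5.
assessed = {f"technology_{i:02d}": trl
            for i, trl in enumerate([7] * 6 + [6] * 8 + [5] * 2)}

immature = below_threshold(assessed)
print(len(assessed), len(immature))  # 16 technologies assessed, 10 below TRL 7
```

The value of such a screen is its transparency: a single numeric threshold makes the maturity shortfall visible, rather than leaving it implicit in engineering judgment or risk plans.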
However, the Army’s Crusader plans will result in 10 of the critical Crusader technologies remaining below TRL 7 at the milestone B decision and in technology development continuing into the product development phase. As a result, the Crusader program would not reach the low levels of risk that best practices show are needed for meeting product development cost and schedule commitments. Table 1 shows the results of our joint technology readiness assessment. As shown in table 1, if technology develops as planned, eight critical technologies will be at a TRL 6 level of maturity and two will be at a TRL 5 level of maturity at milestone B. While some technologies may embody some risk in meeting requirements, for the most part, the risk in the Crusader technologies involves the amount of time and effort needed to reach maturity. The planned technology maturity levels for the Crusader program at milestone B increase the probability that technical problems, if they occur, will need to be resolved in the higher cost environment of system development and demonstration. Confining delays in maturing technology to a time prior to the start of product development—in an environment where small teams of technologists work in laboratories and are dedicated to perfecting the technology—is critical to saving time and money. Conversely, if delays occur in product development when a large engineering force is in place to design and manufacture the product, delays would be much more costly. In fact, industry experts estimate that a delay during product development costs several times more than a similar delay that occurs before product development. Under the current Crusader acquisition plans, the critical technologies would be demonstrated in two steps after milestone B. Program officials are planning to demonstrate mobility component technologies first and then the remaining critical technologies.
They recognize a risk in integrating the Crusader’s mobility components—track, suspension, engine, and transmission—and plan to produce a mobility test rig to demonstrate that integration and to start accumulating reliability data on the mobility components. The mobility test rig would have the additional advantage of demonstrating the maturity of those technologies in an operational environment. The contractor is scheduled to deliver the mobility test rig in December 2003. The test rig would later be rebuilt as a Crusader prototype. The remaining critical technologies would not be demonstrated until after the contractor delivers the Crusader prototypes. The first Crusader system prototype is scheduled for delivery in October 2004 and is to enter testing the same month. Other prototypes would enter testing as they are delivered. The Army plans to award contracts for low-rate initial production long-lead items in March 2005—less than a fourth of the way through the prototype-testing schedule. This leaves little time in the Crusader’s projected system development and demonstration schedule for solving unanticipated problems before the Army awards contracts for long-lead production items. The Army’s approach to readying the Crusader for milestone B is to demonstrate progress toward achieving five of the system’s requirements, two of which are key performance parameters—the cannon rate of fire and the ability to resupply the self-propelled howitzer. For example, a Crusader key performance parameter is that the Crusader cannon be able to fire 10 to 12 rounds per minute; however, the program only needs to demonstrate the ability to fire 6 rounds per minute before milestone B. The demonstrations, called exit criteria, were approved by both the Army and DOD.
Among the demonstrations required by the exit criteria, only the cannon system is expected to be demonstrated in an operational environment; the other critical technologies are expected to be demonstrated in a laboratory environment. Moreover, like many other DOD programs, the Crusader program is using risk management plans and engineering judgment, without the benefit of TRLs, to assess technological maturity and mitigate program risk. Risk management plans and engineering judgment are necessary to manage risk in any major development effort like the Crusader. However, we have found in our reviews that without an underpinning, such as TRLs, that allows transparency into program decisions, significant technical unknowns may be judged acceptable risks because a plan exists for resolving them. For example, we recently reported that while DOD judged the technical risks facing the Joint Strike Fighter as acceptable for starting product development, an analysis of TRLs showed that eight critical technologies were below TRL 7, with six technologies at TRL 4 or 5. When problems are encountered in resolving these unknowns, programs often fail to meet promised outcomes, as noted above with the Brilliant Anti-Armor Submunition program. The Army has made improvements to its management of the software development process. Program officials stated that they would continue to aggressively manage the software development program to achieve and sustain the software process improvements. The automated Crusader system will be a software intensive program, projected to use about 1.9 million lines of code. Unlike any previous ground vehicle, the Crusader automates all of its major functions, including aiming, loading, and firing the cannon; managing inventory (projectiles and propellant); and resupplying the howitzer with ammunition and fuel.
The crew compartment consists of a digital command center, with flat panel displays and re-configurable crew stations that give the crew real-time situation awareness, targeting information, integrated electronic technical manuals, decision aids, and diagnostic information. In 1998, the program began to experience software problems before meeting the software’s preliminary design milestone. In June 1999, the Army decided that there were incomplete areas of the preliminary design and that the software team was not resolving design issues in a timely manner. Additionally, the software engineering team lacked disciplined quality assurance and configuration management practices, which led to some of the problems. In response, the program office tasked a software action team to identify problems and recommend improvements. The team drafted a recovery plan and recommended a number of process improvements for the prime contractor to implement. Program officials used the Software Development Capability Maturity Model to define and determine the software development process maturity. The Software Engineering Institute, part of Carnegie Mellon University, developed the model to measure and rank an organization’s software development and acquisition process. The contractor agreed to mature its software engineering processes to a level where the standard processes for software development, such as project and risk management, are documented and enforced across the organization. According to the Software Engineering Institute, increasing the maturity level of an organization’s software engineering process puts the organization in better position to successfully develop software. As a result of these efforts, the Army and its prime contractor have made improvements to their management of the Crusader software engineering process.
Improved areas include requirements generation and validation, quality assurance, configuration management, risk management, schedule and cost estimation, project tracking and control, and peer reviews of software engineering products such as design documents, code, and test plans. In addition, outside experts assisted in software analysis and design. Others were brought in to independently assess the software recovery plan. The contractor implemented a number of changes in the software design process, including the establishment of a common set of software development and management tools shared by all software teams and improved software testing. The program office has also revised the Crusader contract to provide the contractor monetary incentives to produce high-quality software on schedule. Software teams are also tracking progress and reporting it to management on a weekly or biweekly basis and have greatly improved their processes for estimating the size and schedule of the software. As a result of these improvements, the contractor has made more timely deliveries of software. Army officials will need to continue their aggressive management approach because significant amounts of software remain to be developed before the Crusader is fully operational. Program officials stated that they would continue to manage the program to achieve and sustain the software process improvements. The Army has made considerable progress over the past 2 years in redesigning the Crusader to substantially reduce its size and weight. In general, a lighter system offers a number of advantages, such as lower fuel consumption and easier transportation by truck and rail. However, it is uncertain that the requirement to deploy two Crusader howitzers on a C-17 aircraft provides a significant improvement in strategic deployability. Efforts to meet the deployability requirement will be a challenge and may require costly design changes and/or performance tradeoffs. 
According to an Army official, in October 1999, the Chief of Staff of the Army directed that the Crusader system become lighter and more deployable to better fit in with the Army’s transformation to lighter forces. The Army subsequently revised the Crusader's Operational Requirements Documents to reflect new deployability requirements. Specifically, the documents state that the Crusader vehicles must not exceed 42 tons at curb weight and 50 tons at combat weight; any combination of two Crusader vehicles, at curb weight, must be air transportable on both a C-5 and a C-17 aircraft; and both the C-5 and C-17 aircraft must be able to transport a single Crusader vehicle at combat weight. The main reason for the decision in January 2000 to restructure the program and redesign the Crusader weapon system was to reduce the system’s weight and to improve its strategic deployability by air. However, the Army expects to rarely airlift the Crusader system—only during extreme emergencies—and that, in those circumstances, it would be likely that only small numbers of Crusader systems would be airlifted. Sealift would be the primary means of moving the Crusader system over long distances. In February 1999, the Army reported to Congress that the fielding of a lighter-weight Crusader would provide little in improved strategic deployability over a heavier version. In May 2000, the DOD’s Office of Program Analysis and Evaluation questioned the need to improve the Crusader’s deployability, stating that it is unclear whether airlifting a small force of the heavier Crusaders, when needed, would be a severe burden on airlift. A limited Army analysis comparing the deployability by air of small numbers of the original heavier Crusader with that of the lighter-weight Crusader showed that the lighter-weight Crusader system might not significantly improve the system’s strategic deployability. 
For example, this analysis showed that the lighter-weight Crusader system would reduce the number of sorties required to carry two Crusader systems and support equipment by 20 percent—one aircraft sortie—over the system’s original, heavier design. The study showed that it would take four C-17 sorties to airlift two of the lighter-weight Crusader systems and support equipment while it would take five sorties to airlift two of the original heavier systems and support equipment. In addition, the heavier Crusader howitzers and both resupply vehicles would arrive loaded for combat while the lighter Crusader howitzers and only one resupply vehicle would arrive loaded for combat. The other resupply vehicle would have to be manually loaded upon arrival. The recent analysis was done with inputs from various Army officials but has not been officially reviewed by the Air Force. Prior to our request, the Army had not formally analyzed the improvements in strategic deployability offered by a 40-ton Crusader over the earlier 60-ton Crusader. Meeting the requirement for carrying two Crusader howitzers on a C-17 aircraft will be challenging. According to the Air Force, the C-17 aircraft is a more versatile aircraft and smaller than the C-5 aircraft. The C-5 is normally used for strategic deployments—into and out of the combat theater—while the C-17 aircraft can be used for both strategic deployments and tactical missions within a combat theater. According to Army and Air Force officials responsible for aircraft loading plans, the only possible way to load two Crusader howitzers on a C-17 aircraft would be back to back. However, they have concerns about this loading method. First, it will be a very tight fit with one howitzer’s cannon barrel expected to be 20 inches from the forward bulkhead (on the edge of a crew safety zone) and the other howitzer’s barrel expected to be within 3 inches of the stowed aft loading ramp. 
Second, according to an Air Force official, the 59 inches separating the two howitzers may not be enough room to properly restrain the vehicles with heavy chains. In October 2001, the Army performed a preliminary computer analysis of loading two Crusader howitzers on a C-17. It indicated that, if the vehicles’ dimensions remain the same through redesign, development, testing, production, and fielding, the two howitzers may fit. This analysis also showed that the loading plan would be a very tight fit, and it did not address the issue of restraining the howitzers during flight. Air Force officials have not reviewed this analysis. Army and Air Force officials told us that it is unlikely they will know if the Crusader can actually be loaded and carried until two lighter-weight prototypes are produced and tested in a C-17 aircraft. Army officials told us that, if carrying two Crusaders on a C-17 aircraft is not feasible, they will still accept the Crusader system because it is a much more capable system than the current self-propelled howitzer system, the Paladin. Program officials also told us that reducing the system’s weight is desirable because it reduces the logistics needed to support the system and improves, among other things, ground transportability and mobility. According to DOD and the Army, achieving the Crusader’s reduced weight requirement and meeting the 42-ton limit will be a difficult challenge and will require aggressive weight management to mitigate the risks involved with system weight. As of November 2001, the Crusader howitzer is projected to weigh 41.2 tons, which is close to the upper limit of the 42-ton curb weight requirement. This projection, however, is based on computer modeling that is still evolving. The projected weight could change considerably as specific components are fabricated and tested. Program office officials told us that, at this point in time, they have an 80-percent confidence level in the model’s weight projection.
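The Army's airlift comparison described earlier (five C-17 sorties for two of the original heavier Crusader systems and support equipment, versus four sorties for the lighter design) reduces to simple arithmetic; a minimal sketch:

```python
# Check of the sortie comparison from the Army's limited analysis.
# Figures are from the report: five C-17 sorties for two heavier systems
# plus support equipment, four sorties for two lighter systems.

def sortie_savings(heavier_sorties, lighter_sorties):
    """Return sorties saved and the percentage reduction."""
    saved = heavier_sorties - lighter_sorties
    return saved, 100.0 * saved / heavier_sorties

saved, pct = sortie_savings(heavier_sorties=5, lighter_sorties=4)
print(saved, pct)  # 1 sortie saved, a 20.0 percent reduction
```

The arithmetic confirms the report's point: the redesign saves a single sortie per two-system lift, a modest gain given that sealift remains the primary means of long-distance movement.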
The Army has already made significant changes to the Crusader system design to reduce the curb weights of the system’s vehicles. The curb weight of the howitzer is expected to go from 60 tons to a projected weight of below 42 tons. To achieve this weight reduction, the program office is redesigning the Crusader system by reducing the size and payload of the Crusader vehicles, substituting lighter weight materials for some components, and developing, with the Abrams tank program, a lighter weight engine. Additionally, the team plans to remove the heavy armor for top attack and road wheel protection and make it into kits that can be applied when needed in combat situations. To help reduce the overall weight of the Crusader system, the team decided to use a Palletized Load System truck carrying a newly designed resupply module as a second type of Crusader resupply vehicle—a wheeled resupply vehicle. Although the Army has not made vehicle weight a key performance parameter for the Crusader program, it has instituted an aggressive weight management program designed to mitigate the risks associated with maintaining the 42-ton per vehicle weight limit. As part of the weight management program, the Army may have to consider the trade-offs between the system’s weight and the program’s cost, schedule, and performance requirements in order to achieve the required curb and combat weights. The program is also in the position of not being allowed any weight growth during development, production, fielding, and service. Before the Crusader redesign, the program had a 17-percent weight growth expectation for the Crusader vehicles. According to an Army official, if a new capability is added to the Crusader that increases its weight, the Army will have to find a way to reduce the weight of the Crusader by an equivalent amount. 
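The zero-growth weight rule described above (any capability added to the Crusader must be offset by an equivalent weight reduction) can be sketched as a simple budget check. The 42-ton curb-weight limit and the 41.2-ton November 2001 projection are from the report; the class itself is illustrative, not an Army tool.

```python
# Sketch of a zero-growth weight budget, assuming the report's figures:
# a 42-ton curb weight limit and a 41.2-ton projected howitzer weight.
# The offset logic mirrors the rule that added weight must be matched
# by an equivalent reduction elsewhere.

class WeightBudget:
    CURB_LIMIT_TONS = 42.0

    def __init__(self, projected_tons):
        self.projected_tons = projected_tons

    @property
    def margin_tons(self):
        """Remaining margin under the curb weight limit."""
        return self.CURB_LIMIT_TONS - self.projected_tons

    def add_capability(self, added_tons, offset_tons=0.0):
        """Apply a design change; reject it if the net weight exceeds the limit."""
        net = self.projected_tons + added_tons - offset_tons
        if net > self.CURB_LIMIT_TONS:
            raise ValueError("change would exceed the 42-ton curb weight limit")
        self.projected_tons = net

howitzer = WeightBudget(projected_tons=41.2)  # November 2001 projection
print(round(howitzer.margin_tons, 1))  # 0.8 tons of margin remaining
```

With only about 0.8 tons of margin against a projection the program itself rates at 80-percent confidence, the sketch makes the report's point concrete: almost any unoffset capability addition would breach the limit.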
The Army’s current schedule to begin fielding the Crusader system and its replacement, the Future Combat Systems, in the same fiscal year—2008—represents a potential risk of investing in duplicative systems to fulfill the same missions. However, at this time it is uncertain that the initial versions of the Future Combat Systems will have the capabilities to meet the Crusader’s missions. The Future Combat Systems are expected to be revolutionary, lightweight weapon systems—20 tons or less—that involve manned and unmanned, ground and air systems, all of which would be digitally networked together. All the vehicles in the system are being designed for transport on a C-130 or similar aircraft—which are smaller aircraft than the C-17. Future Combat Systems vehicles may include command and control systems, reconnaissance systems, direct- and indirect-fire guns, rockets, and antitank missiles. The Future Combat Systems program is in an earlier stage of development than the Crusader—it is still in its initial 2-year concept design phase. Although the Future Combat Systems is a complex system of systems and the Army is still developing system concepts and technologies, the Army expects that the Future Combat Systems can be developed and produced in much shorter time frames than other weapons programs. Under the current Army schedule, the initial versions of the Future Combat Systems might enter the system development and demonstration phase as early as fiscal year 2003, and the first combat unit is scheduled to be equipped in 2008. Once fully fielded, the Future Combat Systems are intended to replace all of the Army’s heavy weapon systems, including the Crusader. Current Army plans show the Crusader to be in the force until 2032 or later.
Because all the technologies needed for the Future Combat Systems may not be mature enough to be put into systems, the Army is planning to develop the initial version of the Future Combat Systems with less than its full capabilities and then upgrade it in a number of steps, called blocks, as the required technologies mature. The Army has not defined the capabilities that it can develop in the initial version of the Future Combat Systems, which it hopes will enter product development in 2003. As early as February 2002, the Army plans to award a contract to define these initial capabilities based on technologies that are mature enough to enter system development and demonstration in 2003. The Army expects the Future Combat Systems to meet, using advanced technologies, the same artillery missions as the Crusader and eventually to replace the Crusader system. While the final weapon technologies have not been selected for the Future Combat Systems, technologies that could provide the systems with capabilities to perform artillery missions similar to or greater than the Crusader include a multi-role armament system. This possible system could feature a 105-mm cannon that may have a non-line-of-sight capability out to a range of about 50 kilometers. Also, the Army is considering an advanced missile system that could consist of small containerized missiles, known as NetFires, which are projected to have a range of 50 to 100 kilometers. A high-level Army official told us that he believes, based on recent technical briefings, that the initial version of the Future Combat Systems will not have the capabilities to meet the same artillery missions as the Crusader. Moving into product development without demonstrating critical technologies in an operational environment increases the risk of cost overruns, schedule delays, and performance shortfalls.
As currently planned, the majority of the critical Crusader technologies will have been demonstrated in a relevant environment but not the important operational environment. If the Crusader program follows the approach of moving into product development with less mature technologies, the program will need to continue to develop and demonstrate those technologies while concentrating on integrating subsystems into the system, testing at the subsystem and system levels, and preparing for production. As a result, technical problems, if they occur, will need to be resolved in the higher cost environment of system development and demonstration. On the other hand, demonstrating the critical technologies in an operational environment before entering system development and demonstration could necessitate more time and money than currently planned before the milestone B decision, but such investments would be relatively small compared to solving technical problems after the decision. The Army restructured the Crusader program to improve the system’s strategic deployability by reducing the system’s weight. The lighter-weight system, however, may not provide a significant improvement to strategic deployability. At this time, the Army is making design trade-offs to meet its weight requirement and it is not clear whether the Army can maintain its lighter weight goals throughout the development, production, and fielding of the Crusader system. Given the uncertainty, the Army risks making unnecessary cost, schedule, and performance trade-offs to meet deployability requirements that may not be clearly justified. The Army has not ruled out the possibility that it will field the Future Combat Systems with the ability to meet the same artillery mission as the Crusader in the same year the Crusader is fielded. However, the extent of this apparent overlap will not be clear until the potential capabilities and schedule of the initial version of the Future Combat Systems are determined. 
Therefore, it is important that the Army ensure that the projected capabilities and schedule for the initial Future Combat Systems are considered in the Crusader milestone decision. To reduce the risk of schedule delays and increased costs in the product development phase of the Crusader program, we recommend that the Secretary of Defense direct the Secretary of the Army to dedicate the resources necessary to ensure that the critical Crusader technologies are demonstrated, at the component and subsystem level, in an operational environment before the program commits to product development at milestone B. To confirm the value and usefulness of the Crusader program’s deployability requirement, we recommend that the Secretary of Defense direct the Secretary of the Army to conduct an analysis, before the decision to enter product development, to determine how important it is to deploy two Crusader howitzers on a single C-17 aircraft. If it is important to the Army, we recommend that the Secretary of Defense direct the Secretary of the Army to establish, as a key performance parameter, the maximum per vehicle weight that would allow the C-17 aircraft to carry two Crusader howitzers. If the analysis determines that the redesigned Crusader does not significantly improve the system’s military utility, we recommend that the Secretary of Defense direct the Secretary of the Army to reduce the priority placed on attaining the 42-ton weight limit.
Finally, to ensure the Army does not invest in two weapon systems that will meet the same artillery missions at the same time, we recommend that the secretary of defense direct the secretary of the army to determine, based on available data, the potential capabilities and schedule of the initial version of the Future Combat Systems and the implications of those capabilities and schedule for the Crusader's utility to the Army before making the decision on beginning the Crusader's system development and demonstration, currently scheduled for April 2003.

In written comments on a draft of this report, the director of strategic and tactical systems within the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics said that DOD did not agree with our recommendation that the Crusader technologies be demonstrated in an operational environment before the program commits to product development. DOD said that the Crusader program was a simulation-based acquisition program and, as such, evaluates system, component, and subsystem performance and technology readiness using modeling and simulation validated with test stands, integration laboratories, and subsystem prototypes. DOD questioned our definition of critical Crusader technologies and said that the track, for example, was selected by us as a critical technology and assessed at TRL 5 despite the Army's many years of expertise in track development. DOD also said that the Crusader is currently demonstrating performance equal to or in excess of threshold requirements for the final system. Finally, DOD said that changing the Crusader's acquisition strategy to accommodate building the system-level prototypes required to demonstrate TRL 7 for all critical technologies would add significantly to the development time and expense without significantly reducing risk or improving performance. The full text of DOD's comments is included in appendix II.
We agree that modeling and simulation is a key and accepted practice in any modern development program. However, we have found that programs need to demonstrate a high level of technology maturity before committing to product development. As shown by our past reviews, the best practice standard is that technology must be demonstrated, at the component or subsystem level, in an operational environment to be considered mature enough for entering product development. We believe that a program should use this best practice to assure success in meeting its cost and schedule goals. The determination of the critical Crusader technologies was a joint effort between the Crusader program office and us. We defined critical Crusader technologies as those required to meet the Crusader’s key performance parameters and developed the initial critical technology list. Crusader program office engineers reviewed the initial list, suggested revisions, and agreed that the revised critical technologies list was complete and appropriate. Also, our analysts and program office engineers jointly arrived at the appropriate TRL for each critical technology. In addition, DOD’s statement that track should not be a critical Crusader technology or should have been assessed at a higher TRL because of the Army’s many years of expertise in track development underscores the value of the TRL methodology. Track was included as a critical Crusader technology because the Crusader cannot meet its mobility key performance parameter without track. The track was assessed at TRL 5 because the Crusader program was developing a new lighter-weight track. The Army plans to demonstrate it in an operational environment after milestone B. TRLs measure whether sufficient knowledge has been accumulated with respect to each application of a technology, not the development difficulty of the technology or whether the technology has been previously used in another application. 
The issue is not whether a technology like the newly developed track will ever work, but how much time and effort will be needed to demonstrate its maturity in this application. The Crusader system development and demonstration phase does not have much time between prototype testing and the procurement of long-lead items for production to adjust for any delays or problems in prototype testing caused by technology problems. Such delays or problems could either delay the long-lead item procurement or reduce the amount of information available when committing to the procurement.

DOD's assessment that the Crusader system is currently demonstrating performance equal to or in excess of threshold requirements for the final system is based mainly on modeling, simulations, and laboratory tests because the program has not produced the final system. As mentioned above, best practice calls for critical technologies to be demonstrated in an operational environment, not in models, simulations, or laboratory environments, before entering product development.

DOD stated that building the full system prototype required to demonstrate TRL 7 would add significant time and expense to the program. However, demonstrating at TRL 7 does not require a full system prototype but only a prototype of the component or subsystem that contains a new technology. The demonstration can be accomplished by putting the new component or subsystem, such as an engine, on a surrogate vehicle; that is, a vehicle that already exists. The report's point is that using full system prototypes to demonstrate the maturity of critical technologies during the product development phase, as planned in the Crusader program, is potentially more costly than using component or subsystem prototypes to do so during the technology development phase.
Problems that occur during required demonstrations may cause program delays in either phase, but as noted in the report, the delay is more expensive during the product development phase.

DOD stated that it partially agreed with our recommendation to conduct an analysis to determine the importance of the deployability requirement and said that the current requirement is not considered a key performance parameter and, as a result, the Army is allowed to make trade-offs between the requirement and system cost and performance. DOD further stated that the Army plans to review the Crusader's requirements prior to the 2003 milestone B decision, as required by regulations. We believe that an analysis to determine the importance of deploying two Crusader howitzers on a C-17 aircraft should be conducted as soon as possible to provide the Army greater flexibility and knowledge in considering the ongoing trade-off decisions needed to meet weight requirements.

DOD stated that it partially agreed with our recommendation to determine the potential capabilities and schedule of the initial version of the Future Combat Systems before making the decision to begin Crusader product development and stated that the Crusader's capabilities are intended to complement rather than compete with or be redundant to the capabilities of the Future Combat Systems. We continue to believe that DOD cannot determine whether the two systems will be complementary or redundant without knowledge of the initial Future Combat Systems capabilities and fielding schedule, and DOD does not yet have this knowledge. We continue to believe that this knowledge needs to be considered as part of the decision to allow the Crusader program to enter product development. We have rewritten the recommendation to clarify its intent.
To determine the readiness of the Crusader program to enter the system development and demonstration phase, we assessed, along with engineers from the Crusader Project Office, the current maturity of the critical Crusader technologies using the technology readiness level tool. We identified the Crusader technologies we believed were critical to meeting the Crusader system key performance parameters. Program engineers reviewed our list, suggested revisions, and agreed that the revised critical technologies list was complete and appropriate. After considering the program's plans for maturing the critical technologies before milestone B, we jointly determined the probable TRL of each critical technology at the milestone. This determination assumed that the program office would successfully execute its existing plans for demonstrating some of the technologies before the milestone.

To assess the status of the Crusader software development, we used project management criteria derived from the Software Engineering Institute's Software Development Capability Maturity Model. We visited the Crusader prime contractor, met with Army and contractor officials, observed software development and test facilities, and examined project information. We also obtained and reviewed project documentation from the prime contractor and the Army program office.

To assess the Crusader program's ability to meet the Crusader reduced weight requirements and improve the Crusader system's strategic deployability, we analyzed the Army's plans and requirements for reducing the weight of the Crusader and requested that the Army perform an analysis of the improvement in strategic deployability that the reduced-weight Crusader system would provide compared to the original-weight Crusader system.
For this analysis, at our request, the Army determined the number of Crusader systems to be deployed, the other equipment and supplies that were required to be deployed with the Crusader systems, and the range of the aircraft used for the deployment. We reviewed the results of the Army’s Crusader deployment analysis. To determine whether the Army is developing the Crusader and the Future Combat Systems to be fielded at the same time and to meet the same artillery missions, we analyzed and compared the Crusader and Future Combat Systems schedules and reviewed the Crusader system operational requirements documents. The Future Combat Systems do not have operational requirements documents at this stage of development. Also, we discussed with appropriate officials in the Army’s Objective Force Task Force, the Army’s artillery school, and the Crusader and the Future Combat Systems programs (1) the probability that the two programs would meet their individual schedules and (2) the potential technologies that might be used in the Future Combat Systems to provide it with artillery capabilities. In performing our work, we obtained documents and interviewed officials involved in the Crusader and the Future Combat Systems programs in the Office of the Deputy Chief of Staff for Operations, Washington, D.C.; U.S. Army Training and Doctrine Command, Fort Monroe, Virginia; U.S. Army Field Artillery School and Center, Fort Sill, Oklahoma; the Defense Advanced Research Projects Agency, Arlington, Virginia; the Military Traffic Management Command, Newport News, Virginia; the U.S. Air Force, Air Mobility Command, St. Louis, Missouri; the U.S. Air Force Aeronautical Systems Command, Dayton, Ohio; the Crusader Project Office, Picatinny Arsenal, New Jersey; and the prime contractor’s Minneapolis, Minnesota, facility. We conducted our review between March 2001 and October 2001 in accordance with generally accepted government auditing standards. 
We also are sending copies of this report to the appropriate congressional committees; the director, Office of Management and Budget; and the secretaries of defense and the army. We will also provide copies to others upon request. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or William R. Graveline at (256) 650-1414. Key contributors are listed in appendix III.

Technology readiness levels and their definitions:

1. Basic principles observed and reported. Lowest level of technology readiness. Scientific research begins to be translated into applied research and development. Examples might include paper studies of a technology's basic properties.

2. Technology concept and/or application formulated. Invention begins. Once basic principles are observed, practical applications can be invented. The application is speculative, and there is no proof or detailed analysis to support the assumptions. Examples are still limited to paper studies.

3. Analytical and experimental critical function and/or characteristic proof of concept. Active research and development is initiated. This includes analytical studies and laboratory studies to physically validate analytical predictions of separate elements of the technology. Examples include components that are not yet integrated or representative.

4. Component and/or breadboard validation in laboratory environment. Basic technological components are integrated to establish that the pieces will work together. This is relatively "low fidelity" compared to the eventual system. Examples include integration of "ad hoc" hardware in a laboratory.

5. Component and/or breadboard validation in relevant environment. Fidelity of breadboard technology increases significantly. The basic technological components are integrated with reasonably realistic supporting elements so that the technology can be tested in a simulated environment. Examples include "high fidelity" laboratory integration of components.

6. System/subsystem model or prototype demonstration in a relevant environment. Representative model or prototype system, which is well beyond the breadboard tested for level 5, is tested in a relevant environment. Represents a major step up in a technology's demonstrated readiness.
Examples include testing a prototype in a high fidelity laboratory environment or in a simulated operational environment.

7. System prototype demonstration in an operational environment. Prototype near or at planned operational system. Represents a major step up from level 6, requiring the demonstration of an actual system prototype in an operational environment. Examples include testing the prototype in a test bed aircraft.

8. Actual system completed and qualified through test and demonstration. Technology has been proven to work in its final form and under expected conditions. In almost all cases, this level represents the end of true system development. Examples include developmental test and evaluation of the system in its intended weapon system to determine if it meets design specifications.

9. Actual system proven through successful mission operations. Actual application of the technology in its final form and under mission conditions, such as those encountered in operational test and evaluation. Examples include using the system under operational mission conditions.

In addition to those named above, the following individuals made significant contributions to this report: Robert L. Ackley; Nabajyoti Barkakati; Paul L. Francis; Lawrence D. Gaston, Jr.; Matthew B. Lea; Gary L. Middleton; Madhav S. Panwar; Robert J. Stolba; and John P. Swain.
The Army wants an artillery system with greater firepower, range, and mobility than its current self-propelled howitzer. In 1994, the Army began to develop the Crusader, an advanced artillery system consisting of a self-propelled 155-millimeter howitzer and a resupply vehicle. The Department of Defense (DOD) will decide next year whether the Crusader program should enter its system development and demonstration stage, which will require the commitment of major resources. GAO found that the Crusader program has made considerable progress in developing key technologies and reducing its size and weight. However, more progress and knowledge are needed to minimize the risk of cost overruns, schedule delays, and performance shortfalls. The Crusader program will likely enter product development with most of its critical technologies less mature than best practices recommend. Most of the Crusader's critical technologies have been demonstrated in a relevant environment but not in the more demanding operational environment.
Although the Army is reducing the Crusader's weight so that two vehicles can be deployed on a C-17 aircraft, the deployability advantage gained does not appear significant. The reduction in the Crusader system's weight would only decrease the number of C-17 flights needed to transport two complete systems and support equipment from five to four flights. A lighter system offers several other benefits, and knowing the magnitude of the deployability advantage of reduced weight would allow the Army to make better decisions on trade-offs.

An apparent overlap exists between the Crusader's and the Future Combat Systems' capabilities and schedules. The Army expects the Future Combat Systems to meet the same artillery missions as the Crusader and eventually replace it. The current schedules for initial fielding of the Future Combat Systems and the Crusader system occur in the same year, 2008. The extent of this apparent overlap depends more on the Future Combat Systems than the Crusader because less is known about the Future Combat Systems' technologies.
The National Flood Insurance Act of 1968 established NFIP as an alternative to providing direct disaster assistance after floods. NFIP, which provides government-guaranteed flood insurance to homeowners and businesses, was intended to reduce the federal government's escalating costs for repairing flood damage after disasters. FEMA, which is within the Department of Homeland Security (DHS), is responsible for the oversight and management of NFIP.

Since the program's inception, Congress has enacted several pieces of legislation to strengthen it. The Flood Disaster Protection Act of 1973 made flood insurance mandatory for owners of properties in vulnerable areas who had mortgages from federally regulated lenders and provided additional incentives for communities to join the program. The National Flood Insurance Reform Act of 1994 strengthened the mandatory purchase requirements for owners of properties located in special flood hazard areas (SFHA) with mortgages from federally regulated lenders. Finally, the Bunning-Bereuter-Blumenauer Flood Insurance Reform Act of 2004 authorized grant programs to mitigate properties that experienced repetitive flood losses. Owners of these repetitive loss properties who do not mitigate face higher premiums.

To participate in NFIP, communities agree to enforce regulations for land use and new construction in high-risk flood zones and to adopt and enforce state and community floodplain management regulations to reduce future flood damage. Currently, more than 20,000 communities participate in NFIP. NFIP has mapped flood risks across the country, assigning flood zone designations based on risk levels, and these designations are a factor in determining premium rates. NFIP offers two types of flood insurance premiums: subsidized and full-risk. The National Flood Insurance Act of 1968 authorizes NFIP to offer subsidized premiums to owners of certain properties.
These subsidized premium rates, which represent only about 35 to 40 percent of the cost of covering the full risk of flood damage to the properties, account for about 22 percent of all NFIP policies. To help reduce or eliminate the long-term risk of flood damage to buildings and other structures insured by NFIP, FEMA has used a variety of mitigation efforts, such as elevation, relocation, and demolition. Despite these efforts, the inventories of repetitive loss properties and policies with subsidized premium rates have continued to grow. In response to the magnitude and severity of the losses from the 2005 hurricanes, Congress increased NFIP's borrowing authority from the Department of the Treasury (Treasury) to $20.775 billion. As of April 2010, FEMA owed Treasury $18.8 billion, and the program as currently designed will likely not generate sufficient revenues to repay this debt.

By design, NFIP is not an actuarially sound program, in part because it does not operate like many private insurance companies. As a government program, its primary public policy goal is to provide flood insurance in flood-prone areas to property owners who otherwise would not be able to obtain it. Yet NFIP is also expected to cover its claims losses and operating expenses with the premiums it collects, much like a private insurer. In years when flooding has not been catastrophic, NFIP has generally managed to meet these competing goals. In years of catastrophic flooding, however, and especially during the 2005 hurricane season, it has not.

NFIP's operations differ from those of most private insurers in a number of ways. First, it operates on a cash-flow basis and has the authority to borrow from Treasury. As of April 2010, NFIP owed approximately $18.8 billion to Treasury, primarily as a result of loans that the program received to pay claims from the 2005 hurricane season.
NFIP will likely not be able to meet its interest payments in most years, and the debt may continue to grow as the program may need to borrow to meet the interest payments and potential future flood losses. Also unlike private insurance companies, NFIP assumes all the risk for the policies it sells. Private insurers typically retain only part of the risk that they accept from policyholders, ceding a portion of the risk to reinsurers (insurance for insurers). This mechanism is particularly important in the case of insurance for catastrophic events, because the availability of reinsurance allows an insurer to limit the possibility that it will experience losses beyond its ability to pay. NFIP’s lack of reinsurance, combined with the lack of structure to build a capital surplus, transfers much of the financial risk of flooding to Treasury and ultimately the taxpayer. NFIP is also required to accept virtually all applications for insurance, unlike private insurers, which may reject applicants for a variety of reasons. For example, FEMA cannot deny insurance on the basis of frequent losses. As a result, NFIP is less able to offset the effects of adverse selection—that is, the phenomenon that those who are most likely to purchase insurance are also the most likely to experience losses. Adverse selection may lead to a concentration of policyholders in the riskiest areas. This problem is further compounded by the fact that those at greatest risk are required to purchase NFIP insurance if they have a mortgage from a federally regulated lender. Finally, by law, FEMA is prevented from raising rates on each flood zone by more than 10 percent each year. While most states regulate premium prices for private insurance companies on other lines of insurance, they generally do not set limits on premium rate increases, instead focusing on whether the resulting premium rates are justified by the projected losses and expenses. 
As we have seen, NFIP does not charge rates that reflect the full risk of flooding. NFIP could be placed on a sounder fiscal footing by addressing several elements of its premium structure. For example, as we have pointed out in previous reports, NFIP provides subsidized and grandfathered rates that do not reflect the full risk of potential flood losses to some property owners, operates in part with unreliable and incomplete data on flood risks that make it difficult to set accurate rates, and has not been able to overcome the challenge of repetitive loss properties.

Subsidized rates, which are required by law, are perhaps the best-known example of premium rates that do not reflect the actual risk of flooding. These rates, authorized when the program began, were intended to help property owners during the transition to full-risk rates. But today, nearly one out of four NFIP policies continues to be based on a subsidized rate. These rates allow policyholders with structures that were built before floodplain management regulations were established in their communities to pay premiums that represent about 35 to 40 percent of the actual risk premium. Moreover, FEMA estimates that properties covered by policies with subsidized rates experience as much as five times more flood damage than compliant new structures that are charged full-risk rates. As we have pointed out, the number of policies receiving subsidized rates has grown steadily in recent years and, without changes to the program, will likely continue to grow, increasing the potential for future NFIP operating deficits.

Further, potentially outdated and inaccurate data about flood probabilities and damage claims, as well as outdated flood maps, raise questions about whether full-risk premiums fully reflect the actual risk of flooding. First, some of the data used to estimate the probability of flooding have not been updated since the 1980s.
Similarly, the claims data used as inputs to the model may be inaccurate because of incomplete claims records and missing data. Further, some of the maps FEMA uses to set premium rates remain out of date despite recent modernization efforts. For instance, as FEMA continues these modernization efforts, it does not account for ongoing and planned development, making some maps outdated shortly after their completion. Moreover, FEMA does not map for long-term erosion, further increasing the likelihood that the data used to set rates are inaccurate. FEMA also sets flood insurance rates on a nationwide basis, failing to account for many topographic factors that are relevant to flood risk for individual properties. Some patterns in historical claims and premium data suggest that NFIP's rates may not accurately reflect individual differences in properties' flood risk. Not accurately reflecting the actual risk of flooding increases the risk that full-risk premiums may not be sufficient to cover future losses and adds to concerns about NFIP's financial stability.

Further contributing to NFIP's financial challenges, FEMA made a policy decision to allow certain properties remapped into riskier flood zones to keep their previous lower rates. Like subsidized rates, these "grandfathered" rates do not reflect the actual risk of flooding to the properties and do not generate sufficient premiums to cover expected losses. FEMA officials told us that the decision to grandfather rates was based on considerations of equity, ease of administration, and goals of promoting floodplain management. However, FEMA does not collect data on grandfathered properties or measure their financial impact on the program. As a result, it does not know how many such properties exist, their exact location, or the volume of losses they generate.
As FEMA continues its efforts to modernize flood maps across the country, it has continued to face resistance from communities and homeowners when remapping places properties into higher-risk flood zones with higher rates. As a result, FEMA has often grandfathered in previous premium rates that are lower than the remapped rates. However, homeowners who are remapped into high-risk areas and do not currently have flood insurance may be required to purchase it at the full-risk rate. In reauthorizing NFIP in 2004, Congress noted that repetitive loss properties—those that have had two or more flood insurance claims payments of $1,000 or more over 10 years—constituted a significant drain on NFIP resources. These properties account for about 1 percent of all policies but are estimated to account for up to 30 percent of all NFIP losses. Not all repetitive loss properties are part of the subsidized property inventory, but a high proportion receive subsidized rates, further contributing to NFIP’s financial risks. While Congress has made efforts to target these properties, the number of repetitive loss properties has continued to grow, making them an ongoing challenge to NFIP’s financial stability. According to FEMA, expanded marketing efforts through its FloodSmart campaign have contributed to an increase in NFIP policies. This program was designed to educate and inform partners, stakeholders, property owners, and renters about insuring their homes and businesses against flood damage. Since the start of the FloodSmart campaign in 2004, NFIP has seen policy growth of more than 25 percent and as of February 2010 had 5.6 million policies in force. Moreover, despite the economic downturn, both policy sales and retention grew in 2009. Correspondingly, NFIP’s collected premiums have risen 28 percent since September 2006. 
This increase, combined with a relatively low loss experience in recent years, has enabled FEMA to make nearly $600 million in interest payments to Treasury with no additional borrowing since March 2009. FEMA has also adjusted its expense reimbursement formula. While these are all encouraging developments, FEMA is still unlikely to ever pay off its current $18.8 billion debt. We have identified a number of operational issues that affect NFIP, including weaknesses in FEMA’s oversight of WYO insurers and shortcomings in its oversight of other contractors, as well as new issues from ongoing work. For example, we found that FEMA does not systematically consider actual flood insurance expense information when determining the amount it pays WYO insurers for selling and servicing flood insurance policies and adjusting claims. Instead, FEMA has used proxies, such as average industry operating expenses for property insurance, to determine the rates at which it pays these insurers, even though their actual flood insurance expense information has been available since 1997. Because FEMA does not systematically consider these data when setting its payment rates, it cannot effectively estimate how much insurers are spending to carry out their obligations to FEMA. Further, FEMA does not compare the WYO insurers’ actual expenses to the payments they receive each year and thus cannot determine whether the payments are reasonable in terms of expenses and profits. When GAO compared payments FEMA made to six WYO insurers to their actual expenses for calendar years 2005 through 2007, we found that the payments exceeded actual expenses by $327.1 million, or 16.5 percent of total payments made. By considering actual expense information, FEMA could provide greater transparency and accountability over payments to WYO insurers and potentially save taxpayer money. 
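The relationship between the two figures GAO reported for the WYO payment comparison can be checked with simple arithmetic. Only the $327.1 million excess and its 16.5 percent share of total payments are stated above; the totals computed in this sketch are derived from those two numbers, not taken from GAO's underlying data:

```python
# Back-of-the-envelope check of the WYO payment comparison for
# calendar years 2005 through 2007. The excess and its share of total
# payments come from the report; the totals below are derived from them.
excess_millions = 327.1   # payments minus actual expenses, $ millions
excess_share = 0.165      # excess as a fraction of total payments made

total_payments = excess_millions / excess_share      # implied total payments
actual_expenses = total_payments - excess_millions   # implied actual expenses

print(f"Implied total payments:  ${total_payments:,.0f} million")
print(f"Implied actual expenses: ${actual_expenses:,.0f} million")
```

Under this reading, the six insurers received roughly $1.98 billion in total payments against roughly $1.66 billion in actual expenses over the three-year period.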
FEMA also has not aligned its bonus structure for WYO insurers with NFIP goals such as increasing penetration in low-risk flood zones and among homeowners in all zones that do not have mortgages from federally regulated lenders. FEMA uses a broad-based distribution formula that primarily rewards companies that are new to NFIP and can relatively easily increase their percentage of net policies from a small base. We also found that most WYO insurers generally offered flood insurance when it was requested but did not strategically market the product as a primary insurance line. FEMA has set only one explicit marketing goal—to increase policy growth by 5 percent each year—and does not review the WYO insurers’ marketing plans. It therefore lacks the information needed to assess the effectiveness of either the WYO insurers’ efforts to increase participation or the bonus program itself. For example, FEMA does not know the extent to which sales increases may reflect external factors such as flood events or its own FloodSmart marketing campaign rather than any effort on the part of the insurers. Having intermediate targeted goals could also help expand program participation, and linking such goals directly to the bonus structure could help ensure that NFIP and WYO goals are in line with each other. Finally, FEMA has explicit financial control requirements and procedures for the WYO program but has not implemented all aspects of its Financial Control Plan. FEMA’s Financial Control Plan provides guidance for WYO insurers to help ensure compliance with the statutory requirements for NFIP. It contains several checks and balances to help ensure that taxpayers’ funds are spent appropriately. 
For an earlier report, we reviewed 10 WYO insurers and found that while FEMA performed most of the biennial audits and underwriting and claims reviews required under the plan, it rarely or never implemented most of the required audits for cause, state insurance department audits, or marketing, litigation, and customer service operational reviews. In addition, FEMA did not systematically track the outcomes of the various audits, inspections, and reviews that it performed. We also found that multiple units had responsibility for helping ensure that WYO insurers complied with each component of the Financial Control Plan; that FEMA did not maintain a single, comprehensive monitoring system that would allow it to ensure compliance with all components of the plan; and that there was no centralized access to all of the documentation produced. Because FEMA does not implement all aspects of the Financial Control Plan, it cannot ensure that WYO insurers are fully complying with program requirements. In another review, we found that weak internal controls impaired FEMA's ability to maintain effective transaction-level accountability with WYO insurers from fiscal years 2005 through 2007, a period that included the financial activity related to the 2005 Gulf Coast hurricanes. NFIP had limited assurance that its financial data for fiscal years 2005 to 2007 were accurate. This impaired data reliability resulted from weaknesses at all three levels of the NFIP transaction accountability and financial reporting process. First, at the WYO insurer level, claims loss files did not include the documents necessary to support the claims, and some companies filed reports late, undermining the reliability of the data they did report. Second, contractor-level internal control activities were ineffective in verifying the accuracy of the data that WYO insurers submitted, such as names and addresses. 
Lastly, at the agency level, financial reporting process controls were not based on transaction-level data. Instead, FEMA relied primarily on summary data compiled using error-prone manual data entry. Also in a previous report, we pointed out that FEMA lacked records of monitoring activities for other contractors, inconsistently followed its procedures for monitoring these contractors, and did not coordinate contract monitoring responsibilities for the two major contracts we reviewed. At FEMA, a Contracting Officer’s Technical Representative (COTR) and staff (referred to as “monitors”) are responsible for, respectively, ensuring compliance with contract terms and regularly monitoring and reporting on the extent to which NFIP contractors meet standards in performance areas specified in the contracts. Internal control standards for the federal government state that records should be properly managed and maintained. But FEMA lacked records for the majority of the monitoring reports we requested and did not consistently follow the monitoring procedures for preparing, reviewing, and maintaining monitoring reports. Further, FEMA offices did not coordinate information and actions relating to contractors’ deficiencies and payments, and in some cases key officials were unaware of decisions that were made about contractors’ performance. In particular, our review of monitoring reports for one contract revealed a lack of coordination between the COTR and the contracting officer. As a result, FEMA could not ensure that the contractor had adhered to the contract’s requirements and lacked information critical to effective oversight of key NFIP data collection, reporting, and insurance functions. Given NFIP’s reliance on contractors, it is important that FEMA have in place adequate controls that are consistently applied to all contracts. 
Consistent with our findings in prior work, the DHS inspector general has also identified weaknesses in FEMA’s internal controls and financial reporting related to NFIP. To manage the flood policy and claims information that it obtains from insurance companies, NFIP’s Bureau and Statistical Agent (BSA) relies on a flood insurance management system from the 1980s that is difficult and costly to sustain and that does not adequately support NFIP’s mission needs. This system consists of over 70 interfaced applications that utilize monthly tape and batch submissions of policy and claims data from insurance companies. The system also provides limited access to NFIP data. Further, identifying and correcting errors in submission requires between 30 days and 6 months, and the general claims processing cycle itself is 2 to 3 months. To address the limitations of this system, NFIP launched a program in 2002 to acquire and implement a modernization and business improvement system, known as NextGen. As envisioned, NextGen was to accelerate updates to information obtained from insurance companies, identify errors before flood insurance policies went into effect, and enable FEMA to expedite business transactions and responses to NFIP claims when policyholders required urgent support. As such, the system would support the needs of a wide range of NFIP stakeholders, including FEMA headquarters and regional staff, WYO insurers, vendors, state hazard mitigation officers, and NFIP state coordinators. As part of our ongoing review of FEMA’s management of NFIP, preliminary results reveal that despite having invested roughly $40 million over 7 years, FEMA had yet to implement NextGen. Initial versions of NextGen were first deployed for operational use in May 2008. However, shortly thereafter system users reported major problems with the system, including significant data and processing errors. 
As a result, use of NextGen was halted, and the agency returned to relying exclusively on its mainframe-based legacy system while NextGen underwent additional testing. In late 2009, after this testing showed that the system did not meet user needs and was not ready to replace the legacy system, further development and deployment of NextGen was stopped, and FEMA's Chief Information Officer began an evaluation to determine what, if anything, associated with the system could be salvaged. This evaluation is currently under way, and a date for completing it has yet to be established. Our ongoing review of FEMA's management of NFIP includes identifying lessons learned about how NextGen was defined, developed, tested, and deployed, including weaknesses in requirements development and management, test management, risk management, executive oversight, and program office staffing that have collectively contributed to the program's failure. In completing its evaluation and deciding how to proceed in meeting its policy and claims processing needs, FEMA could benefit from correcting these weaknesses. In the interim, the agency continues to rely on its outdated legacy system and thus does not have the kind of robust analytical support and information needed to help address the reasons that NFIP remains on GAO's high-risk list of federal programs. To address the challenges NFIP faces, FEMA would have to resolve its own operational and management challenges. Further, legislative reform would be needed to address structural issues. However, as you know, addressing many of these issues involves public policy trade-offs that would have to be made by Congress. Moreover, part of this process requires determining whether NFIP is or should be structured as an insurance program and how much liability the government can and is willing to accept. 
For example, if Congress wants to structure NFIP as an insurance company and limit borrowing from Treasury in future high- or catastrophic loss years, NFIP would have to build a capital surplus fund. Our prior work has shown that building such a fund would require charging premium rates that, in some cases, could be more than double or triple current rates and would take a number of years without catastrophic losses to implement. Additionally, while private insurers generally use reinsurance to hedge their risk of catastrophic losses, it is unclear whether the private reinsurance market would be willing to offer coverage to NFIP. In the absence of reinsurance and a surplus fund, Treasury will effectively continue to act as the reinsurer for NFIP and be the financial backstop for the program. Making premium rates more reflective of flood risk would require actions by FEMA and Congress. Because subsidized premium rates are required by law, addressing their associated costs would require congressional action. As previously reported, two potential options would be to eliminate or reduce the use of the subsidies over time or target them based on need. However, these options involve trade-offs. For example, eliminating or reducing the subsidies would help ensure that premium rates more accurately reflected the actual risk of loss and could encourage mitigation efforts. But the resulting higher premiums could lead some homeowners to discontinue or not purchase coverage, thus reducing participation in NFIP and potentially increasing the costs to taxpayers of providing disaster assistance in the event of a catastrophe. Targeting subsidies based on need is an approach used by other federal programs and could help ensure that those needing the subsidy would have access to it and retain their coverage. 
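To illustrate why building the capital surplus fund described above would take a number of years without catastrophic losses, here is a purely hypothetical sketch; the premium income, expected losses, and surplus target below are invented for illustration and are not GAO or FEMA figures:

```python
# Stylized sketch: years of loss-free experience needed to build a
# capital surplus able to absorb one catastrophic-loss year.
# All figures are hypothetical, in $ billions.
annual_premiums = 3.0         # assumed premium income at higher rates
annual_expected_losses = 2.0  # assumed average-year losses and expenses
catastrophe_target = 17.0     # assumed surplus target for one catastrophic year

surplus, years = 0.0, 0
while surplus < catastrophe_target:
    surplus += annual_premiums - annual_expected_losses  # annual net addition
    years += 1

print(f"Years without catastrophic losses to reach target: {years}")
# With a $1.0B annual net addition, reaching a $17B surplus takes 17 years.
```

A single bad year early in the build-up would reset much of this progress, which is why, absent such a fund or reinsurance, Treasury remains the program's effective backstop.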
Unlike other agencies that provide—and are allocated funds for—traditional subsidies, NFIP does not receive an appropriation to pay for shortfalls in collected premiums caused by its subsidized rates. However, one option to maintain the subsidies but improve NFIP's financial stability would be to rate all policies at the full-risk rate and to appropriate subsidies for qualified policyholders. In this way, the cost of such subsidies would be more transparent, and policyholders would be better informed of their flood risk. Depending on how such a program was implemented, NFIP might be able to charge more participants rates that more accurately reflected their risk of flooding. However, raising premium rates for some participants could also decrease program participation, and low-income property owners and renters could be discouraged from participating in NFIP if they were required to prove that they met the requirements for a subsidy. FEMA might also face challenges in implementing this option in the midst of other ongoing operational and management challenges. NFIP's rate-setting process for full-risk premiums may not ensure that those premium rates reflect the actual risk of flooding and therefore may increase NFIP's financial risk. Moreover, FEMA's rate-setting process for subsidized properties depends, in part, on the accuracy of the full-risk rates, raising concerns about how subsidized rates are calculated as well. To address these concerns, we have identified actions that FEMA could take. For example, we recommended that FEMA take steps to help ensure that its rate-setting methods and the data it uses to set rates result in full-risk premium rates that accurately reflect the risk of losses from flooding. In particular, we pointed out that these steps should include verifying the accuracy of flood probabilities, damage estimates, and flood maps and reevaluating the practice of aggregating risks across zones. 
While FEMA disagreed with our analysis of its rate-setting methods, this area continues to warrant attention. Similarly, because NFIP allows grandfathered rates for those remapped into high-risk flood zones, FEMA would also be in a position to address some of the challenges associated with this practice. FEMA could end grandfathered rates, but it decided to allow grandfathering after consulting with Congress, its oversight committees, and other stakeholders and considering issues of equity, fairness, and the goal of promoting floodplain management. We recommended that the agency take steps both to ensure that information was collected on the location, number, and losses associated with existing and newly created grandfathered properties in NFIP and to analyze the financial impact of these properties on the flood insurance program. With such information, FEMA and Congress will be better informed on the extent to which these rates contribute to NFIP's financial challenges. Another statutory requirement that could be revisited is the 10-percent cap on rate increases. As with all the potential reform options, determining whether such action is warranted would necessitate weighing the law's benefits—including limiting financial hardship to policyholders—against the benefits that increasing or removing such limits would provide to NFIP, Treasury, and ultimately the taxpayer. However, as long as caps on rate increases remain, FEMA will continue to face financial challenges. Solutions for addressing the impact of repetitive loss properties would also require action by both Congress and FEMA. For example, we have reported that one option for Congress would be to substantially expand mitigation efforts and target these efforts toward the highest-risk properties. 
Mitigation criteria could be made more stringent—for example, by requiring all insured properties that have filed two or more flood claims (even for small amounts) to mitigate, denying insurance to property owners who refuse or do not respond to a mitigation offer, or some combination of these approaches. While these actions would help reduce losses from flood damage and could ultimately limit costs to taxpayers by decreasing the number of subsidized properties, they would require increased funding for FEMA’s mitigation programs to elevate, relocate, or demolish the properties, would be costly to taxpayers, and could take years to complete. Congress could also consider changes to address loopholes in mitigation and repurchase requirements that allow policyholders to avoid mitigating by simply not responding to FEMA’s requests that they do so. FEMA could be required to either drop coverage for such properties or use eminent domain to seize them if owners failed to respond to FEMA’s mitigation requests. Moreover, Congress could streamline the various mitigation grant programs to make them more efficient and effective. Over the last several years, we have made many recommendations for actions that FEMA could take to improve its management of NFIP. FEMA has implemented some recommendations, including, among other things, introducing a statistically valid method for sampling flood insurance claims for review, establishing a regulatory appeals process for policyholders, and ensuring that WYO insurance agents meet minimum education and training requirements. FEMA has also taken steps to make analyzing the overall results of claims adjustments easier after future flood events. The efforts will help in determining the number and type of claims adjustment errors made and deciding whether new, cost-efficient methods for adjusting claims that were introduced after Hurricane Katrina are feasible to use after other flood events. 
However, as mentioned previously, many of our other recommendations have not yet been implemented. For example, we have recommended that FEMA:

- Address challenges to oversight of the WYO program, specifically the lack of transparency of and accountability for the payments FEMA makes to WYO insurers, by determining in advance the amounts built into the payment rates for estimated expenses and profit; annually analyzing the amounts of actual expenses and profit in relation to the estimated amounts used in setting payment rates; and immediately reassessing the practice of paying WYO insurers an additional 1 percent of written premiums for operating expenses.

- Take steps to better oversee WYO insurers and ensure that they comply with statutory requirements for NFIP and that taxpayers' funds are spent appropriately by consistently following the Financial Control Plan and ensuring that each component is implemented; ensuring that any revised Financial Control Plan covers oversight of all functions of participating WYO insurers, including customer service and litigation expenses; systematically tracking insurance companies' compliance with and performance under each component of the Financial Control Plan; and ensuring centralized access to all the audits, reviews, and data analyses performed for each participating insurance company under the Financial Control Plan.
- Improve NFIP's transaction-level accountability and assure that financial reporting is accurate and that insurance company operations conform to program requirements by augmenting NFIP policies to require contractors to develop procedures for analyzing financial reports in relation to the transaction-level information that WYO insurers submit for statistical purposes; revising required internal control activities for contractors to provide for verifying and validating the reliability of WYO-reported financial information based on a review of a sample of the underlying transactions or events; and obtaining verification that these objectives have been met through independent audits of the WYO insurers.

- Address contract and management oversight issues that GAO has identified in previous reports, including determining the feasibility of integrating and streamlining numerous existing NFIP financial reporting processes to reduce the risk of errors inherent in the manual recording of accounting transactions into multiple systems; establishing and implementing procedures that require the review of available information, such as the results of biennial audits, operational reviews, and claim reinspections, to determine whether targeted audits for cause should be used; establishing and implementing procedures to schedule and conduct all required operational reviews within the prescribed 3-year period; and establishing and implementing procedures to select statistically representative samples of all claims as a basis for conducting reinspections of claims by general adjusters.
- Address challenges to oversight of contractor activities, including implementing processes to ensure that monitoring reports are submitted on time and systematically reviewed and maintained by the COTR and the Program Management Office; that staff clearly monitor each performance standard the contractor is required to meet in the specified time frames and clearly link monitoring reports to performance areas; that written guidance is implemented for all NFIP-related contracts on how to consistently handle the failure of a contractor to meet performance standards; that written policies and procedures are established governing coordination among FEMA officials and offices when addressing contractor deficiencies; and that financial disincentives are appropriately and consistently applied.

Building on our prior work and these recommendations, we are in the process of conducting a comprehensive review of FEMA's overall management of NFIP that could help FEMA develop a roadmap for identifying and addressing many of the root causes of its operational and management challenges. This review focuses on a wide range of internal management issues, including acquisition, contractor oversight, information technology (NextGen), internal controls, human capital, budget and resources, records management, and financial management. While our work is ongoing, we have observed some positive developments in the agency's willingness to begin to acknowledge its management issues and the need to address them. FEMA has also taken steps to improve our access to key NFIP staff and information by providing us with an on-site office at one of FEMA's locations, facilitating our ability to access and review documents. As part of our past work, we have also evaluated other proposals related to NFIP. Each of those proposals has potential benefits as well as challenges. 
In a previous report, we discussed some of the challenges associated with implementing a combined federal flood and wind insurance program. While such a program could provide coverage for wind damage to those unable to obtain it in the private market and simplify the claims process for some property owners, it could also pose several challenges. For example, FEMA would need to determine wind hazard prevention standards, adapt existing programs to accommodate wind coverage, create a new rate-setting process, raise awareness of the program, enforce new building codes, and put staff and procedures in place. FEMA would also need to determine how to pay claims in years with catastrophic losses, develop a plan to respond to potential limited participation and adverse selection, and address other trade-offs, including delays in reimbursing participants, litigation, lapses in coverage, underinsured policyholders, and larger-than-expected losses. As we have previously reported, private business interruption coverage for flood damage is expensive and is generally purchased only by large companies. Adding business interruption insurance to NFIP could help small businesses obtain coverage that they could not obtain in the private market, but NFIP currently lacks resources and expertise in this area. Adding business interruption insurance could increase NFIP's existing debt and potentially amplify its ongoing management and financial challenges. Insurers told us that underwriting this type of coverage, properly pricing the risk, and adjusting claims were complex. Finally, we have reported that creating a catastrophic loss fund to pay larger-than-average annual losses would be challenging, for several reasons. For example, NFIP's debt to Treasury would likely prevent NFIP from ever being able to contribute to such a fund. Further, such a fund might not eliminate NFIP's need to borrow for larger-than-expected losses that occurred before the fund was fully financed. 
Building a fund could also require significant premium rate increases, potentially reducing participation in NFIP. FEMA faces a number of ongoing challenges in managing and administering NFIP that, if not addressed, will continue to work against improving the program’s long-term financial condition. As you well know, improving NFIP’s financial condition involves a set of highly complex, interrelated issues that are likely to involve many trade-offs and have no easy solutions, particularly when the solutions to problems involve balancing the goals of charging rates that reflect the full risk of flooding and encouraging broad participation in the program. In addition, addressing NFIP’s current challenges will require the cooperation and participation of many stakeholders. As we noted when placing NFIP on the high-risk list in 2006, comprehensive reform will likely be needed to address the financial challenges facing the program. In addressing these financial challenges, FEMA will also need to address a number of operational and management challenges before NFIP can be eligible for removal from the list. Our previous work has identified many of the necessary actions that FEMA should take, and preliminary observations from our ongoing work have revealed additional operational and management issues. By addressing both the financial challenges as well as the operational and management issues, NFIP will be in a much stronger position to achieve its goals and ultimately to reduce its burden on the taxpayer. Chairwoman Waters and Ranking Member Capito, this concludes my prepared statement. I would be pleased to respond to any of the questions you or other members of the Subcommittee may have at this time. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. For further information about this testimony, please contact Orice Williams Brown at (202) 512-8678 or williamso@gao.gov. 
This statement was prepared under the direction of Patrick Ward. Key contributors were Tania Calhoun, Emily Chalmers, Nima Patel Edwards, Elena Epps, Christopher Forys, Randy Hite, Tonia Johnson, and Shamiah Kerney.

Financial Management: Improvements Needed in National Flood Insurance Program's Financial Controls and Oversight. GAO-10-66. Washington, D.C.: December 22, 2009.
Flood Insurance: Opportunities Exist to Improve Oversight of the WYO Program. GAO-09-455. Washington, D.C.: August 21, 2009.
Results-Oriented Management: Strengthening Key Practices at FEMA and Interior Could Promote Greater Use of Performance Information. GAO-09-676. Washington, D.C.: August 17, 2009.
Information on Proposed Changes to the National Flood Insurance Program. GAO-09-420R. Washington, D.C.: February 27, 2009.
High-Risk Series: An Update. GAO-09-271. Washington, D.C.: January 2009.
Flood Insurance: Options for Addressing the Financial Impact of Subsidized Premium Rates on the National Flood Insurance Program. GAO-09-20. Washington, D.C.: November 14, 2008.
Flood Insurance: FEMA's Rate-Setting Process Warrants Attention. GAO-09-12. Washington, D.C.: October 31, 2008.
National Flood Insurance Program: Financial Challenges Underscore Need for Improved Oversight of Mitigation Programs and Key Contracts. GAO-08-437. Washington, D.C.: June 16, 2008.
Natural Catastrophe Insurance: Analysis of a Proposed Combined Federal Flood and Wind Insurance Program. GAO-08-504. Washington, D.C.: April 25, 2008.
National Flood Insurance Program: Greater Transparency and Oversight of Wind and Flood Damage Determinations Are Needed. GAO-08-28. Washington, D.C.: December 28, 2007.
Natural Disasters: Public Policy Options for Changing the Federal Role in Natural Catastrophe Insurance. GAO-08-7. Washington, D.C.: November 26, 2007.
Federal Emergency Management Agency: Ongoing Challenges Facing the National Flood Insurance Program. GAO-08-118T. Washington, D.C.: October 2, 2007.
National Flood Insurance Program: FEMA's Management and Oversight of Payments for Insurance Company Services Should Be Improved. GAO-07-1078. Washington, D.C.: September 5, 2007.
National Flood Insurance Program: Preliminary Views on FEMA's Ability to Ensure Accurate Payments on Hurricane-Damaged Properties. GAO-07-991T. Washington, D.C.: June 12, 2007.
Coastal Barrier Resources System: Status of Development That Has Occurred and Financial Assistance Provided by Federal Agencies. GAO-07-356. Washington, D.C.: March 19, 2007.
National Flood Insurance Program: New Processes Aided Hurricane Katrina Claims Handling, but FEMA's Oversight Should Be Improved. GAO-07-169. Washington, D.C.: December 15, 2006.
Federal Emergency Management Agency: Challenges for the National Flood Insurance Program. GAO-06-335T. Washington, D.C.: January 25, 2006.
Federal Emergency Management Agency: Improvements Needed to Enhance Oversight and Management of the National Flood Insurance Program. GAO-06-119. Washington, D.C.: October 18, 2005.
Determining Performance and Accountability Challenges and High Risks. GAO-01-159SP. Washington, D.C.: November 2000.

The National Flood Insurance Program (NFIP), established in 1968, provides policyholders with insurance coverage for flood damage. The Federal Emergency Management Agency (FEMA) within the Department of Homeland Security is responsible for managing NFIP. Unprecedented losses from the 2005 hurricane season and NFIP's periodic need to borrow from the U.S. 
Treasury to pay flood insurance claims have raised concerns about the program's long-term financial solvency. Because of these concerns and NFIP's operational issues, NFIP has been on GAO's high-risk list since March 2006. As of April 2010, NFIP's debt to Treasury stood at $18.8 billion. The Subcommittee asked GAO to discuss (1) NFIP's financial challenges, (2) FEMA's operational and management challenges, and (3) actions needed to address these challenges. In preparing this statement, GAO relied on its past work on NFIP and GAO's ongoing review of FEMA's management of NFIP focused on information technology and contractor oversight issues. While Congress and FEMA intended that NFIP be funded with premiums collected from policyholders rather than with tax dollars, the program is, by design, not actuarially sound. NFIP cannot do some of the things that private insurers do to manage their risks. For example, NFIP is not structured to build a capital surplus, is likely unable to purchase reinsurance to cover catastrophic losses, cannot reject high-risk applicants, and is subject to statutory limits on rate increases. In addition, its premium rates do not reflect actual flood risk. For example, nearly one in four property owners pay subsidized rates, "full-risk" rates may not reflect the full risk of flooding, and NFIP allows "grandfathered" rates, which allow some property owners to continue paying rates that do not reflect reassessments of their properties' flood risk. Further, NFIP cannot deny insurance on the basis of frequent losses and thus provides policies for repetitive loss properties, which represent only 1 percent of policies but account for 25 to 30 percent of claims. 
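The repetitive-loss shares above (1 percent of policies, 25 to 30 percent of claims) imply a large per-policy disparity; a quick sketch of that arithmetic:

```python
# Relative claim burden of repetitive loss properties, from the shares
# reported above: 1% of policies, 25-30% of claims.
policy_share = 0.01
claim_share_low, claim_share_high = 0.25, 0.30

# Claims per policy relative to the portfolio average: the share of
# claims divided by the share of policies.
relative_low = claim_share_low / policy_share
relative_high = claim_share_high / policy_share
print(f"Repetitive-loss properties generate {relative_low:.0f}x to "
      f"{relative_high:.0f}x the average claims per policy.")
```

In other words, each repetitive-loss property accounts for roughly 25 to 30 times the claims of an average policy, which is why mitigation proposals target these properties first.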
NFIP's financial condition has improved slightly due to an increase in the number of policyholders and moderate flood losses, and since March 2009, FEMA has taken some encouraging steps toward improving its financial position, including making $600 million in interest payments to Treasury without increasing its borrowings. However, it is unlikely to pay off its full $18.8 billion debt, especially if it faces catastrophic loss years. Operational and management issues may also limit efforts to address NFIP's financial challenges and meet program goals. Payments to write-your-own (WYO) insurers, which are key to NFIP operations, represent one-third to two-thirds of the premiums collected. But FEMA does not systematically consider actual flood insurance expense information when calculating these payments and has not aligned its WYO bonus structure with NFIP goals or implemented all of its financial controls for the WYO program. GAO also found that FEMA did not consistently follow its procedures for monitoring non-WYO contractors or coordinate contract monitoring responsibilities among departments on some contracts. Some contract monitoring records were missing, and no system was in place that would allow departments to share information on contractor deficiencies. In ongoing GAO work examining FEMA's management of NFIP, some similar issues are emerging. For example, FEMA still lacks an effective system to manage flood insurance policy and claims data, despite investing roughly 7 years and $40 million on a new system whose development has been halted. However, FEMA has begun to acknowledge its management challenges and develop a plan of action. Addressing the financial challenges facing NFIP would likely require actions by both FEMA and Congress that involve trade-offs, and the challenges could be difficult to remedy. For example, reducing subsidies could increase collected premiums but reduce program participation. 
At the same time, FEMA must address its operational and management issues. GAO has recommended a number of actions that FEMA could take to improve NFIP operations, and ongoing work will likely identify additional issues.
Social Security forms the foundation for our retirement income system. In 1998, it provided approximately $264 billion in annual benefits to 31 million workers and their dependents. However, the Social Security program is facing significant future financial challenges as a result of profound demographic changes, including the aging of the baby boom generation and increased life expectancy. In response, different groups and individuals have advanced numerous proposals that have called for the creation of some sort of mandatory or voluntary individual accounts. To better understand the potential implications of individual accounts, the Chairman of the House Committee on Ways and Means asked GAO to determine how individual accounts could affect private capital and annuities markets as well as national savings, the potential risks and returns to individuals, and the disclosure and educational information needed for public understanding and use of an individual account investment program. The Social Security program is not in long-term actuarial balance. That is, Social Security revenues are not expected to be sufficient to pay all benefit obligations from 1999 to 2073. Without a change in the current program, excess cash revenues from payroll and income taxes are expected to begin to decline substantially around 2008. Based on the Social Security Trustees' latest "best estimate" projections, in 2014 the combined Old-Age and Survivors Insurance and Disability Insurance (OASDI) program will experience a negative cash flow that will accelerate in subsequent years. In addition, the combined OASDI trust funds are expected to be exhausted in 2034, after which estimated annual tax income will be enough to pay only approximately 70 percent of benefits. Every year, Social Security's Board of Trustees estimates the financial status of the program for the next 75 years using three sets of economic and demographic assumptions about the future. 
According to the Trustees’ intermediate set of these assumptions (or best estimate), the nation’s Social Security program will face both solvency and sustainability problems in the years ahead unless corrective actions are taken. Over the next 75 years, Social Security’s total shortfall is projected to be about $3 trillion in 1998 dollars. Social Security’s long-term financing problem is primarily caused by the aging of the U.S. population. As the baby boom generation retires, labor force growth is expected to slow dramatically. Beyond 2030, the overall population is expected to continue aging due to relatively low birth rates and increasing longevity. These demographic trends will require substantial changes in the Social Security benefits structure and/or revenues (i.e., taxes and/or investment returns). Without such changes, current Social Security tax revenues are expected to be insufficient to cover benefit payments in about 2014, less than 15 years from now. These trends in Social Security’s finances will place a significant burden on future workers and the economy. Without major policy changes, the relatively smaller workforce of tomorrow will bear the brunt of financing Social Security’s cash deficit. In addition, the future workforce also would likely be affected by any reduction in Social Security benefits or increased payroll taxes needed to resolve the program’s long-term financing shortfall. As a result, without timely actions, certain generations could face the twin blows of higher burdens and reduced benefits. Proposals have been advanced by different groups to reform Social Security through individual accounts. Such proposals generally also seek to restore the Social Security program’s solvency and ensure its sustainability. In its report to the Social Security Commissioner, the 1994-1996 Advisory Council on Social Security offered three alternative reform proposals, two of which would create individual accounts. 
The remaining proposal called for having the government invest the trust fund in financial assets, such as corporate equities. Numerous other proposals, also calling for individual accounts, have since been put forth by various organizations. A wide array of proposals, therefore, now relies on some form of individual accounts. These proposals have in common the idea that, to varying extents, individuals would manage their own individual accounts. The returns from these accounts would provide some or much of an individual’s future retirement income. Social Security is currently structured as a defined benefit program. The current Social Security program’s benefit structure is designed to address the twin goals of individual equity and income security—including retirement income adequacy. These twin goals, and the range of benefits Social Security provides, are currently combined within a single defined benefit formula. Under this defined benefit program, the worker’s retirement benefits are based on the lifetime record of earnings, not directly on the payroll taxes he or she contributed. Alternatively, a number of individual account proposals introduce a defined contribution structure as an element of the Social Security program. A defined contribution approach to Social Security focuses on more directly linking a portion of the worker’s contributions to the retirement benefits that will be received. The worker’s contributions are invested in financial assets and earn market returns, and the accumulations in these accounts can then be used to provide income in retirement and an additional pre-retirement death benefit. One advantage of this approach is that the individual worker has more control over the account and more choice in how the account is invested. In essence, the defined contribution structure is similar to the current 401(k) or IRA systems. 
Some proposals combine defined contribution and defined benefit approaches into a two-tiered structure for Social Security. The aim is to maintain the existing system in some form as a base tier and add an individual account component as a supplemental tier. Some proposals modify the existing benefit structure, and others propose features that provide guarantees of current law benefits or some other level, such as the poverty line. Other proposals have a more complicated formula including forms of matching; thus, the relationship between contributions and benefits may be less direct. Under most of these proposals, individuals would receive part of their future benefits from a modified Social Security program and part from the accumulations in their individual account. Most of the individual account proposals seek to create investment accounts that, to varying extents, are managed by the participants themselves. However, the actual details of how to structure individual accounts vary by proposal. Individual account proposals are usually framed by four characteristics: (1) carve-out versus add-on; (2) mandatory versus voluntary participation; (3) range of investment options offered; and (4) distribution options (e.g., required annuitization or lump-sum pay-out). The first characteristic pertains to whether to carve out a portion of Social Security’s tax to be invested in financial assets or to add on a percentage to the current tax to be invested in financial assets. OASDI has a payroll tax of 12.4 percent. A carve-out involves creating and funding individual accounts with a portion of the existing payroll tax. Thus, some portion of the 12.4 percent payroll tax, such as 2 percent, would be carved out of the existing Social Security cash flow and allocated to individual account investments. The resulting impact would be that revenues are taken out of Social Security and less is left to finance current benefits. 
Other proposals take a different approach and add on individual accounts as a type of supplementary defined contribution tier. For instance, 2 percent would be added on to the current tax of 12.4 percent. An add-on leaves the entire 12.4 percent payroll tax contribution available to finance the program while dedicating additional revenues for program financing, either from higher payroll taxes and/or from general revenue. The second characteristic of individual account proposals concerns whether to make investments in individual accounts mandatory or voluntary. Mandatory participation in individual accounts would require that each individual invest some percentage of his or her payroll tax contribution in financial assets such as equities. Voluntary participation in individual accounts could allow individuals to opt in or opt out of investing any portion of their payroll tax contributions in financial assets. Individuals would rely on the existing Social Security program if they chose to opt out of participating in individual accounts. Other voluntary approaches allow individuals to contribute, with or without matching, to a retirement account. Additionally, mandatory or voluntary can also refer to the pay-out an individual receives upon retirement, such as a pay-out in the form of a lump sum. The third characteristic has to do with the degree of choice and flexibility that individuals would have over investment options. Some proposals would allow unlimited investment choices, such as investments in corporate equities, bonds, or real estate. Other proposals would offer a more limited range of choices, such as equity or bond indexed funds. Thus, individual account investments offer individuals some range of choice over how to accumulate balances for their retirement. The final characteristic centers on how the accumulated earnings in individual accounts will be paid out. 
Most proposals also seek to preserve individuals’ retirement income prior to pay-out by prohibiting pre-retirement distributions or loans. Upon pay-out, however, some proposals would require annuities—contracts that convert savings into income and provide periodic pay-outs for an agreed-upon span of time in return for a premium. Other proposals suggest allowing the individual to withdraw the account balance in a lump sum or through gradual pay-outs. Among the changes implementing individual accounts would make to the current Social Security program is a move away from a pay-as-you-go system in the direction of an advanced funded system. Social Security is currently financed largely on a pay-as-you-go basis. Under this type of financing structure, the payroll tax revenues collected from today’s workers are used to pay the benefits of today’s beneficiaries. Under a strict pay-as-you-go financing system, any excess of revenues over expenditures is credited to the program’s trust funds, which function as a contingency reserve. Advanced funding refers to building and maintaining total balances for Social Security, whether that is done through individual accounts or some other mechanism. Thus, although individual accounts are a form of advanced funding, the two terms are distinct. For instance, building up the balance in the Trust Funds is a form of advanced funding. The creation of individual accounts refers to a defined contribution system of accounts connected to Social Security and held in individuals’ names. Essentially, individual accounts would be advanced funded income arrangements similar to defined contribution plans or 401(k) plans. Although privately held individual accounts are a widely discussed means to achieve advanced funding, there are other ways to achieve it. Another approach to advanced funding using private markets would have the government invest directly in private capital markets. 
Building up the Trust Fund using Treasury securities (marketable or nonmarketable) is another form of advanced funding, although it does not involve diversification gains. Proponents of individual accounts often state that advanced funding and asset diversification are benefits of their proposals. Yet, although advanced funding, individual accounts, and asset diversification are often linked, they are conceptually different. Diversification refers to investing in more than one asset and can be achieved by individuals investing through individual accounts or by the government investing the trust fund in corporate equities as well as corporate bonds. Any one of the three categories could change without changing the others. For instance, Social Security’s Trust Funds are currently invested in nonmarketable Treasuries. Allowing the Trust Funds to invest in assets other than Treasuries would be diversifying without introducing individual accounts. Alternatively, individual accounts could be introduced whereby individuals are allowed to invest in only one asset—thereby introducing individual accounts without diversifying. Whether advanced funding through individual accounts increases national saving is uncertain. The nation’s saving is composed of the private saving of individuals and businesses and the saving or dissaving of all levels of government. Supporters of advanced funding point out that individual accounts offer a way to increase national savings as well as investment and economic growth. Others suggest that the national saving claims of those favoring advanced funding through individual accounts may not be realized. 
Whether advanced funding through individual accounts increases national saving depends on a number of factors, including how individual accounts are financed (existing payroll tax, general revenues); how private saving responds to an individual account system; the structure of the individual account system (mandatory or voluntary); and the limitation or prohibition of pre-retirement distributions and loans to make sure retirement income is preserved. Furthermore, even if national saving increases as a result of individual accounts, individuals may or may not be better off. Saving involves giving up consumption today in exchange for increased consumption in the future. Some economists have stated that it is not necessarily the case that all increases in saving are worth the cost of foregone consumption. The Chairman of the House Committee on Ways and Means asked us to determine how individual accounts could affect (1) private capital and annuities markets as well as national savings, (2) potential returns and risks to individuals, and (3) the disclosure and educational information needed for public understanding and use of an individual account investment program. To determine the effect of individual accounts on the private capital and annuities markets, as well as risk and return issues, we interviewed economists and other officials, including both proponents and opponents of individual accounts. These included officials from think tanks as well as academicians who have studied Social Security reform. We also reviewed and analyzed several studies relating to the impact of individual accounts on the market as well as studies that had tried to assess the risk and return issues that would arise because of individual accounts. We also analyzed data from the Federal Reserve Flow of Funds as well as data provided by the insurance industry. 
Additionally, we talked to industry officials from both the insurance and securities industries to obtain their views, and we interviewed government agency officials as well. To determine the disclosure and educational requirements needed, we spoke to officials from the Securities and Exchange Commission (SEC), the Department of Labor’s (DOL) Pension and Welfare Benefits Administration (PWBA), the Pension Benefit Guaranty Corporation, and the Social Security Administration (SSA). We also spoke to private sector officials about the educational requirements that would be needed for an individual account program. Additionally, we reviewed various studies that have looked at the best ways to provide investment and retirement education. Because of the wide-ranging nature of the numerous proposals being advanced, our report focuses on the common, or generic, elements that underlie various proposals to reform Social Security financing rather than on a complete evaluation of specific proposals. We did our work in accordance with generally accepted government auditing standards between October 1998 and June 1999 in Washington, D.C., and New York, NY. We requested comments on a draft of this report from SSA, SEC, DOL, the Department of the Treasury, and the Federal Reserve Board. SSA provided written comments that are included in appendix I. A discussion of these comments appears at the end of chapters 2 and 3. SSA and the other agencies also provided technical and clarifying comments, which we incorporated in this report where appropriate. Individual accounts can affect the capital markets in several ways depending on how the accounts are funded, how the funds are invested, how people adjust their own savings behavior in response to having individual accounts, and the restrictions placed on using funds in individual accounts for anything other than retirement income. 
Most of the proposals use either the Social Security cash flow or federal general revenues as a source of funds. As a result, the primary capital market effect is a purely financial one: borrowing in the Treasury debt market (or retiring less debt) to provide funding for investment in private debt and equity markets. Although the amounts involved are likely to be sizeable, the effect would primarily be one of redirecting funds and readjusting the composition of financial portfolios. There may also be some effect on the difference between the return on Treasury debt and that paid on riskier assets, although the effect is not likely to be large. Although substantial inflows into the private debt market could, in certain circumstances, result in some increased volatility, both the private equity and debt markets should be able to absorb the inflows without significant long-term disruption. There could eventually be a significant increase in the amount of new funds flowing into the annuities market. However, the magnitude of annuity purchases is likely to build gradually over time as more retirees build larger balances, allowing the market sufficient time to adjust. Another potential effect of individual accounts would be an increase or decrease in national savings—the overall level of domestic financial resources available in the economy for the purpose of investing in plant and equipment. Whether individual accounts would increase or decrease national savings depends on how they are financed, how private savings changes as a result of individual accounts, and whether there are restrictions on households’ ability to borrow. Most proposals use either the Social Security cash flow or federal general revenues as a source of funds for individual accounts. The funds raised are then to be invested in private equity or debt markets. 
As a result, there would be an increase in the relative supply of Treasury debt available to the public and an increase in the relative demand for private debt and equities to be held in individual accounts. This redirection of funds— selling Treasury debt for the cash to invest in private debt and equity—is a purely financial effect. It is likely to result in a change in the composition of private sector holdings as businesses and households absorb the extra government debt and provide new or existing private debt and equity, thereby adjusting their portfolios. Whether the resources for individual accounts come from Social Security contributions or general revenues, the level of government debt held by the public would increase, or not fall as much as it otherwise would. The only cases in which an increase in debt held by the public would not occur would be those in which the resources come from an additional source of funding—either a tax increase, an expenditure reduction, or the result of some voluntary private saving—that would not otherwise have occurred. Increased government borrowing from the public could put some upward pressure on the interest rate at which the government borrows, if private sector borrowers are to be persuaded to hold the increased supply of government debt. Funds diverted to private equity and debt markets could have the effect of raising the prices and therefore lowering the yields (rates of return) on these higher risk assets. The combined effect could narrow somewhat the difference between the more risky and least risky assets. Whether resources used to finance individual accounts come from new revenues, additional borrowing, or surpluses, the amounts flowing into private capital markets are likely to be substantial. Funding of individual accounts will come directly or indirectly from increased government borrowing from private markets, unless funded by a tax increase or spending reduction. 
To fund most individual account proposals, the government would need to raise resources either by borrowing in the market or—under a surplus scenario—by not retiring as much maturing debt as it otherwise would. For certain proposals, changes in borrowing may not arise because these proposals rely on a tax increase or benefit reduction so that current cash flow is not affected. If the source of funding for individual accounts is a carve-out from the current Social Security cash flow, this loss in cash flow would have to be made up from increased borrowing, a reduction in benefits, or some other program change. Alternatively, if the source of funding is general revenues, either additional borrowing from the public or less debt retired will be necessary depending on whether the overall budget is in deficit or surplus. Only if the government raises taxes or reduces spending, and uses those revenues to finance individual accounts, is there not likely to be any effect on borrowing because the remaining cash flow would not be affected. The uses of the funding for individual accounts will depend on the options available to investors and the choices they make within those options. To the extent that investors choose to invest in Treasury debt, there is that much less flowing into private capital markets, and any effects on those markets would be reduced. However, investors or their agents are likely to put at least some, if not most, of the funds into the private equity or debt market, and some proposals call for all of the funds to be invested in private markets. The size of this potential flow of funds into the private sector depends on whether individual account investments are mandatory or voluntary as well as the percentage of payroll that forms the basis for the program. The actual amounts allocated to private equity and debt will depend upon individual choice to the extent such choice is allowed, or on selected percentages if those are set by law. 
The initial annual dollar amount flowing into the capital markets as a result of individual account investments could be about $70 billion (2 percent of payroll) in 1998 dollars. According to our analysis of Social Security Administration (SSA) data, the effective taxable payroll for all working individuals will steadily increase well into the future. As a result, the annual dollar amount from individual account investments is likely to increase. For instance, our analysis of SSA data indicates that in the year 2020, the effective taxable payroll will be almost $11 trillion. On the basis of that dollar amount, if 2 percent is the designated percentage, the amount flowing into the private equity and debt markets from individual accounts would be about $220 billion in the year 2020. U.S. capital markets are the largest and most liquid in the world. The total market value of U.S. equities outstanding at the end of 1998 was about $15 trillion. The total value of corporate bonds outstanding in the United States was about $4 trillion at the end of 1998. The amount of Treasury debt outstanding was also about $4 trillion. As shown in table 2.1, the amounts outstanding for corporate equities and corporate bonds have been increasing. For instance, in 1997 there was about $13 trillion in equities outstanding, up from $10 trillion in 1996. The amount outstanding for corporate bonds increased from about $3 trillion in 1996 to about $4 trillion in 1998. On the basis of the current size of the corporate equity and bond markets, the amount representing individual accounts is likely to be a small percentage of private capital markets, at least for a number of years. For instance, using a payroll percentage of 2 percent, if $70 billion were to come from individual accounts, it would represent less than 0.5 percent of the $15 trillion in equity outstanding in 1998 and less than 2 percent of the $4 trillion in corporate bonds outstanding in 1998. 
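The percentage arithmetic behind these flow estimates can be reproduced directly. The following is a minimal sketch using the approximate figures cited above; the 1998 payroll level is inferred from the $70 billion/2 percent pairing rather than stated in the text:

```python
# Rough arithmetic behind the individual-account flow estimates.
# All dollar figures are the report's approximations, in trillions of dollars.

payroll_1998 = 3.5        # inferred effective taxable payroll, 1998 ($70B / 2%)
payroll_2020 = 11.0       # SSA-projected effective taxable payroll, 2020
carve_out_rate = 0.02     # 2 percent of payroll placed in individual accounts

flow_1998 = payroll_1998 * carve_out_rate   # annual flow, 1998
flow_2020 = payroll_2020 * carve_out_rate   # annual flow, 2020

equities_outstanding = 15.0    # U.S. equities, end of 1998
corp_bonds_outstanding = 4.0   # U.S. corporate bonds, end of 1998

share_of_equities = flow_1998 / equities_outstanding   # should be < 0.5 percent
share_of_bonds = flow_1998 / corp_bonds_outstanding    # should be < 2 percent

print(f"Annual flow, 1998: ${flow_1998 * 1000:.0f} billion")
print(f"Annual flow, 2020: ${flow_2020 * 1000:.0f} billion")
print(f"Share of equities outstanding: {share_of_equities:.2%}")
print(f"Share of corporate bonds outstanding: {share_of_bonds:.2%}")
```

Running this confirms the report's figures: roughly $70 billion in 1998 and $220 billion in 2020, amounting to well under 0.5 percent of equities and under 2 percent of corporate bonds outstanding.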
Various officials have expressed concern that over time, individual account investments would represent significant portions of the corporate equities and bond markets. It is likely that investments from individual accounts could eventually rival the current holdings of other major sectors of the market and represent a sizeable portion of equity and corporate bond holdings. For instance, if 2 percent of payroll is placed in individual accounts annually, SSA estimates that stock holdings in individual accounts could grow to between $1 trillion and $2 trillion in 1996 dollars over the next 15 years. The overall market will grow at about the market rate of return, although individual components may grow faster or slower depending on strategies and relative demands by mutual funds, pension plans, and other investors. For instance, as shown in table 2.2, the total value of equity holdings of mutual funds was $2.5 trillion in 1998, and the total value of their corporate and foreign bond holdings was about $339 billion. The holdings of other sectors, such as private pension plans, were about $2.2 trillion of equities and about $301 billion of corporate bonds in 1998. Thus, although individual account holdings are likely to increase over time, the holdings of many other sectors of the economy are also likely to rise, although certain individual sectors may not. In general, it is difficult to predict how rapidly the sum of these sectors’ holdings will grow, especially in the presence of individual accounts. Even if the annual flows from individual accounts into private capital markets were a small percentage of the total market value of outstanding debt and equities, these amounts could still represent a substantial increase in the annual flows into those markets. The actual amounts will depend on the options available to individuals as well as the choices they make. 
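The $1 trillion to $2 trillion SSA estimate is consistent with simple compound-accumulation arithmetic. The sketch below is a rough plausibility check, not SSA's actual projection model: the constant $70 billion annual contribution and the 3 and 6 percent real returns are illustrative assumptions only.

```python
def accumulated_balance(annual_contribution, real_return, years):
    """Future value of a level annual contribution stream (ordinary annuity):
    each year the prior balance earns the return and a new contribution is added."""
    balance = 0.0
    for _ in range(years):
        balance = balance * (1 + real_return) + annual_contribution
    return balance

# Illustrative inputs (assumptions for this sketch, not SSA's model):
contribution = 70e9   # ~2 percent of taxable payroll, held constant in real terms
for r in (0.03, 0.06):
    total = accumulated_balance(contribution, r, 15)
    print(f"real return {r:.0%}: ~${total / 1e12:.1f} trillion after 15 years")
```

At 3 percent the accumulation is about $1.3 trillion and at 6 percent about $1.6 trillion, which falls squarely within the range SSA projects.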
If a large percentage of funds from individual accounts flowed into the equity markets, it could represent an increase of approximately 15 to 20 percent in the flow of funds into and out of the equity market, according to data from the Federal Reserve Flow of Funds. It is not clear that such an increase would have much effect on the pricing, or volatility, of the equity markets. However, the corporate bond market, which is smaller, could be affected, at least in the short term, depending on how much of the funds flow into the market and, to some extent, on the timing of those flows. Most U.S. equities markets are very liquid—it is easy for investors to buy and sell equities without moving the price. Various sectors of the economy, such as the household sector, mutual funds, private pension plans, and life insurance companies, purchase and sell equities every day. The equities market is a secondary market in which much of the transaction volume and value reflects movement of equities between purchasers and sellers. The annual net purchases can be positive or negative, reflecting the difference between the value of new equities issued and the value of equities repurchased; however, the amounts purchased and sold by specific sectors can be quite large. For instance, the annual net purchases of equities were minus $3 billion in 1996, minus $79 billion in 1997, and minus $178 billion in 1998. As can be seen in table 2.3, the three largest purchasers bought in the range of $300 billion in securities each year from 1996 to 1998. In terms of sellers, the household sector sold almost $300 billion in 1996 and about half a trillion dollars in both 1997 and 1998. Annual flows within the equities market were in the hundreds of billions of dollars between 1996 and 1998. 
Over that period, mutual funds, life insurance companies, and state and local government retirement plans were the primary purchasers, and private pension plans and households were the major sellers of equities. Compared to these annual amounts, an additional tens of billions of dollars generated by individual accounts is not likely to cause major disruptions and could potentially be absorbed without significant price or volatility effects. There is a greater chance of some disruption, however, if all of the individual account funds were to flow in at once rather than regularly, but not too predictably, over the course of the year. For instance, $70 billion distributed evenly over the year would be unlikely to cause much disruption. However, concentrating that same flow into one quarter of the year could have some short-term effect on the market because it would represent a substantial increase in quarterly flows. As a result, to minimize the likelihood of disruption, it would make sense, to the extent practicable, to smooth out the inflows so that they do not all come into the market within a short time period. If the inflows are lumpy and predictable, the market may be able to anticipate them and adjust prices somewhat, which could mean that individual account purchasers would pay slightly higher prices than they otherwise would. The corporate debt markets are not as transparent as the corporate equities markets; for example, there are no central listings for the prices of the bonds or the volume of corporate bonds sold. They also do not have as much depth as the equities markets—there are fewer buyers and sellers in the corporate bond markets. Many corporate bond transactions are done through private placements; i.e., they are not offered to the corporate debt market as a whole. 
The result is a market with less liquidity, reflected in a greater spread between the bid price (the price at which you could sell the bond) and the ask price (the price you would pay to buy it). As stated previously, the value of outstanding corporate debt is substantially less than the market value of corporate equities. On an annual flow basis, corporate debt issues have been running in the hundreds of billions of dollars over the last decade. However, some proportion of that is short term (less than 1 year in maturity), so the total is not easily comparable to the annual amounts of equities purchased and sold. As shown in table 2.4, the annual net purchases of corporate bonds by various sectors ranged from as low as $17 billion for state and local government retirement plans in 1996 to as high as $79 billion for life insurance companies in 1996. On the basis of annual flows, it is difficult to say what the effect on the bond market is likely to be. However, if we compare the corporate bond and equity markets, we can draw some tentative conclusions about the likelihood of individual accounts having a disruptive effect on either market. The corporate bond market is relatively smaller and less liquid than the equity market. As a result, an inflow into the bond market is more likely to affect the market price and the volatility of the market, compared to an equivalent inflow into the equity market, especially if it is concentrated in a short period of time. Any disruption is still likely to be short term in nature and can be mitigated if the inflow is spread over time, so that other market participants are less able to predict the inflows and raise prices in anticipation of them. Although there are various types of Treasury debt, the overall market for U.S. Treasuries is far more liquid and transparent than the corporate bond market. 
A large secondary market—in which Treasury securities are bought and sold subsequent to original issuance—exists for Treasuries and helps to make it one of the most liquid markets in the world. Annual net purchases of Treasuries were $23 billion in 1997 and minus $55 billion in 1998. The effect on the Treasury debt market from a movement to individual accounts will depend not only on the choices available to individuals but also on the extent to which the government borrows from the private capital markets to fund individual accounts. As stated previously, to fund any individual account proposal that does not increase Social Security contributions, the government would need to raise resources either by borrowing in the market or by not retiring as much maturing debt as it otherwise would. The Treasuries market, therefore, could be affected in two ways: (1) by how much the government borrows to fund individual accounts, and (2) by how much individuals choose to invest in Treasuries. However, the depth and liquidity of the Treasury debt market is such that the market is unlikely to be significantly disrupted even by a large flow of funds resulting from individual accounts. Annuities protect against the possibility of outliving one’s financial resources by guaranteeing a stream of income for the remainder of one’s life, regardless of how long that may be. Annuities basically convert savings into income and may be sold individually or as a group product. In a group annuity a pension plan provides annuities at retirement to a group of people under a master contract. It usually is issued by an insurance company to an employer plan for the benefit of employees. The individual members of the group hold certificates as evidence of their annuities. Depending on the structure of individual accounts, individuals may be required to purchase individual annuities or, similar to pension and other retirement plans, fall under a group annuity. 
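The basic conversion an annuity performs, turning a single premium into a level income stream, can be illustrated with the standard annuity-payment formula. The sketch below is a fixed-term simplification: the 5 percent interest rate and 20-year horizon are illustrative assumptions, and actual annuity pricing also reflects mortality tables and expense loads. The $100,000 and $2,000 premiums echo amounts discussed in this chapter.

```python
def monthly_annuity_payment(premium, annual_rate, years):
    """Level monthly payment that exhausts `premium` over `years` at the given
    annual interest rate (fixed-term simplification; a life annuity would
    instead weight each payment by the annuitant's survival probability)."""
    n = years * 12            # number of monthly payments
    i = annual_rate / 12      # monthly interest rate
    if i == 0:
        return premium / n
    return premium * i / (1 - (1 + i) ** -n)

# A $100,000 premium versus a small $2,000 account balance:
for premium in (100_000, 2_000):
    pay = monthly_annuity_payment(premium, 0.05, 20)
    print(f"${premium:,} premium -> ~${pay:,.0f}/month for 20 years")
```

Under these assumptions a $100,000 premium yields a payment in the hundreds of dollars per month, while a $2,000 balance yields only a few dollars per month, which helps explain why issuing very small annuities can be uneconomical once the cost of cutting a monthly check is considered.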
One measure of the size of the annuities market is the level of the insurance industry’s policy reserves—the sum of all insurers’ obligations to their customers arising from annuity contracts outstanding. Each company is required by state insurance regulators to maintain its policy reserves at a level that will ensure payment of all policy obligations as they fall due. As shown in table 2.5, policy reserves for individual annuities were about $693 billion and for group annuities about $762 billion. Insurance industry officials told us that the annuities industry is likely to be able to absorb the flows from either mandatory or voluntary annuitization. Once again, we are talking about a movement of financial resources from one form to another rather than a new source of funds. The funds will be moved out of whatever investment instruments (assets) workers were using for accumulation purposes into a potentially different combination of assets held by companies supplying annuities. Insurance industry officials believe that, generally, annuities resulting from the liquidation of the individual accounts would be phased in gradually and over a number of decades. In the early years, few if any retirees would have built up substantial individual account balances. As time passes, both the number of retirees with individual account balances and the average size of those balances would gradually increase, allowing the industry and the market time to adjust without difficulty. One issue raised by insurance industry officials was that an individual account proposal that made annuity purchases mandatory at retirement could result in the demand for a significant number of very small annuities. For instance, at least initially, there would be many small accounts below $2,000. Currently, annuity purchases average about $100,000. 
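Why very small accounts are costly to annuitize can be seen from the level-payment formula; the balances and 5 percent rate below are illustrative, and the calculation is a simplified fixed-term payout rather than a priced life annuity:

```python
def monthly_annuity_payment(balance, annual_rate, years):
    """Level monthly payment that exhausts `balance` over `years`
    at a fixed rate (a simplified fixed-term payout; a true life
    annuity would also price in mortality and expense loadings)."""
    r = annual_rate / 12
    n = years * 12
    return balance * r / (1 - (1 + r) ** -n)

# Illustrative: a $2,000 account versus a typical $100,000 purchase,
# both paid out over 20 years at a hypothetical 5 percent rate.
small = monthly_annuity_payment(2_000, 0.05, 20)
large = monthly_annuity_payment(100_000, 0.05, 20)

print(f"$2,000 account:   ${small:.2f} per month")
print(f"$100,000 account: ${large:.2f} per month")
```

Under these assumptions the $2,000 account yields a payment on the order of $13 a month, so the fixed cost of issuing each monthly check looms large relative to the payment itself.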
Although the industry could absorb a significant number of small accounts, industry officials said that providing annuities that small could be uneconomical for the industry because the cost of issuing a monthly check, and other administrative costs, would be prohibitive. Although the financial effects of individual accounts are an important consideration, a related but somewhat separate issue is the potential for individual accounts to increase or decrease national saving. Along with borrowing from abroad, national saving provides the resources for private investment in plant and equipment. The primary way in which a movement to individual accounts could change the overall capacity of the economy to produce goods and services would be if individual accounts were to lead to a change in the overall level of national saving. The extent to which individual accounts affect national saving depends on how they are financed (existing payroll tax, general revenues), which determines the effect on government saving; how private savings—the savings of households and businesses—respond to an individual account system; the structure of the individual account system (mandatory or voluntary); and whether pre-retirement distributions or loans are limited or prohibited to ensure that retirement income is preserved. One important determinant of the effect of individual accounts on national saving is the funding source. There are several possible funding sources, although most involve a movement of funds from or through the federal government, and each has its own effects on the federal government’s portion of national saving. For some funding sources these savings effects are clearer than others. As previously stated, the funds can come from (1) within the current Social Security system, i.e., the surplus or current cash flows; (2) a change in the system resulting from increased payroll taxes or reduced benefits; or (3) outside the system using a general fund surplus or general revenues. 
Using either the Social Security surplus or more generally the current Social Security cash flow is likely to reduce government saving. If part of the cash flow is diverted to individual accounts but there is no change in the benefits paid or the taxes collected, the lost cash flow will either result in a smaller addition to the surplus or be replaced by borrowing. In either case the result is a reduction in the measured government surplus—the sum of the Social Security surplus and the general fund surplus—or an increase in the deficit. From the government’s perspective, its saving has gone down to provide the resources for increased personal savings through individual accounts. This is a case of a carve-out from Social Security. If the resources for individual accounts are financed by additional Social Security taxes or reduced benefits instead, there will be no direct effect on government savings. The increased outlays for individual accounts will be offset by higher government revenues or lower government benefit payments. In the absence of other changes in Social Security cash flows, government savings remain constant, and any increase in private saving would be an increase in national saving. This is a case of an add-on to both Social Security and to the overall government budget. The most complicated case involves the use of funds that are outside of the Social Security system but part of the overall government budget. There are proposals to use the overall budget surplus or general government revenues as a source of funds for individual accounts. Although on its face this appears to reduce government savings by the amount diverted, the actual effect on government savings depends on what would have been done with the surplus or revenue if it had not been used to finance individual accounts. 
For example, if the resources would have been used to finance additional government spending, and the diversion of the funds to individual accounts means that such spending is not undertaken, government saving would not be reduced by individual accounts. In this case, any increase in private saving would be an increase in national saving. Similarly, if the resources would have been used to finance a tax cut, then diverting funds to individual accounts does not directly reduce government savings if the tax cut is not undertaken. In the case of a tax cut, national saving will go up if individual accounts generate more private saving than the tax cut would have. If the funds would have been used to pay down debt, the direct effect of diverting those resources to individual accounts would be to reduce government saving. The full effect on national saving depends on the extent to which individuals adjust their own savings behavior. If they do not adjust, national saving is on balance unaffected. To the extent individuals or businesses reduce their saving, national saving will fall. The effects of various individual account proposals on national saving depend not only on how the proposals affect government savings but also on how private savings behavior will respond to such an approach. Regardless of the financing source, the effect of individual accounts will be to raise, at least to some extent, the level of personal or household saving unless households fully anticipate and offset the increase through a reduction in their own saving. For example, a carve-out from the existing Social Security cash flow would provide funding for individual accounts for everyone (under a mandatory approach) or for those who wished to participate (under a voluntary approach). Such a carve-out is likely to reduce government saving and raise private saving by an equivalent amount in the absence of any behavioral effects. 
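The carve-out accounting described above can be made concrete with a small calculation; the $100 billion carve-out and the household offset rate are hypothetical:

```python
def national_saving_change(carve_out, household_offset_rate):
    """Change in national saving (billions of dollars) from diverting
    Social Security cash flow into individual accounts. Government
    saving falls by the full carve-out; private saving rises by the
    carve-out less whatever saving households offset elsewhere."""
    government_change = -carve_out
    private_change = carve_out * (1 - household_offset_rate)
    return government_change + private_change

# With no behavioral response, the two flows cancel exactly.
no_offset = national_saving_change(100, 0.0)
# If households cut other saving by 30 cents per dollar of accounts,
# national saving falls on net.
partial_offset = national_saving_change(100, 0.3)
```

The sketch simply restates the identity that national saving is government saving plus private saving; the behavioral offset rate is the unknown the text discusses.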
If households are forgoing current consumption by saving for their retirement, then, in response to this potential increase in future retirement benefits, they may reduce, to a greater or lesser extent and in various ways, their own savings, including retirement saving. To the extent that household responses lead to reduced personal saving, national savings as a whole would fall under a carve-out. In general, the result would be similar under any proposal that reduced government saving to fund private saving through individual accounts. This includes proposals that use general revenues that would have been saved by the government; i.e., used to reduce the deficit or retire debt outstanding. The overall level of consumption in the economy is not likely to change as a result of the movement of funds. Any significant change in the level of consumption resulting from such proposals would result from some households reducing their retirement savings to fund consumption because they now had individual accounts. The extent of these behavioral effects will depend on the structure of the program and any limitations that are placed on the use of funds in individual accounts, such as restrictions on preretirement withdrawals. If such a program is mandatory rather than voluntary, it is more likely to affect those households who currently either do not save or do not save as much as the amounts in their individual accounts. A mandatory program would increase savings for those who do not usually save, who are usually low-income people. Household behavior in response to individual accounts will depend on the extent that the household is currently saving for retirement and how the set of options available to households is changed by the presence of individual accounts. One group of households, those that are currently saving as much as they choose for retirement, given their income and wealth, would probably reduce their own saving in the presence of individual accounts. 
For those households for whom individual accounts closely resemble 401(k)s and IRAs, a shift to individual accounts might lead them to decrease their use of these accounts. They would have additional retirement income possibilities available and might choose to reduce their retirement or other saving to use for consumption in the present rather than in the future. However, unless they were target savers, i.e., savers who were trying to reach a specific retirement income goal, they might not reduce their other savings dollar for dollar with individual accounts. Therefore, we might expect some reduced saving by a significant number of households; for certain households, we might expect a substantial reduction. Under a voluntary approach, the households that are most likely to participate are those households that are currently saving but that face some constraint in terms of the type of retirement saving they can do or the amount of tax-preferred saving they are allowed. For example, someone whose employer offered only a defined benefit retirement plan or a defined contribution plan with very limited options might find that voluntary individual accounts offered a new opportunity. In addition, someone who was already contributing as much as he or she was legally allowed to tax-deferred savings would find a voluntary program attractive if it allowed an additional amount of tax-deferred saving. These and others who take advantage of a voluntary program may be more likely to reduce other forms of saving in response. Households that are currently not saving, either because they are resource constrained or because they are not forward-looking, would be forced to save some amount by a mandatory individual account system. Households in such situations may welcome the additional resources, especially if they do not come from a direct reduction in their own consumption. 
However, such households may also try to transform some of the additional resources into consumption if they are able to borrow from the accounts or otherwise tap into the accounts before retirement. To maintain retirement income adequacy and to keep savings from being dissipated, it may be necessary to prohibit or restrict borrowing or other methods of drawing down individual accounts prior to retirement. Even with such restrictions, it may not be possible to completely eliminate all options that households could use to indirectly increase consumption from individual accounts. For example, households with little or no retirement saving or other financial wealth could have wealth in some other form, such as home equity. It is conceivable that such households could borrow against that home equity as a way of turning their increased future consumption into present consumption. In addition to the effects of individual accounts on household savings there are also other potential indirect effects on private saving. For example, the incentives for employers to provide retirement benefits, either through defined benefit or defined contribution plans, could be affected by individual accounts. In addition, if less compensated workers in a defined contribution plan reduce their contributions to the plan, higher compensated workers may be required to reduce their own contributions under the antidiscrimination rules. Offsetting these tendencies to reduce saving, however, there are some economists who believe that individual accounts might encourage certain individuals to save more for retirement and thus not reduce their current savings. Such an effect is more likely to be present if there is some form of matching by the government as part of the individual account proposal. 
Others believe that to the extent that a lack of saving is based on people not taking a long enough view, the presence of individual accounts and watching them accumulate could give people a better sense of how saving small amounts can add up over time. This, plus observing how compounding works, could induce some to save who otherwise would not. National saving is more likely to be increased by some approaches to individual accounts than by others. Using sources of government funding that would otherwise have been spent rather than saved decreases the likelihood that government saving would be reduced. Proposals that are mandatory are more likely to increase private saving because a mandatory program would require that all individuals, including those who do not currently save, place some amount in an individual account. Certain prohibitions or restrictions on borrowing or other forms of preretirement distributions would also limit the ability of some households to reduce their savings in response to individual accounts. SSA commented that we needed to discuss the savings implications of the President’s proposal. This report was not intended to comment on specific reform proposals. There is a risk/return trade-off for individuals under an individual account program; instituting such a program would likely raise both the risks and the returns available to participants compared to the current system. In order to receive higher returns, individuals would have to invest in higher risk investments. The return that individuals receive would depend on both their investment choices and the performance of the market. Individuals who earn the same wages and salaries and make the same contributions to Social Security could have different retirement incomes because of the composition of their portfolios and market fluctuations. 
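The compounding effect described above can be illustrated with a short sketch; the $50 monthly contribution and 5 percent return are hypothetical:

```python
def future_value(monthly_contribution, annual_return, years):
    """Accumulated balance from steady monthly contributions, with
    returns compounded monthly (contributions made at month's end)."""
    r = annual_return / 12
    balance = 0.0
    for _ in range(years * 12):
        balance = balance * (1 + r) + monthly_contribution
    return balance

# $50 a month over a 40-year career at a hypothetical 5 percent return.
balance = future_value(50, 0.05, 40)
contributed = 50 * 12 * 40

print(f"Contributed ${contributed:,}; accumulated about ${balance:,.0f}")
```

Under these assumptions, roughly $24,000 of contributions grows to more than three times that amount, which is the sense in which small amounts "add up over time."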
As with any investment program, diversification and asset allocation could reduce the risks while still allowing an individual to earn potentially higher returns. Most advocates of individual accounts state that the expected return on investments under an individual account program would be much higher for individuals than the return under the current Social Security program. Proponents of individual accounts usually point out that equities have historically yielded substantially higher returns than U.S. Treasuries, and they expect this trend to continue. Others are skeptical about the claims for a continuation of such a high expected return on equities. They state that history may not be a good predictor of the future and that the expected premium generated by investing in equities has steadily been declining. Furthermore, they state that even if expected equity returns are higher than other investments, equity returns are risky. Thus, in order to determine what returns individuals might expect to receive on their individual account investments, the riskiness of the investment should be taken into account. Adjusting returns to include risks is important, but there are many ways to do this, and no clearly best way. Lastly, comparing the implicit rate of return that individuals receive on their Social Security contributions to expected rates of return on market investments may not be an appropriate comparison for measuring whether individuals will fare better under an individual account system. Such comparisons do not include all the costs implied by a program of individual accounts. In particular, the returns individuals would effectively enjoy under individual accounts would depend on how the costs of the current system are paid off. Rates of return would also depend on how administrative and annuity costs affect actual retirement incomes. 
An individual account program would offer individuals the opportunity to earn market returns that are higher than the implicit returns to payroll under the current Social Security program. However, investing in private sector assets through individual accounts involves a clear trade-off: greater return but more risk or more variability in future rates of return. Under the current Social Security program, risks are borne collectively by the government. Moving to an individual account program would mean that individuals reap the rewards of their own investments, but they also incur risk—not only about future returns, but also the possibility of losing money and even having inadequate income for retirement. However, holding assets for the long term, diversification, and the proper asset allocation can mitigate certain risks and improve an individual’s risk/return trade-off. A trade-off exists between risk and return in investments. If an individual is willing to consider the possibility of taking on some risk, there is the potential reward of higher expected returns. The capital markets offer a wide variety of investment opportunities with widely varying rates of return, which reflect variations in the riskiness of those investments. For instance, Treasury Bills are considered to be relatively risk free because they have almost no default risk and very little price risk. Alternatively, equities are considered to be relatively risky because the rate of return is uncertain. Because debt holders are paid out of company income before stockholders, equity returns are more variable than bonds. Overall, annual returns on equities are more volatile than returns on corporate bonds or Treasuries. On a long-term average basis, the market compensates for this greater risk by offering higher average returns on equities than on less risky investments. 
Thus, among the three types of investments, corporate equities are the riskiest investments but pay the highest returns, followed by corporate debt and then Treasuries. However, holding riskier investments such as equities over long periods of time can substantially diminish the risk of such investments. The degree of risk and the size of potentially higher returns with individual accounts depend on the equities chosen as well as the performance of the market. A stock’s value is tied to the expected performance of the issuing company. If the company does well, investing in individual equities could be very lucrative for investors. However, if the company does poorly, investing in individual equities could result in low returns or losses to the investor. Many financial analysts conduct intensive research to try to pick the best stocks. Choosing the right stock, however, can be largely a matter of chance, since stock prices tend to follow a “random walk.” Individuals may mitigate the risk of holding equities and bonds by diversifying their portfolios and allocating their investments to adjust their risk exposure and to reflect their own risk tolerance and circumstances. Ultimately, the composition of an individual portfolio, along with the performance of the market, determines the return individuals receive and the risk they bear. In constructing a portfolio investors combine equities and bonds and other “securities” in such a way as to meet their preferences and needs, especially their tolerance for risk. Individuals manage their portfolios by monitoring the performance of the portfolios and evaluating them compared to their preferences and needs. Many people have been managing portfolios for years. There are, however, many others who either do not have portfolios or do not consider what they have as a portfolio. 
With individual accounts, all individuals would eventually have to manage their portfolios as they start to own various investments, especially if they have options over individual securities or types of securities. A well-diversified portfolio could help to diminish risk without lowering the return, thereby improving the risk/return trade-off. For instance, a properly selected combination of risky assets can have a lower risk than any of its individual assets because the risk is spread out among different assets allowing for gains in some assets to offset losses in others. Such portfolios could provide higher average returns over the long term than a single asset with equal risk. Furthermore, diversifying an equity portfolio across companies and industries reduces both default and concentration risk and reduces the likelihood that a portfolio’s return will vary widely from the expected market return. In order to quantify the diversification of a portfolio, concepts like correlation and covariance are used to measure how much the returns on assets move in tandem with one another. For instance, if annual returns on different investments are not very correlated, their risks can offset each other even though they still individually earn higher average returns. Such techniques, however, are very sophisticated, require substantial data analysis, and would require the help of professional advisors for the average investor. However, there are ways for individuals to take advantage of many of the benefits of diversification without needing to calculate correlation and covariance measures. Indexing is one way to broadly diversify an equity portfolio and to match the approximate market return. Typically, investing in broad-based stock indexes such as the Standard & Poor’s 500 index—which represents about two-thirds of the value of the U.S. stock market—diversifies an individual’s portfolio by reducing the likelihood of concentrating investments in specific companies. 
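The risk-reducing role of low correlation described above can be seen in the standard two-asset portfolio variance formula; the 20 percent volatilities and 50/50 weights below are illustrative:

```python
def portfolio_volatility(w1, sigma1, sigma2, correlation):
    """Standard deviation of a two-asset portfolio with weights w1 and
    (1 - w1), using the variance formula
    w1^2*s1^2 + w2^2*s2^2 + 2*w1*w2*rho*s1*s2."""
    w2 = 1 - w1
    variance = (w1 ** 2 * sigma1 ** 2 + w2 ** 2 * sigma2 ** 2
                + 2 * w1 * w2 * correlation * sigma1 * sigma2)
    return variance ** 0.5

# Two assets, each with 20 percent volatility, held 50/50.
perfectly_correlated = portfolio_volatility(0.5, 0.20, 0.20, 1.0)
uncorrelated = portfolio_volatility(0.5, 0.20, 0.20, 0.0)
# With correlation 1 there is no diversification benefit; with
# correlation 0, the portfolio is less risky than either asset alone.
```

This is the mechanism by which gains in some assets offset losses in others: the lower the correlation, the more the combined portfolio's risk falls below the risk of its pieces.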
Such investments also tend to reduce turnover and lower administrative costs because they do not involve as much research or expensive investment advice. A diversified stock portfolio, however, does not protect against the risk of a general stock market downturn. One way to mitigate U.S. stock market risk is to diversify into international markets. An investor can also shield against general stock market risk by diversifying into other types of assets, such as corporate bonds. To minimize exposure to short-term stock market fluctuations, an investor can hold less risky, albeit lower yielding, assets to cover liquidity needs in the short run. Asset allocation provides another approach to portfolio diversification. For example, percentages can be allocated to equities (including indexes), bonds, and Treasuries. These allocations will generally reflect preferences for risk as well as an individual’s life-cycle phase. Those with a higher tolerance for risk and those who are younger would generally invest more in equities. Those in later life-cycle phases might invest more in bonds or Treasuries. The primary risk that individuals would face with diversified or indexed individual account investments is “market risk,” the possibility of financial loss caused by adverse market movements. When the stock market drops, prices of some equities fall and can stay depressed for a prolonged period of time. Although a long investment time horizon provides the individual more time to recover from short-term fluctuations, an individual also would have more time to encounter a prolonged stock market downturn. Thus, although long periods of time can help mitigate the effects of market risk, the risk does not disappear over time. Under most individual account programs, individuals would bear much if not all of the market risk. 
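The life-cycle allocation described above can be sketched with a simple age-based rule; the "110 minus age" heuristic here is a common rule of thumb used only for illustration, not a recommendation from this report:

```python
def allocation_by_age(age):
    """Illustrative life-cycle allocation: equity share from the
    '110 minus age' rule of thumb, with the remainder split between
    corporate bonds and Treasuries."""
    equities = max(0, min(100, 110 - age))
    remainder = 100 - equities
    bonds = remainder // 2
    treasuries = remainder - bonds
    return {"equities": equities, "bonds": bonds, "treasuries": treasuries}

younger_worker = allocation_by_age(30)  # weighted toward equities
older_worker = allocation_by_age(65)    # shifted toward bonds/Treasuries
```

Any such rule simply encodes the pattern in the text: more equities for those with longer horizons or higher risk tolerance, more bonds and Treasuries in later life-cycle phases.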
Although market risk would not increase with the introduction of an individual account program, more people would be exposed to it under an individual account program than are under the current Social Security system. Some individuals would do very well under such an individual account program, but others may not do as well and could experience a significant drop in their expected retirement income compared to others in the same age group or to the current Social Security program. Furthermore, those who are reluctant to invest in the stock market may not benefit from the potentially higher returns of equity investing. Thus, the investment choices individuals make, as well as the performance of the market, would determine the return they would receive under an individual account program. Individuals who retire at the same time may receive different pay-outs from individual account investments because of the choices they have made. Although some individuals could make the same choices, individuals are more likely to make different choices. In part, differences may come about due to luck; other differences may be more systematic. For instance, higher income people may be willing to take on more risk— and possibly earn higher returns—than lower income people. For this reason, higher income individuals could earn higher rates of return than lower income individuals under an individual account program, which is not the case under the current Social Security program. Many programs also provide for a default option for those who do not wish to take an active part in investing in individual accounts. One type of default option would provide investments in Treasuries with very low risk and a low return. Others could provide an asset allocation, possibly age related, with more equities included for younger workers and more Treasuries for older workers. Returns could vary across cohorts as well under an individual account program. 
Even if some cohorts made the same choices, given the volatility of the stock market, the returns could vary substantially across different time periods and affect cohorts differently. For instance, even if the market experienced no dramatic or long-lasting downturns, the market will create “winners” and “losers” depending on when and how individuals invest their individual account investments and when they liquidate their holdings. As long as workers are aware of and accept the idea that returns may vary across individuals as well as cohorts, there will probably not be calls to fix “unfair benefit outcomes.” However, if large differences in outcomes become commonplace, many participants could become dissatisfied with the program and demand payment from the government to make up for losses they incur, or even for substantial differences in outcomes. For instance, those who have incurred losses may expect the government to mitigate those losses when they do not receive the return they believe they were led to expect. Furthermore, individual accounts are at least in part an attempt to finance the unfunded liability with the excess returns of equities over nonmarketable Treasuries. To the extent that individuals receive low or even negative returns over time, individual account investments could actually lead to an increase in the unfunded liability of the current Social Security program. The expected return from investments of individual accounts is likely to be higher than the average implicit rate of return of the current system, but it is unlikely to be as high as many advocates presume. Advocates and opponents of individual accounts have estimated what the likely market return would be for an individual’s investments under an individual account program. When discussing equity returns, advocates often point to the fact that equities have historically yielded higher returns than Treasuries. 
They expect returns on equities to continue to be higher than Treasuries and to boost individual returns on individual account investments. Other economists are skeptical that the higher returns presumed under an individual account program will be realized. They state that history may not be a good predictor of the future. Others state that even if expected equity returns are higher than other investments, equity returns are risky. For instance, the average historical return reveals nothing about how variable that return has been from year to year. Thus, in an estimation of an expected return to investments of individual accounts, the riskiness of the investment should be taken into account. Estimating expected returns without mention of the risk and costs of the investments will overstate the benefits of investing in marketable securities because the return on marketable securities varies substantially with the riskiness of those investments. Advocates of individual accounts have stated that individuals would receive higher returns by investing in the stock market than they receive under the current Social Security program. Although comparing investment returns with the rate of return paid by Social Security is always problematic, advocates of individual accounts point out that the rate of return on equities has been significantly higher than other rates of return. For instance, compounded annual rates of return on equities have averaged about 7 percent per year since 1900 and 6 percent per year since 1957. Alternatively, the compounded annual average return on Treasuries has been between 1 and 2 percent per year on an inflation-adjusted basis, and long-term corporate bonds have averaged 2 percent. The capital markets generally offer higher potential rates of return on riskier investments such as equities. 
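The "compounded annual" figures cited above are geometric means, which fall below simple averages when returns are volatile; the return series below is hypothetical:

```python
# A hypothetical series of annual returns.
returns = [0.30, -0.10, 0.20, -0.05, 0.15]

# Simple (arithmetic) average of the annual returns.
arithmetic_mean = sum(returns) / len(returns)

# Compounded (geometric) average: the constant annual return that
# would produce the same cumulative growth over the period.
growth = 1.0
for r in returns:
    growth *= 1 + r
geometric_mean = growth ** (1 / len(returns)) - 1
# Volatility drags the compounded return below the simple average.
```

The gap between the two averages is one way the year-to-year variability of returns, which a single long-run average obscures, shows up in what an investor actually earns.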
Figure 3.1 shows the annual returns of the Standard & Poor’s (S&P) 500 Index, which is a measure of the performance of the stocks of 500 large companies traded on the U.S. stock exchange. Actual nominal (non-inflation-adjusted) returns for large company stocks varied widely from the annualized average return over long periods and have ranged from a low of minus 26.5 percent in 1974 to a high of 52.6 percent in 1954. As can be seen in figure 3.1, returns are variable. An average return over a long period of time can obscure the reality that equity returns fluctuate substantially from year to year. There have also been years in which equities have yielded negative returns. For instance, over the past 70 years or so, equity returns were negative in nearly 1 out of every 4 years. Even taking into account the variability of returns, some analysts have suggested that historic U.S. returns may overstate future returns. They state that the equity markets in the United States have tended to outperform the equity markets in other countries. Thus, when relying on historical data as the basis for estimates of long-term market growth, if one looks not just at U.S. data but also at the historical returns of other countries, then the high historical returns to equities in the United States could be an exception rather than the rule. Historical returns are the only empirical basis with which to judge equity returns, but there is no guarantee that the future will mirror the long-run averages of the U.S. past rather than some subperiod of the U.S. market or, alternatively, the returns of foreign stock markets. In general, investors tend to be averse to risk and demand a reward for engaging in risky investments. The reward is usually in the form of a risk premium—an expected rate of return higher than that available on alternative risk-free investments. 
For instance, the historical advantage enjoyed by equity returns over the returns of other assets is what is known as the equity premium. The premium is said to exist because equities have historically earned higher rates of return than those of Treasuries to compensate for the additional risk associated with investing in equities. However, studies have shown that the equity premium has slowly declined since the 1950s. A number of studies have attempted to measure the equity premium as well as explain its size. One study found that the premium appeared to be quite high in the 1930s and 1940s and was caused by the perception of the high volatility in the stock market in the late 1920s and the early 1930s. This led investors to favor less risky securities as opposed to equities, generating a high equity premium. However, as the volatility of the stock market declined after the 1929 stock market crash, the appeal of investing in equities began to increase; and although an equity premium continues to exist, it has steadily declined. However, in the 1970s the equity premium increased somewhat from its general downward trend; this was attributed to inflation. The study concluded that decreases in the equity premium were the result of increases in expected bond rates and decreases in the expected rates of returns to equities. It has also been suggested that the shrinking premium reflects a structural change in that the economy appears less susceptible to recessions. To the extent that corporate profits fluctuate with general economic conditions, fewer downturns translate into less volatility in corporate earnings. If investors perceive that the outlook for corporate earnings is more certain and that equities may be less risky than they have been historically, equity investing might carry a lower premium and, therefore, relatively lower returns. As a result, the equity premium diminishes. 
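The premium itself is just a difference of returns. A back-of-the-envelope sketch, using only the rough historical averages cited earlier in this chapter (figures are illustrative, not current estimates):

```python
# Equity premium = expected equity return minus the risk-free return.
# The 7, 2, and 1 percent figures are the rough historical averages
# cited earlier in this chapter, used here purely for illustration.

def equity_premium(equity_return, risk_free_return):
    """Extra expected return demanded for bearing equity risk."""
    return equity_return - risk_free_return

print(f"premium over 2% Treasuries: {equity_premium(0.07, 0.02):.1%}")
print(f"premium over 1% Treasuries: {equity_premium(0.07, 0.01):.1%}")
```

With those averages, the historical premium works out to roughly 5 to 6 percentage points, which is the quantity the studies above try to measure and explain.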
It is unclear whether the equity premium will continue to decline. However, if individual accounts affect equity prices in the short run, the equity premium could decrease. For instance, if the demand for equities increases as a result of individual accounts, the prices of equities are likely to increase. This in turn lowers the expected return on equities. As the expected return on equities decreases, the equity premium decreases because the difference between the return on equities and the return on a risk-free asset such as Treasury bills would diminish. The decreasing equity premium could imply that people do not view the stock market to be as risky as they once did. One possible implication is that if people view the stock market as not very risky, and they prove to be right, they will continue to invest in it, and the equity premium is likely to continue decreasing. Alternatively, if the stock market is in fact riskier than investors believe, then investors will be surprised by underperformance and volatility over time and will begin to reduce their equity holdings, which could eventually cause the equity premium to return to values consistent with past decades. The size of the equity premium has implications for analyzing the benefits of an individual account program. The potential gain from equity investing under an individual account program depends on what future equity returns are and, in particular, on how much return might be expected for taking on additional risk. A significant part of the gain that might be generated from diversifying into equities comes from the equity premium. To the extent that the equity premium continues to decline, individuals are unlikely to receive as high a return from stock investing as they have in the past. The return that individuals are likely to receive from individual account investments would depend on what they are allowed to invest in, e.g., stocks, bonds, or indexed mutual funds, as well as on the riskiness of the asset being invested in. 
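The mechanism by which higher demand lowers expected returns can be illustrated with a simple dividend-growth (Gordon) view of expected return, which the report does not itself invoke; this is a hedged sketch with made-up numbers, not the report's analysis.

```python
# Under a simple dividend-growth view, expected return is approximately
# dividend / price + growth. A demand-driven price rise, with dividends
# and growth unchanged, mechanically lowers the expected return and hence
# the equity premium. All numbers below are hypothetical.

def expected_return(dividend, price, growth):
    """Dividend-growth approximation of the expected annual return."""
    return dividend / price + growth

before = expected_return(dividend=2.0, price=50.0, growth=0.03)  # 7.0 percent
after = expected_return(dividend=2.0, price=80.0, growth=0.03)   # 5.5 percent

print(f"expected return before the price rise: {before:.1%}")
print(f"expected return after the price rise:  {after:.1%}")
```

Against a fixed risk-free rate, the fall in expected equity return translates one for one into a smaller equity premium.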
When estimating expected returns under an individual account program, most proposals have tended to focus on equities. However, other assets may offer different returns. Corporate equities have tended to have higher market returns than other investments because they are riskier. Other investments, such as corporate bonds, have also tended to offer high yields. For instance, corporate bonds offer higher yields than Treasuries to entice investors to buy these securities, which have some risk of default. As in the case of corporate equities, investors are offered a higher reward for taking on the additional risk that the company may default. If an individual account system were to provide for mutual funds, individuals’ returns would vary depending on the type of mutual fund allowed. For instance, a government bond mutual fund may yield a lower return to investors than an equity indexed mutual fund. Overall, the capital markets offer higher market returns only by having investors take on additional risk. Thus, in estimating expected returns for individual account investments, it is important to consider not only the type of asset invested in but also the riskiness of the investment. Higher returns are possible for individuals investing through individual accounts than under the current Social Security program, but only if individuals take on more risk. Individuals should therefore be interested not only in the returns of various assets but also in the risks that have to be incurred to achieve higher returns under an individual account program. The difficulty is how to measure risk and how to adjust rates of return for risk so that investors would be able to compare the returns of various investments. Risk is often considered to be the uncertainty of future rates of return, which in turn is equated with variability. In fact, one of the underlying concepts of risk is inherent volatility or variability. 
For instance, the variability of equity prices is among the key factors that cause investors to consider the stock market risky. The price at which an individual purchases shares of a company early in the morning is not guaranteed even later in the day. Bond prices also vary due to changing interest rates and inflation. There are a number of different ways to try to measure variability or risk. All such measures give some estimate of the riskiness of investments. Classic risk measures such as the variance or the standard deviation are often used to measure the risk of an asset. However, these measures are often considered to be difficult for investors to understand and may not reflect how people perceive risk. For instance, investors do not generally take a symmetrical view of the variability of returns--downward deviations are perceived as economic risks, but upward deviations are regarded positively or as unexpected gains. Furthermore, quantifying uncertainty or risk is usually done using probability distributions. As long as the probability distribution falls symmetrically about the mean or average--what is known as a normal distribution--the variance and standard deviation are adequate measures of risk. However, to the extent that the probability distributions are asymmetrical, as is the case with the returns from a combination of securities, those measures are not as meaningful in terms of measuring risk. Other ways to measure risk include (1) the value at risk (VAR)--how much the value of a portfolio can decline with a given probability in a given time period--or (2) the beta of a security--the tendency of a security’s returns to respond to swings in the broad market. VAR is an approach used by risk managers to measure the riskiness of their portfolios. It is an estimate of the maximum amount a firm could lose on a particular portfolio a certain percent of the time over a particular period of time. 
For example, if an investor wanted to put money into a mutual fund and wanted to know the value at risk for the investment over a given time period, the investor could determine the percentage or dollar amount that the investment could lose, e.g., a 2-percent probability that the investor could lose at least $50 of a $1,000 investment over a certain period of time. VAR models construct measures of risk using the volatility of risk factors, such as interest rates or stock indexes, which is helpful for mutual funds that have a wide variety of investments. Measuring the beta is another way to measure risk. In essence, if an investor wanted to know how sensitive a particular asset’s return is to market movements, calculating the beta would do so. Beta measures the amount that investors expect the equity price to change for each additional 1-percent change in the market. The lower the beta, the less susceptible the stock’s return is to market movements. The higher the beta, the more susceptible the stock’s return is to market movements. Thus, the beta would measure the risk that a particular stock contributes to an individual’s portfolio. As previously stated, estimating a return on investments without taking into account the riskiness of the investment is likely to overstate the benefit of investing in that asset. Adjusting returns to account for risk is important because risk-adjusted returns are likely to be lower than unadjusted returns but more comparable across asset classes. There are different ways to adjust returns for risk, but there is no clear best way to do so. The appropriate risk-adjusted measurements depend on what is being evaluated. For instance, in terms of evaluating the returns of mutual funds, various risk-adjusted performance measures could be used. One measure is the Sharpe Ratio, which measures the reward-to-volatility ratio and is the most commonly used measure for determining the risk-adjusted performance of mutual funds. 
A high Sharpe Ratio means that a mutual fund delivers a high return for the level of volatility of the fund’s investments. Thus, if individuals were trying to determine the mutual fund that had the best combination of return for risk, they would choose the fund that had the highest Sharpe Ratio. An alternative to the Sharpe Ratio is the Modigliani Measure, which measures a fund’s performance relative to the market. The measure uses a broad-based market index, such as the S&P 500, as a benchmark for risk comparison. In essence, the measure is equivalent to the return a mutual fund would achieve if it had the same risk as a market index. Another measure is one calculated by Morningstar, Incorporated. Unlike the Sharpe Ratio, which compares the risk-adjusted performance of any two mutual funds, Morningstar measures the risk-adjusted performance of mutual funds within the same asset class. It assigns ratings to mutual funds on the basis of the risk-adjusted return and risk of a mutual fund. Thus, if individuals wanted to know how various mutual funds did within their asset groups, they would look at the Morningstar rating. There are other risk-adjusted measures that are used. However, there is no clear best way to adjust a return for risk, and there is no one risk-adjusted measure that everyone agrees is the correct measure. Many of the measures are complicated and may require more sophistication to understand than could be expected of individual account investors. It should be noted, however, that although risk-adjusted rates of return are the appropriate measure for individual account investments, an investor’s entire portfolio has a different risk than that of its individual components. Thus, risk-adjusted returns depend fundamentally on how portfolios are managed. 
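The measures surveyed in this section, from the standard deviation through VAR, beta, and the Sharpe and Modigliani measures, can be sketched in a few lines of code. The return series below are made up for illustration (not market data), the risk-free rate is an assumed Treasury bill yield, and the VAR shown is a simple empirical percentile rather than the risk-factor models the text describes.

```python
import statistics

# Hypothetical annual returns, for illustration only (not market data).
fund_returns   = [0.10, 0.04, 0.15, -0.02, 0.08, 0.12, -0.06, 0.09]
market_returns = [0.12, 0.01, 0.20, -0.06, 0.10, 0.15, -0.09, 0.11]
risk_free_rate = 0.03  # assumed Treasury bill yield

# Classic symmetric risk measure: the standard deviation of returns.
std_dev = statistics.stdev(fund_returns)

def historical_var(returns, confidence=0.95, investment=1000.0):
    """Dollar loss not expected to be exceeded with `confidence` probability,
    read off the empirical distribution of past returns."""
    worst_first = sorted(returns)
    cutoff = worst_first[int((1 - confidence) * len(returns))]
    return -cutoff * investment

def beta(asset, market):
    """Tendency of the asset's return to respond to broad-market swings:
    covariance with the market divided by the market's variance."""
    mean_a, mean_m = statistics.mean(asset), statistics.mean(market)
    cov = sum((a - mean_a) * (m - mean_m)
              for a, m in zip(asset, market)) / (len(asset) - 1)
    return cov / statistics.variance(market)

def sharpe_ratio(returns, rf):
    """Reward-to-volatility: excess return per unit of standard deviation."""
    return (statistics.mean(returns) - rf) / statistics.stdev(returns)

def modigliani_measure(fund, market, rf):
    """Return the fund would earn if scaled to the market's volatility
    (risk-free rate plus the fund's Sharpe Ratio times market volatility)."""
    return rf + sharpe_ratio(fund, rf) * statistics.stdev(market)

print(f"standard deviation:        {std_dev:.3f}")
print(f"95% VAR on a $1,000 stake: ${historical_var(fund_returns):.0f}")
print(f"beta against the index:    {beta(fund_returns, market_returns):.2f}")
print(f"Sharpe Ratio:              {sharpe_ratio(fund_returns, risk_free_rate):.2f}")
print(f"Modigliani Measure:        {modigliani_measure(fund_returns, market_returns, risk_free_rate):.3f}")
```

With these made-up series the fund has a beta below 1 (less susceptible to market swings than the index), and the Modigliani Measure restates its Sharpe Ratio as a return at market-level volatility, which is what makes the two funds-versus-market comparisons described above possible.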
Comparing rates of return on Social Security and private market investments has frequently been discussed in evaluating options for reforming Social Security, but comparing the two does not capture all the relevant costs and benefits that reform proposals imply. Such comparisons often do not factor in the costs of disability and survivors insurance when determining a rate of return on Social Security contributions for retirement. Individual accounts would generally increase the degree to which retirement benefits are funded in advance. Today’s pay-as-you-go Social Security program largely funds current benefits from current contributions, but those contributions also entitle workers to future benefits. The amount necessary to pay the benefits already accrued by current workers and current beneficiaries is roughly $9 trillion. Any changes that would create individual accounts would require revenues both to deposit in the new accounts for future benefits and to pay for existing benefit promises. Rate of return estimates for such a program should reflect all the contributions and benefits implied by the whole reform package, including the costs of making the transition. Administrative and annuity costs could also affect actual retirement incomes. SSA commented that we needed to clarify that comparisons between the rate of return implicit in the Social Security system and those of individual accounts were problematic for many reasons including the fact that Social Security provides survivors and disability insurance. We have further clarified issues regarding the rate of return comparisons and have referred to our forthcoming report that provides a more detailed discussion on comparing the rate of return implicit in the Social Security system with those of market investments. Under many of the individual account programs that have been proposed, individual accounts to varying extents would be managed by participants themselves. 
To operate fairly and efficiently, such a system would have to provide participants with information adequate for their decisionmaking and to protect against misinformation that could impair that process. Existing SEC disclosure and antifraud rules and related doctrines provide for the disclosure of information that is material to an investment decision. However, such disclosure alone would not enable participants in an individual account program to understand how best to use such information for purposes of their retirement investment decisions. To provide participants with a clear understanding of the purpose and structure of an individual account program, an enhanced educational program would be necessary. Such an enhanced and broad-based educational effort would have to be undertaken in order to provide individuals with information they need and can readily understand, as well as with tools that can help improve both the decisionmaking process and awareness of the consequences of those decisions. Individuals would need education on the benefits of saving in general, the relative risk-return characteristics of particular investments, and how different distribution options can affect their retirement income stream. If a wide variety of choices is offered to individuals so that they could potentially choose less diversified investments, such as individual equities, a more broad-based educational program would be necessary. The wider the variety of choices, and thus the more potential risks, offered to individuals under an individual account program, especially a mandatory program, the more broad-based the education will need to be. If fewer, well-diversified choices are provided under an individual account program, the educational effort could be targeted more to the purpose for investing and the potential long-term consequences. 
It is also likely that some sort of provision, such as a default option--either a default to the defined benefit part of Social Security (staying in the current Social Security program) or to a mandatory allocation--may be needed for those individuals who, regardless of the education provided, will choose not to make investment choices. Existing disclosure rules require that material information be provided about a particular instrument and its issuer. Such disclosure would be essential to an individual account program, with some rules having more significance than others, depending on the investment choices offered. For example, if participants were allowed to acquire corporate securities such as stocks and bonds, the disclosure and reporting requirements of the Securities Act of 1933 and the Securities Exchange Act of 1934, such as those applicable to the governance, activities, and financial status of the issuer, would be particularly important to participants choosing such instruments. If investment choices were limited to mutual funds, disclosure about the funds would have primary importance, and information about the issuers of the securities owned by the funds would be relatively less significant for participants. In addition, the Employee Retirement Income Security Act of 1974 (ERISA) requires disclosures in connection with pension plans covered by Title I of ERISA. If products offered by banks and insurance companies were permitted, special disclosure rules would apply. The Securities Act of 1933 and the Securities Exchange Act of 1934 generally require disclosure and reporting of detailed information about an issuer of securities, such as its management, activities, and financial status. The Securities Act of 1933 (1933 Act) primarily focuses upon the disclosure of information in connection with a distribution of securities; the Securities Exchange Act of 1934 (1934 Act) concentrates upon the disclosure of information in connection with trading, transactions, and sales involving securities. 
The 1933 Act requires the disclosure of information intended to afford potential investors an adequate basis upon which to decide whether or not to purchase a new security and to prevent fraudulent conduct in connection with the offering. This disclosure generally takes place through a registration statement filed with SEC (and made available to the public, except for confidential information) and a related prospectus. Both documents contain detailed factual information about the issuer and the offering, including statements about the specifics of the offering as well as detailed information about the management, activities, and financial status of the issuer. The 1934 Act, among other things, contains extensive reporting and disclosure requirements for issuers of securities registered under the act. Issuers must file current, annual, and quarterly reports with SEC, and the annual report must be distributed to security holders. The 1934 Act also governs brokers, dealers, and others involved in selling or purchasing securities. The act contains a broad prohibition against fraud in connection with securities transactions that frequently has served as a basis for disclosing to customers an abundance of details about a particular instrument or transaction. ERISA and DOL regulations require the administrator of a plan covered by Title I of ERISA to file certain information about the plan with DOL and distribute it to plan participants and beneficiaries receiving benefits. One of the principal disclosure documents, the summary plan description (SPD), must include information specified in the regulations, which includes details about the structure, administration, and operation of the plan as well as the participant’s or beneficiary’s benefits and rights under the plan. 
The SPD must be written in a manner “calculated to be understood by the average plan participant” and must be “sufficiently comprehensive to apprise the plan’s participants and beneficiaries of their rights and obligations under the plan.” Moreover, in fulfilling these requirements the plan administrator is to take into account “such factors as the level of comprehension and education of typical participants in the plan and the complexity of the plan.” In addition to general reporting and disclosure requirements, DOL regulations contain special disclosure rules for participant-directed account plans. A participant-directed account plan is one that permits participants and beneficiaries to direct the investment of assets in their individual accounts. The special rules arise in connection with the obligations of a fiduciary to a plan that permits such accounts. Under DOL regulations, a fiduciary can avoid liability for any loss arising from the participant’s exercise of control over account assets, provided that the participant has the opportunity to exercise control over the account assets and may choose, from a broad range of investment alternatives, the manner in which assets are invested. The regulations further provide that a participant has the opportunity to exercise control only if, among other things, the participant is provided or can obtain information sufficient to make informed investment decisions. This information includes (a) a description of investment alternatives and associated descriptions of the investment objective and the risk and return characteristics of each such alternative; (b) information about designated investment managers; (c) an explanation of when and how to make investment instructions and any restrictions on when a participant can change investments; and (d) a statement of the fees that may be charged to an account when a participant changes investment options or buys and sells investments. 
The information that the 1933 and 1934 Acts require issuers to disclose pertains to details about the issuers of securities and the securities themselves. Such information is significant to a person investing in a specific issuer. For the purchaser of shares in an investment company, such as a mutual fund, which is by far the most prevalent form of investment company, information about the company itself, rather than about individual issuers, is most significant. Mutual funds are subject to the Investment Company Act of 1940, which deals with the registration, formation, and operation of investment companies, as well as to provisions of the 1933 and 1934 Acts governing disclosure and prohibiting fraud. Disclosure about the fund, such as information concerning its investment strategies and its management, is provided in the registration statement filed with SEC; the prospectus or an alternative, less detailed document known as a “profile”; and periodic reports filed with the Commission and distributed to shareholders. The expansion of products offered by depository institutions (primarily federally insured banks and thrifts and their subsidiaries or affiliates) and insurance companies carries with it the potential for confusion about the nature and risk of investment products offered by such institutions. For example, bank sales of nondeposit instruments, such as mutual fund shares and variable annuities, could lead an investor to conclude that such instruments are federally insured bank products. Investment products sold by insurance companies, such as certain variable annuities and equity-indexed agreements, might be viewed as traditional insurance products, under which the insurer assumes the payment risk. If such products are securities, they are subject to the requirements of federal and state securities laws. 
The activities of institutions in connection with the products would be subject to regulation under the securities laws as well as regulation by their supervising agencies. The federal bank regulators have promulgated rules, guidelines, and policies containing standards for disclosure in connection with a banking institution’s involvement in sales of nondeposit instruments such as securities. These regulators issued an Interagency Statement on Retail Sales of Non-Deposit Investment Products (“Interagency Statement”), together with subsequent statements, that focuses on issues specifically pertaining to the retail sale of investment products to customers on depository institution premises. Among other things, the standards seek to prevent customer confusion over whether such products are FDIC-insured, primarily through disclosure and separation of sales of investment products from other banking activities. New products being offered by insurance companies can also confuse investors about whether such a product is insurance (the insurer accepts the repayment risk) or a security (the purchaser of the product faces some or all of the repayment risk). States typically regulate disclosure about insurance products by prohibiting unfair, deceptive, or misleading statements about a product. However, to the extent such instruments are securities, their purchase and sale are subject to federal and state securities laws. To address concerns about the effectiveness of disclosures regarding investing, particularly with respect to mutual funds, SEC and some states have established programs to provide for disclosing information to investors in a more understandable way. SEC’s “plain English” program is an example. The Commission instituted the program because much of the disclosure provided in prospectuses and other documents often is complex, legalistic, and too specialized for investors to understand. 
Under this program, the Commission revised its rule for the presentation of information in a prospectus to require that the prospectus comply with the plain English writing principles listed in the regulation. SEC also amended its Form N-1A, the registration form used by mutual funds, to provide for the use of plain English principles and simplified descriptions of information essential to an investor’s evaluation of a fund. In March 1998, SEC adopted a rule permitting mutual funds to offer investors a new disclosure document called a profile. The document summarizes key information about the fund, including its investment strategies, risks, performance, and fees, in a concise, standardized format. A fund offering a profile can give investors a choice about the amount of information they wish to consider before making a decision about investing in the fund. Investors have the option of purchasing the fund’s shares on the basis of the profile, in which case they are to receive the fund’s prospectus along with the purchase confirmation. Among other things, the new SEC rules are designed to reduce the complexity of information provided to mutual fund customers and the potential for confusion that sometimes accompanies such information. They are an attempt to make the disclosure of material information more useful to those who invest in mutual fund securities. Whether an individual account program is mandatory or voluntary, giving millions of working Americans the responsibility for investing part of their Social Security payroll taxes on their own requires enhanced education. Social Security has long provided a safety net for millions of people in that it has been the foundation of the nation’s retirement income system, providing income for millions of Americans. 
Introducing an individual account program would change the nature of the current Social Security program and would require increased education if people are to understand the individual account program and what may be required of them. Although education would be necessary regardless of whether the program was voluntary or mandatory, the government would have a special responsibility under a mandatory program to provide individuals with the basic investment knowledge that they would need in order to make informed investment decisions affecting their retirement. The extent to which enhanced education would be necessary would depend upon the available investment choices and the fees and expenses associated with an individual account program. An individual account program that offers many investment choices—especially one that is mandatory—would likely require a substantial amount of education because the wider the options provided to an individual, the greater the chances are that the individual could lose money. If fewer, well-diversified options are offered under an individual account program, the individual has fewer risk factors to consider and the education could be more targeted. It would also be important to educate individuals about how to interpret the fees associated with individual account investments and how fees would affect their account balances. The Social Security program includes workers from all levels of income, both those who currently invest in equity and bond markets and those who do not. It is unlikely that a “one size fits all” educational effort would be appropriate for an individual account program. Because a mandatory individual account program would require everyone to participate, including those who do not currently make investment decisions, educational efforts would be especially crucial and would need to reach all individuals. Large segments of the working population do not currently make investment decisions for various reasons. 
For instance, some people do not believe that they have enough money to save, or at least to save in any vehicle other than a bank account. Others do not appear to understand the benefits of saving and investing or the necessity of doing so for retirement. Whatever the reason, millions of people have never made investment decisions. Investor education is especially important for individuals who are unfamiliar with making investment choices, including low-income and less well-educated individuals who may have limited investing experience. Thus, one of the primary areas of enhanced education under an individual account program would be educating those who do not know the basics of saving or diversification, especially if the individual account program is mandatory. Those individuals and households who do not currently make investment decisions but rely on Social Security as their primary source of retirement income are likely to be the ones most affected by a mandatory individual account program and thus most in need of education. Congress and various agencies and organizations have instituted programs to educate people about the benefits of saving and investing. In the Savings Are Vital to Everyone’s Retirement Act of 1997, Congress mandated an education and outreach program to promote retirement income savings by the public. The act also required the Secretary of Labor, in consultation with other federal agencies selected by the President, to plan and conduct a National Summit on Retirement Savings. As part of this mandate, the act required the Secretary to bring together retirement and investment professionals, Members of Congress, state and local officials, and others to discuss how to educate the public--employers and individuals--about the importance of saving and about the tools available to enable individuals to retire and remain financially independent. 
Pursuant to this mandate, DOL sponsored the National Summit in 1998. Other efforts have been made to reach out to investors to educate them about how to protect themselves against fraud. SEC has realized that an important part of its role in combating fraud is to educate the public about what to be aware of and how to avoid being taken advantage of. If investors are adequately informed about the risks associated with potential securities frauds, then they will be less likely to fall victim to scams. SEC has implemented several programs to advise the investing public about potential frauds. For instance, SEC has issued numerous pamphlets about the types of questions investors should ask about investments and the people who sell those products. Additionally, SEC has held local “town meetings” across the United States to discuss investment risks. It also coordinates the “Facts on Savings and Investing Campaign” with federal, state, and international securities regulators. SEC officials said that in order to have a successful education program, it is necessary to determine what people do and do not know. This has entailed determining people’s levels of literacy and math knowledge in order to design a program that could provide education for individuals with various levels of investment knowledge. DOL’s Pension and Welfare Benefits Administration has several educational outreach efforts for encouraging employers to establish retirement programs and employees to save for retirement. The basic program is a joint effort with a wide range of private sector partners, including the American Savings Education Council, the Employee Benefit Research Institute, banks, insurance companies, consumer groups, retiree groups, participant rights’ groups, mutual funds, and other large companies. 
This joint effort was designed to provide very basic information to individuals and employers about the different types of savings vehicles available under the law and to encourage the private sector to provide employees with models of pension programs. The educational program tries to target groups whose pension coverage is low, including women and minorities as well as small businesses; only about one-fifth of small businesses offer pension plans to their employees. DOL has issued numerous pamphlets on what individuals should know about their pension rights and what businesses can do to start pension plans for their employees. For instance, DOL regularly uses the Small Business Administration’s newsletters to encourage members to establish pension plans and has developed a Web site that gives small businesses information on various pension plan options, depending on how much each business can afford to contribute to a pension fund. These current programs have a limited ability to reach the overall population. One clear constraint is the low level of resources, including funding, directed to investor education. Another limitation is that they are targeted to circumscribed audiences, such as companies that do not have retirement programs as opposed to individuals who do not invest. Furthermore, most efforts reach only those individuals who take it upon themselves to find out what they need to do to save more or to learn how to make better investment decisions. Thus, even with the various targeted efforts undertaken, large segments of the population are still not being reached. Numerous studies have examined how well individuals who are currently investing understand investments and the markets. 
On the basis of those studies, it is clear that among those who save through their company’s retirement programs or on their own, large percentages of the investing population do not fully understand what they are doing. For instance, one study found that a little more than a third of American workers have tried to calculate how much money they would need to retire comfortably. Another study found that 47 percent of 401(k) plan participants believe that stocks are components of a money market fund, and 55 percent of those surveyed thought that they could not lose money in government bond funds. Another study on the financial literacy of mutual fund investors found that less than half of all investors correctly understood the purpose of diversification. Further, SEC reported that over half of all Americans do not know the difference between a stock and a bond, and only 16 percent say they have a clear understanding of what an IRA is. Although individuals who currently make investment decisions are likely to have some familiarity with investing, education would also be important for them because of their increased responsibility under an individual account program. Furthermore, according to the studies cited above, there would be a real need for enhanced education about such topics as investing, risk and return, and diversification. As the Chairman of the SEC has said, there is a wide gap between financial knowledge and financial responsibilities. Closing that knowledge gap is imperative under an individual account program. Moving to an individual account program would require a thorough education effort so that everyone understands the program and how it differs from the current Social Security program. 
The government has much more responsibility for educating individuals under a mandatory program because people would effectively be forced by the government to save and to make decisions about what to do with that saving, as well as to bear the consequences of those decisions. Even with a default option for those who do not choose to participate, the government would need to explain why the option was provided and what its implications are. Many people do not understand the current Social Security program, how their contributions are measured, and how their benefits are computed, even though the program is over 60 years old. Yet millions of individuals rely on the program as their sole source of retirement income. In order to increase people’s understanding of Social Security, SSA has implemented various educational efforts. Such efforts have included providing a 1-800 number for recipients to ask questions, conducting a public education campaign, and providing educational packages to individuals. Despite these efforts, SSA officials said that people still have a hard time understanding the program. Implementing an individual account program is likely to require enhanced education not only about individual accounts but also about how an individual account program would change the nature of Social Security and what that means for the individual. At a minimum, under an individual account program, educational efforts would be needed to help people understand how individual accounts would work and how the accounts would affect their retirement income security. Many proposals do not specify what entity would be responsible for the public education program that would be needed for an individual account program. 
On the basis of the type of information experts in employee education say is needed, education about an individual account program could include the following:

- Goals of the program: individuals need to know what the goals of the program are and why they are participating.
- Responsibilities: individuals need to know what their responsibilities are under the program.
- Retirement income: individuals need to know what their retirement income needs are and how those needs would be affected under an individual account program.
- Materials: individuals need materials that convey the message of the program and what will be required of them.

The amount of education that would be necessary under an individual account program depends on the range and type of investment choices offered to individuals. There are basic issues that individuals will need to be educated about regardless of how the program is structured. Such issues include (1) the choices they have to make; (2) the consequences of those choices; (3) what the investment options are, such as stocks, bonds, and indexed mutual funds; (4) the rates of return of different investment vehicles; and (5) the risks of investment vehicles. However, as a wider variety of choice is offered to individuals, more education beyond the basics would be necessary because broader issues would need to be considered. With more variety of choice, investors would need to choose among various assets, which requires the investor to have certain skills to evaluate the risks and his or her own risk preferences. If the structure allows for an even broader variety of choices, such as real estate, the educational requirements would mount. When choices are limited to a few well-diversified choices (such as a few indexed mutual funds), many decisions are made by those managing the funds or by rules governing the fund (such as what an indexed mutual fund can invest in). 
If the investor has the option of frequently moving funds from one investment to another, the educational effort needs to include analytical tools to aid such decisions and advice about the importance of a long-term horizon. Thus, the fewer well-diversified choices offered, the less risk to the individual and the more targeted the education could be. A variety of choices may benefit people in that it offers them a wider selection, allowing them to select the option that is in line with their preferences. However, it also increases their risk in that they could potentially choose less diversified investments, such as individual equities, that could result in financial loss. Furthermore, the wider the variety of choice offered, the greater the need for people to consider other issues. For instance, because offering a wide variety of investment options is likely to promote competition among financial institutions to provide a range of investment vehicles, investors would need to be educated about fraud and how to avoid it. When Great Britain moved to an individual account program, individuals purchased unsuitable investments because of high-pressure sales tactics, which resulted in individuals losing billions of dollars. The Chairman of the SEC has stated that allowing a broad range of investment options under individual accounts provides opportunities for fraud and sales practice abuses. Thus, education about fraud becomes important. For example, an investor would need to know what to look for, what types of questions to ask, what types of advice may be biased, what the investor’s rights are, and what the law requires. When investment options are limited, the chances of fraud are reduced. Moreover, the wider the variety of choice that is offered individuals, the more they will need education about understanding the value of diversification and the possible consequences of not having a diversified portfolio. 
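The value of diversification that such education would need to convey can be illustrated with a small numeric sketch. The asset volatilities and correlation below are hypothetical assumptions chosen for illustration, not figures from this report; the point is only that a basket of imperfectly correlated holdings is less volatile than any single holding.

```python
import math

def portfolio_volatility(weights, vols, corr):
    """Standard deviation of a portfolio given asset weights, asset
    volatilities, and a single assumed pairwise correlation.
    (Illustrative textbook model, not a figure from this report.)"""
    var = 0.0
    for i, (wi, vi) in enumerate(zip(weights, vols)):
        for j, (wj, vj) in enumerate(zip(weights, vols)):
            rho = 1.0 if i == j else corr
            var += wi * wj * vi * vj * rho
    return math.sqrt(var)

# One stock at an assumed 30 percent annual volatility versus an
# equal-weighted basket of 10 similar stocks with pairwise correlation 0.3.
single = portfolio_volatility([1.0], [0.30], 0.3)        # ≈ 0.30 (undiversified)
basket = portfolio_volatility([0.1] * 10, [0.30] * 10, 0.3)  # ≈ 0.18 (diversified)
print(single, basket)
```

Under these assumed numbers, spreading the same money across ten imperfectly correlated stocks cuts portfolio volatility by roughly 40 percent, which is the kind of concrete comparison an education effort on diversification could use.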
If choices are limited to indexed mutual funds, less education about diversification would be needed because indexed funds are by nature diversified. Education is also necessary for understanding risks and the various returns that are likely with different investment options. With a wider variety of investment options, understanding risk and being able to manage that risk become important. It is important to explain to people that historical returns may not always be good predictors of future returns, especially when risks are ignored. As stated in chapter 3, measuring risk and comparing risk-adjusted returns can be a difficult process. Furthermore, understanding the rates of return of various options and picking the appropriate investment vehicles become more difficult as more variety is offered. Individuals would need more expertise to understand differences in the rates of return of equities, bonds, equity mutual funds, indexed funds, and so on. If the program has fewer well-diversified choices, limits would be placed on the ways that people could lose money. The educational effort could, therefore, focus more on getting individuals to be informed participants in the program. Educational issues that become relevant when individuals are offered numerous options are of less concern when they are offered fewer, well-diversified options. With fewer, well-diversified investment choices, the educational effort could be more targeted to the purpose of retirement savings, e.g., educating people about how much they would need to save and invest for retirement or determining their goals for retirement. Individuals also fundamentally need to understand issues such as compounding—the calculation of interest earned on a daily, quarterly, semiannual, or annual basis—and the impact of inflation on returns. 
For example, with compound interest individuals earn interest both on the money they save and on the interest that money earns; if they invested $1,000 at 3 percent interest, they could double their money in 24 years, but at 4 percent interest they could double it in 18 years. With inflation, or rising prices, the money that individuals earn on their investments would potentially be worth less and less as prices rose. In addition, seemingly small annual fees can eat away at the accumulated value. Offering fewer, more well-diversified options enables the education effort to be targeted on basic issues that would be helpful for individuals to understand in order to save for retirement. Despite current efforts to increase people’s awareness of the need to save more, many people are still not saving and making the retirement choices they need to make, effectively relying on Social Security to be their primary source of retirement income. It is unlikely that moving to individual accounts will result in active participation by all individuals. Thus, various officials have suggested that a default option be provided for those individuals who, regardless of educational effort, will not make investment choices. Default options could include a default to the defined benefit portion of Social Security (staying in the current Social Security program) or to some type of mandatory allocation. One example would be an investment vehicle in which, depending on the age of the individual, certain portions of the investment could be in equities and certain portions in bonds. The portion in bonds would increase with the age of the individual. Alternatively, the default option could be invested totally in Treasuries. As with any option, a default option with less risk is also likely to provide lower returns. 
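The doubling times cited in the compounding example above can be verified with a short calculation (a minimal sketch of annual compounding, not part of the report itself):

```python
def years_to_double(rate, principal=1_000.0):
    """Years of annual compounding needed for a balance to double
    at the given interest rate."""
    balance = principal
    years = 0
    while balance < 2 * principal:
        balance *= 1 + rate  # interest is earned on prior interest as well
        years += 1
    return years

print(years_to_double(0.03))  # 24 years at 3 percent
print(years_to_double(0.04))  # 18 years at 4 percent
```

This agrees with the familiar rule of 72, under which dividing 72 by the percentage rate approximates the doubling time (72 / 3 = 24; 72 / 4 = 18).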
Pursuant to a congressional request, GAO provided information on the issues associated with individual social security accounts, focusing on how such accounts could affect: (1) private capital and annuities markets as well as national savings; (2) potential returns and risks to individuals; and (3) the disclosure and educational efforts needed to inform the public about such a program. GAO noted that: (1) individual investment accounts could affect the capital markets in several ways; (2) as a source of funds for the accounts, most proposals use either the cash collected from social security taxes or federal general revenues; (3) as a result, the primary capital market effect is a purely financial one: borrowing in the Treasury debt market to provide funding for investment in private debt and equity markets; (4) although the annual flows are likely to be sizeable, both the private debt and equity markets should be able to absorb the inflow without significant long-term disruption; (5) there could eventually be a significant increase in the amount of new funds flowing into the annuities market; (6) however, the magnitude of annuity purchases is likely to build gradually over time as more retirees build larger balances, allowing the market sufficient time to adjust; (7) individual account proposals could also affect the level of financial resources available for private investment by increasing or decreasing national savings; (8) the extent to which individual accounts affect national savings will depend on how they are financed, the structure of the program, and any behavioral responses of businesses and individuals; (9) national savings is more likely to increase if: (a) the government funds would have been spent but instead are not; (b) the program is mandatory and prohibits pre-retirement distributions; and (c) households do not fully adjust their retirement saving; (10) to the extent that households use the opportunities offered by an individual account 
program to invest in private equities and debt rather than Treasury securities, they could increase both the returns they receive and the risks they face compared to the Social Security program; (11) although asset diversification offers mitigation against certain risks, the returns that individuals receive would depend on and vary with their investment choices and the performance of the private debt and equity markets; (12) most advocates of individual accounts state that the expected future returns on private investments would be much higher for individuals than the implicit return available under the Social Security program; (13) some argue that historical returns may not be a good predictor of future returns; and (14) to provide participants with a clear understanding of the purpose and structure of an individual account program, an enhanced educational program would be necessary.
The National Defense Authorization Act for Fiscal Year 2013 established the National Commission on the Structure of the Air Force. The act required the commission to undertake a comprehensive study of the structure of the Air Force to determine whether, and how, the structure should be modified to best fulfill current and anticipated mission requirements for the Air Force in a manner consistent with available resources. The commission was to give particular consideration to evaluating a structure that achieved certain things, including an appropriate balance between the active and reserve components of the Air Force. In January 2014, the commission submitted its report to the President and the House and Senate Armed Services Committees with 42 recommendations that varied in size, scope, and complexity. For example, one relatively straightforward recommendation was to discontinue the use of non-disclosure agreements in the corporate (budget) process. In contrast, one large and complex recommendation was to integrate the headquarters staffs of the components. In addition, the commission’s recommendations were addressed to different entities—the President (one recommendation), Congress (five recommendations), the Secretary of Defense (four recommendations), and the Secretary of the Air Force (32 recommendations). For example, one recommendation was for Congress to allow the closing or “warm basing” of some installations. Section 1055 of the Carl Levin and Howard P. “Buck” McKeon National Defense Authorization Act for Fiscal Year 2015 requires that the Secretary of the Air Force submit to the congressional defense committees a yearly report on the Air Force response to the commission’s recommendations. The Air Force submitted its first response to the commission’s recommendations in February 2015 and its second in February 2016. 
In January 2013, the Air Force created the Total Force Task Force to identify options for integrating the active and reserve components to meet current and future Air Force requirements. The Task Force transitioned into the Total Force Continuum office within the Headquarters Air Force Strategic Plans and Programs Directorate. The Total Force Continuum office is led by general officers from each component who, according to Air Force officials, manage and oversee the process for implementing the commission’s recommendations. Further, officials within the Total Force Continuum office work with other Air Force organizations to implement the commission’s recommendations. For example, according to Air Force documentation on the status of the commission’s recommendations, the team working to develop a cost model for calculating military personnel costs included representatives from the office of the Assistant Secretary of the Air Force for Financial Management and Comptroller; Air Force Reserve Command; and the office of the Director, Air Force Studies, Analyses, and Assessments. On July 3, 2014, the Air Force established the Executive Committee, which is chaired by the Air Force Assistant Vice Chief of Staff and includes senior leaders from Air Force headquarters offices and the reserve component. According to Air Force officials from the Total Force Continuum office, the Executive Committee tracks and acts on the commission’s recommendations and provides regular updates to the Secretary of the Air Force and the Chief of Staff of the Air Force. The Total Force Continuum office established a team for each commission recommendation, composed of representatives from each component. According to Air Force officials, each team is led by a Colonel, Lieutenant Colonel, or civilian equivalent and is responsible for reviewing the recommendation and proposing implementation actions. 
The teams present their proposed actions to the one-star general officers within the Total Force Continuum office. According to Air Force officials, upon the general officers’ approval, the team leader and general officers brief the Executive Committee. The Executive Committee may approve the proposed actions or may direct the team to further study or modify them. Semi-annually, the Executive Committee briefs the Secretary of the Air Force and Chief of Staff of the Air Force on the proposed implementation actions. According to Air Force officials, the Secretary and Chief of Staff may approve implementation of the proposed actions, close the recommendation, or direct the team to do more work. Figure 1 illustrates this process. In May 2015, the Air Force issued its Strategic Master Plan, which is intended, in part, to align activities across the Air Force in the areas of human capital, strategic posture, capabilities, and science and technology. The Strategic Master Plan contains annexes for each of these areas, which are intended to translate the plan’s goals and objectives into tangible actions and priorities. The Strategic Master Plan includes an overall goal of considering all components as a Total Force. The Human Capital Annex expands upon the concept of a total force and specifies goals, objectives, and timeframes for recruiting and training, retention, component integration, and other topics. Twenty-one of the commission’s recommendations are aligned with four of the six objectives in the Human Capital Annex, as part of the revised approach and as illustrated in appendix III to this report. For example, the Annex’s goals for career progression are similar to the commission’s recommendations on promotions, continuum of service, and multiple career track options. 
Also, the Annex’s goals for increasing integration among the components are similar to the commission’s recommendations for integrating headquarters’ staffs, integrating personnel systems, and making transitions between components easier. As of February 2016, two years after the commission issued its report, the Air Force had closed six recommendations—five were implemented, and the Air Force did not agree with the sixth. Thus, the Air Force still has to implement 36 of the 42 recommendations. The Air Force has developed only partial implementation plans for three of these 36 recommendations, which remain open. Air Force officials from the Total Force Continuum office explained that the Air Force experienced challenges internal and external to the service that affected implementation of the commission’s recommendations. According to Air Force officials, internal challenges included a lack of expected completion dates, difficulties in coordinating implementation efforts across offices, and interrelated recommendations. First, neither the commission nor the Air Force established time frames for completing implementation of all the commission’s recommendations. The lack of an overall implementation time frame, or time frames for most individual recommendations, may have conveyed the idea that implementation was open-ended, since there was generally no clear end date to work toward, according to Total Force Continuum office officials. Second, the Air Force experienced challenges in coordinating implementation efforts across components and directorates. According to Air Force officials, extensive coordination and cooperation across the Air Force components and Air Force headquarters’ directorates is needed to implement many commission recommendations. 
This coordination did not consistently occur because, according to Air Force officials, the team leaders who manage each recommendation—typically Colonels or Lieutenant Colonels—lacked the authority to task personnel outside their offices, which meant that cooperation and coordination were generally dependent on persuasion rather than direction. According to Air Force officials, although the Total Force Continuum office manages the process to implement the commission’s recommendations, the team leaders and members generally belong to other organizations. For example, the team working on the recommendation to develop an integrated pay and personnel system includes representatives from the office of the Deputy Chief of Staff for Manpower, Personnel and Services (team leader); the Air Force Personnel Center; the Air Force Reserve; the National Guard Bureau; and the office of the Assistant Secretary of the Air Force for Financial Management and Comptroller. In addition, implementation team roles have been an additional duty, rather than the sole duty, for these team leaders and members. Finally, successful implementation of some recommendations depends on implementation of related recommendations, according to the February 2015 Report on Recommendations of the National Commission on the Structure of the Air Force and Air Force officials. For example, a pilot program to integrate active and reserve component forces into an “Integrated Wing” needs to be established and tested before six other recommendations can be fully implemented, according to Air Force officials. The results of this pilot program will inform decisions on how to implement a range of recommendations, such as policies and procedures for filling key deputy positions and considerations for personnel awards, decorations, and promotions. In addition to these internal challenges, Air Force officials explained that they also encountered external challenges. 
Air Force officials stated that, while they can take actions to make some progress in implementing many of the commission’s recommendations, full implementation of some recommendations depends on support from external agencies or Congress. For example, one recommendation is for the President to direct the Departments of Defense and Homeland Security to develop national requirements for Homeland Security and Disaster Assistance. Also, Air Force officials have identified a number of legislative actions necessary to fully implement seven commission recommendations. For example, they said that legislative action is necessary to fully implement one recommendation to develop a pilot project for “continuum of service,” that is, the ability for personnel to transition more seamlessly among the components. To fully implement another recommendation, related to instructor pilots, the Air Force identified legislative action as necessary to permit reserve personnel and dual-status military technicians to train active duty pilots as a primary duty. The Air Force is revising its approach to manage and oversee implementation of the remaining 36 commission recommendations, and it expected to fully initiate this approach in March 2016. According to Air Force officials, the Air Force Assistant Vice Chief of Staff directed the use of a new approach that strategically groups related recommendations to facilitate management, oversight, and coordination. To do this, the Air Force aligned the commission’s recommendations with objectives in the Air Force Human Capital Annex of its Strategic Master Plan. According to Air Force officials, categorizing the recommendations under the objectives in the Human Capital Annex will help the Air Force synchronize related efforts and thereby minimize the potential for overlapping efforts. 
The Air Force also designated a General Officer and/or civilian equivalent at the Senior Executive Service level to manage and oversee implementation of recommendations in each group. The General Officer or civilian equivalent at the Senior Executive Service level will periodically report the progress of his or her group of recommendations to the governance structure for the Human Capital Annex and the Executive Committee. Recommendations that did not align with any of the objectives were grouped under the Total Force Continuum office. Half (21) of the commission’s recommendations fall within the Total Force Continuum group, including recommendations related to increasing the number of reserve component instructor pilots, fielding equipment concurrently among active and reserve component, and identifying homeland security and disaster assistance requirements. The Total Force Continuum office will periodically report the progress on the recommendations to the Executive Committee, according to Air Force officials. According to Air Force officials, the intent of this new structure is to provide the needed level of urgency, oversight to ensure accountability, and authority to provide direction across directorates and components. In addition, the Air Force intends to use implementation templates for each group of recommendations as well as each individual recommendation. The templates’ instructions explain that the structure is adapted from the structure for implementation-type plans discussed in an Air Force instruction. Upon reviewing the templates, we noted that they contain a requirement for milestones and are to specify the tasks that need to be completed to implement each recommendation. According to Executive Committee minutes and Air Force officials, recommendation milestones will be driven by, and aligned with, the Human Capital annex objectives’ milestones. 
Under the revised approach, recommendation implementation remains an additional duty for the team leaders and members. Table 1 summarizes the Air Force’s approach to implementing the commission’s recommendations before and after the Air Force began to develop its revised approach. The Air Force’s revised approach requires that tasks and milestones be developed to manage implementation of the commission’s recommendations, which is consistent with leading practices on program management and our prior work on business process reengineering. However, the revised approach does not require performance measures. Our prior work has demonstrated that using performance measures facilitates assessment of how goals are being achieved and can also identify areas for improvement, if needed. Air Force officials explained that their revised approach requires aligning actions on related recommendations and identifying tasks and milestones for each recommendation. Leading practices for program management advocate grouping related projects in a coordinated way to maximize benefits, tracking actual against planned milestones, and identifying interrelationships among projects, as well as monitoring project performance to identify any needed modifications. In addition, our prior work and leading practices have shown that an implementation plan consisting of tasks, milestones, and performance measures that contain key attributes can help organizations gauge progress toward achieving their desired results and can help leaders identify when corrective actions are needed. We determined that the Air Force has made progress incorporating leading practices for milestones and tasks, but has not incorporated—and does not yet plan to require—performance measures that include key attributes. To understand the extent to which the Air Force had previously incorporated leading practices, we reviewed implementation plans for 3 of the 36 recommendations that have not yet been implemented. 
These plans were developed before the Air Force adopted its revised management approach. We evaluated these three plans because, at the time we conducted our analysis, the Air Force had not developed any plans under its revised approach. Based on our analysis of these Air Force plans, we found that the Air Force had developed strategic milestones and partially identified tasks to achieve objectives in the 3 implementation plans it developed before adopting its revised management and oversight approach. The implementation plans for the 3 recommendations we reviewed were not presented in single documents, and the documents did not always identify milestones for all interim tasks. Our prior work suggests that, ideally, objectives and measures should be described in a single document, such as an implementation plan, that defines how results can be measured. According to our Business Process Reengineering Assessment Guide, agencies undergoing business transformations should develop a detailed implementation plan that lays out what needs to be done to achieve implementation of new processes by identifying milestones and specifying timetables for all actions so that progress can be closely monitored. We have also reported that developing and using specific milestones to guide and gauge progress toward achieving an agency’s desired results informs management of the rate of progress toward achieving goals and whether adjustments need to be made to maintain progress within given time frames. Also, our prior work on performance measurement and federal internal controls discusses using performance measures to assess performance over time. In 2012, we reported that federal agencies engaging in large projects can use performance measures to determine how well they are achieving their goals and to identify areas for improvement, if needed. 
We have found that by developing and tracking performance against a baseline for all measures, agencies can better evaluate progress and determine whether or not goals are being achieved. Through our prior work on performance measurement, we have identified key attributes of performance measures that can help managers monitor progress toward achieving program goals and priorities. Previous GAO work also indicates that agencies successful in measuring performance had performance measures that demonstrate results, are limited in number, cover multiple priorities, and provide useful information for decision making. Table 2 below summarizes selected attributes of performance measures and lists potential adverse consequences if attributes are missing. We analyzed the three implementation plans developed under the Air Force’s original approach to assess the extent to which their performance measures contained the key attributes discussed above. We found that the plans’ performance measures contained one or more of the seven relevant key attributes, but they did not incorporate all seven attributes. For example, the performance measures did not consistently include clear, measurable, objective measures and a baseline assessment of current performance. Moreover, because the commission’s recommendations were addressed individually and not grouped together at the time the measures were developed, we could not easily determine whether measures for related recommendations collectively addressed the Air Force’s priorities in a balanced manner. The revised management approach, discussed in the Executive Committee’s December 2015 meeting, includes tasks and milestones, but not performance measures. According to Air Force officials, the milestones for each group are required to be developed according to a standardized template that the Total Force Continuum office developed.
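The baseline-tracking idea noted above can be illustrated in a few lines. This is a generic sketch, not an Air Force or GAO tool; the function name and the example values are hypothetical:

```python
# Generic sketch of tracking a performance measure against a baseline;
# the function and example values are hypothetical.
def progress(value, baseline, target):
    """Fraction of the distance from baseline to target, clamped to [0, 1]."""
    fraction = (value - baseline) / (target - baseline)
    return max(0.0, min(1.0, fraction))

# e.g., 18 of 36 open recommendations implemented, from a baseline of 0:
halfway = progress(18, baseline=0, target=36)  # 0.5
```

Without a documented baseline and target for each measure, a computation like this cannot be performed, which is one practical consequence of the missing attributes discussed above.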
The implementation plan template contains an overall plan for the group of recommendations and annexes detailing plans for each recommendation within the group. According to Air Force officials, the information in the plan is to include: a discussion of the commission’s recommendation, including relationships to other commission recommendations; strategic milestones and tasks with completion dates, assumptions, and constraints; and a project schedule with tasks, offices of primary responsibility, number of days for task completion, and implementation dates. However, the template does not provide for performance measures. The milestones are due at the March 2016 Executive Committee meeting. According to Air Force officials, the milestones and the proposed template apply the basic principles contained in Air Force guidance on business case analysis procedures, which include linking tasks to specific, achievable milestones. According to Air Force officials, as of December 2015, the Air Force requires the use of project management charts that include program objectives and milestones specifying tasks that will have to be completed to implement each recommendation. Under the revised approach, project schedules will link milestones and tasks with responsible offices and identify due dates for completion of both strategic milestones and tactical tasks in a single document. According to Air Force officials, each month, the Executive Committee will review each group’s status and progress in achieving its tasks and milestones. In addition, at each monthly meeting, the Executive Committee will review in detail all the recommendations within one of the groups. Therefore, the Executive Committee will review each individual recommendation and any associated issues, tasks, and milestones at least twice a year.
Because the new, revised approach and the implementation plan template do not require development of performance measures, the Air Force will continue to lack critical information to oversee implementation of the commission’s recommendations and will not have full visibility to track progress. Moreover, by not requiring performance measures that contain key attributes in its implementation plans, the Air Force may be missing critical information that can be used to identify areas needing attention. The Air Force developed a process to identify potential active and reserve component force mix options intended to better address and balance risks, costs, and sustainability. These options are presented to Air Force senior leaders for their consideration, and the leaders’ decisions inform the first phase of the multi-year budget development or Planning, Programming, Budgeting, and Execution process. Noting that the active and reserve component train to the same standards, the commission’s report discussed the advantages of shifting positions from the active to the reserve component, including saving money that could be used for readiness and investment. The Air Force developed its force mix option process to evaluate the mix of active and reserve component forces across the Air Force. As part of this process, the Air Force developed customized, classified data analyses for 44 aircraft types and mission areas. For example, the Air Force analyzed: bombers; the various aircraft used for the personnel recovery mission; and civil engineering, logistics, and medical services forces. The Air Force completed its initial analysis of all primary mission areas in December 2015, but it plans to periodically re-evaluate each analysis, because inputs to the analysis—such as requirements, cost data, and assumptions—change periodically. The Total Force Continuum office within the Headquarters Air Force Directorate of Strategic Plans manages the force mix option process. 
The process combines data analysis and stakeholder inputs. According to officials from the Total Force Continuum office, the data analysis portion of the process uses authoritative data from established sources across the Air Force. For example, major commands such as Air Combat Command provide flying hours data, and the office of the Assistant Secretary of the Air Force for Financial Management and Comptroller provides cost data. Relevant stakeholders from across the Air Force (including the Air Force Reserves, the Air National Guard, various Air Force Headquarters directorates, and major commands) also have the opportunity to provide their comments, ideas, and suggestions at several points throughout the process. Using the data analysis, stakeholders develop and refine force mix options. The key components of developing force mix options—assumptions, data analysis, and stakeholder input—are discussed below. Assumptions: According to officials who manage the process, the data analysis contains key assumptions concerning demand, readiness, and operational tempo. Demand is derived from Department of Defense (DOD)-approved planning scenarios. In building this analysis, the Air Force focuses on the surge period to determine whether or not it has enough forces, but it focuses on the “post-surge” period (i.e., the period of time after cessation of major combat operations when there is a continuous demand for forces to rotate in and out of the area for several years) to determine the appropriate mix of its forces. The Air Force assumes that all units are ready and available to perform their missions and also assumes that it will be able to comply with operational deployment guidance. The DOD goal for the operational deployment-to-dwell ratio for active component forces is 1:2 or greater, meaning one deployment period would be followed by a non-deployed period that is at least twice as long.
For the reserve component, the Air Force assumes it can meet a 1:5 mobilization-to-dwell ratio, meaning that one mobilization period would be followed by a period five times as long when the unit is not mobilized. Since the data in the workbooks are interconnected, any change in the assumptions could affect the output. For example, if the reserve component’s mobilization-to-dwell ratio changes (e.g., from 1:5 to 1:10), then more active component forces may be needed to meet the demand. Also, if units are less than fully ready, then any gap between the supply of forces and the scenario’s rotational demand may increase, thereby increasing risk, according to Air Force officials. These assumptions are explained to senior leaders before the analysis results are presented. Data analysis: For each aircraft type and mission area, the Air Force builds a classified, customized Excel workbook that contains a series of interconnected spreadsheets with hundreds of cells. Although each workbook is customized for a particular aircraft or mission area, the workbooks generally contain the same or similar types of data, including: unit manpower by component; flying hours and cost per flying hour; direct and indirect personnel costs; information on the current number of units; and comparisons of unit supply and demand. According to officials who manage the process, much of the data entered into the Excel workbooks comes from standard data sources. For example, flying hours data come from the approved training program for each aircraft type, and manpower data come from unit manning documents. Stakeholders review the data content and analysis and provide inputs and corrections. However, there are limitations on how the information in the workbooks should be used, according to Air Force officials. For example, workbook information on location is generic and should not be used for basing decisions. Stakeholder input: Stakeholders use the data analysis to develop force mix options.
The Excel workbooks and spreadsheets contain the data analysis but do not automatically generate force mix options or predict combat effectiveness. Instead, Air Force personnel use the data analysis in the workbooks to identify and assess options (i.e., to see how a change in force mix may affect the capacity to meet the combatant commander’s continuing demand for forces rotating into and out of an area after cessation of major combat operations). Stakeholders assess the advantages and disadvantages of various options, including any capacity gaps that could result and relative differences in costs. When the analyses are presented to decision makers, the decision makers may direct additional analysis or assess additional options. Figure 2 illustrates the Air Force’s force mix option process based on knowledgeable Air Force officials’ descriptions. During the process, a number of force mix options are presented to Air Force senior leaders at decision meetings. The information presented to the senior leaders includes an explanation of key assumptions and a summary chart that contains force mix options, the proposed option, and views of stakeholders including the major commands, Air Force Reserves, and Air National Guard. Figure 3 below is a notional example of an output summary chart that is presented to senior leaders. From left to right, the figure shows current forces and forces for each force mix option. For each pair of bars, the first bar represents the capacity to rotate forces for normal operations. The second bar represents the capacity of active and reserve component forces to meet the combatant commander’s continuing demand for force rotations following major combat operations (labeled as “post-surge”). The figure also illustrates the difference, if any, between the supply of forces and the “post-surge” demand. Finally, the circle at the top of each pair of bars represents the relative cost of the option.
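The supply-versus-demand comparison summarized in such a chart rests on simple rotational arithmetic. The following is an illustrative sketch only; the function names, unit counts, demand, and relative costs are hypothetical and do not reflect Air Force data or the actual workbook formulas. It applies the 1:2 deploy-to-dwell and 1:5 mobilization-to-dwell assumptions described above:

```python
# Hypothetical sketch of a force mix comparison; all names and numbers are
# illustrative and do not represent Air Force data or workbook formulas.
def post_surge_capacity(active_units, reserve_units,
                        active_deploy_dwell=(1, 2), reserve_mob_dwell=(1, 5)):
    """Units sustainable forward at one time: a 1:N deploy-to-dwell (or
    mobilization-to-dwell) ratio allows at most 1/(1+N) of units deployed."""
    a, a_dwell = active_deploy_dwell
    r, r_dwell = reserve_mob_dwell
    return (active_units * a / (a + a_dwell)
            + reserve_units * r / (r + r_dwell))

def assess_option(name, active_units, reserve_units, demand, relative_cost):
    """Summarize one option the way the notional chart does: capacity,
    any gap against post-surge demand, and relative cost."""
    supply = post_surge_capacity(active_units, reserve_units)
    return {"option": name, "capacity": supply,
            "gap": max(demand - supply, 0), "relative_cost": relative_cost}

# Current force versus a hypothetical shift of six units to the reserve component.
current = assess_option("current", 30, 12, demand=11, relative_cost=1.00)
option1 = assess_option("option 1", 24, 18, demand=11, relative_cost=0.93)
```

Tightening the reserve assumption from 1:5 to 1:10 (`reserve_mob_dwell=(1, 10)`) lowers the computed reserve contribution and can open a gap against demand, illustrating the sensitivity to assumptions that officials described.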
According to officials who manage the process, stakeholders’ views are presented and discussed at senior leader decision meetings, and stakeholders can raise any issue of importance to them, including viability and risk. For example, stakeholders have raised issues such as difficulties recruiting reserve component forces in remote locations. These officials explained that the force mix option process identifies inputs into the planning phase of the budget development process. For example, they said, the force mix option process supported consideration of moving some strategic airlift into the reserve component as the budget for fiscal year 2017 was developed and also informed a decision to increase the reserve component positions for civil engineering. However, because the budget development and execution cycle spans up to four years, it is not clear at this time how many of the proposed force mix options from all the analyses completed by December 2015 may ultimately be implemented, or how much the overall mix of active and reserve component forces may change over time. The Air Force has made progress by recently taking actions to improve the management and oversight of its implementation of the commission’s recommendations. These actions are aimed at addressing the challenges the Air Force has experienced in implementing the recommendations, such as coordinating across offices, linking efforts on related recommendations, and setting deadlines for completing implementation. The Air Force’s revised approach includes aspects of leading program management and performance measurement practices—by using a template that requires the identification of tasks and milestones. However, the revised approach is new and unproven and does not require that performance measures be developed for each commission recommendation in order to assess progress and effects.
While the three implementation plans developed under the original approach included performance measures, our analysis found that those measures lacked key attributes. Without complete implementation plans that include performance measures, Air Force leaders may lack key information they could use to monitor progress and assess whether performance is meeting expectations for the 36 recommendations that are still open. To facilitate implementation of the commission’s recommendations and provide managers with information to gauge progress and identify areas that may need attention, we recommend that the Secretary of the Air Force, in coordination with the Chief of Staff of the Air Force, direct the Assistant Vice Chief of Staff of the Air Force to develop complete implementation plans that include performance measures for all 36 commission recommendations that remain open. We provided a draft of this report to the Department of Defense (DOD) for review and comment. DOD’s comments, provided by the Air Force, are reproduced in appendix IV. The Air Force also provided technical comments, which we have incorporated as appropriate. The Air Force agreed with our recommendation to develop complete implementation plans that include performance measures for the 36 remaining open National Commission on the Structure of the Air Force (commission) recommendations. In its comments, the Air Force agreed that performance measures with the key attributes described in our report could provide valuable information useful in tracking the Air Force’s progress on the recommendations and identifying needed corrective actions. The Air Force estimated a completion date of March 2017 for developing performance measures. We are sending copies of this report to the appropriate congressional committees and the Secretary of the Air Force. The report also is available at no charge on GAO’s website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-3489 or pendletonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V.

Status (as of February 2016): Open

1. …cost” approach for calculating military personnel costs.
2. Budgeting Flexibility: Congress should allow DOD increased flexibility in applying budget cuts across budget categories, including installations.
3. Resourcing the Reserve Components: To ensure the Air Force leverages full capacity of all components of the force, the Air Force should plan, program, and budget for increased reliance on the reserve components.
4. Infrastructure: The Air Force should consider, and Congress should allow, the closing or warm basing of some installations.
5. Air Force Reserve Command: Congress should disestablish the Air Force Reserve Command.
6. Staff Integration: The Air Force should integrate the existing staffs of all components.
7. Air Force Reserve (AFR) Unit Integration: The Chief of Staff of the Air Force should direct the integration of AFR units into corresponding active component organizations.
8. Full-Time and Part-Time Mix: The combination of full-time and part-time positions should be …
9. Air National Guard (ANG) Unit Integration: The Chief of Staff of the Air Force should direct the integration of Air Force units into corresponding ANG organizations.
10. ANG Unit Size: The Chief of Staff of the Air Force, in coordination with the Director of the ANG, should change wing-level organizations to group organizations.
11. Concurrent Fielding of Equipment: As the Air Force acquires new equipment, force integration plans should adhere to the principle of proportional and concurrent fielding across the components.
12. Policy Revisions: Integrating units will require manpower and personnel policy revisions.
13. Designated Operational Capability (DOC) Statements: The Air Force should discontinue the practice of separate DOC documents for active and reserve units of the same type and place the integrated units under single DOC statements.
14. Key Leadership Positions: The Air Force should ensure that integrated units are filled competitively by qualified airmen irrespective of component, but key deputy positions should always be filled by an “opposite” component member.
15. Effective Control Measures: The Air Force must establish effective control measures to ensure that both active and reserve component airmen have adequate paths and opportunities for advancement and career development.
16. Awards, Decorations, and Promotions: The integrated chain of command must take special care in managing personnel issues such as awards and decorations, promotions, and assignment opportunities.
17. Professional Military Education Positions: Commander, Air University should develop a new baseline for its student and instructor positions to achieve a proportionate representation of the components on faculty and student body by fiscal year 2018.
18. Total Force Competency Standard: The Air Force should develop a Total Force competency standard across all specialties and career fields before the end of fiscal year 2016.
19. Access to Non-Resident Education: The Air Force should ensure that revised curriculum and competency standards are achievable by making non-resident education programs equally accessible to personnel of all components.
20. Increase ARC (Air Reserve Component) Capacity: The Air Force should increase its utilization of the reserve component by increasing the routine employment of reserve component units and individuals to meet recurring rotational requirements.
21. Operational ARC Funding: The Air Force should include in all future budget submissions a specific funding line for operational support by the reserve component.
22. Council of Governors: The Secretary of Defense should revise its agreement with the Council of Governors to enable Air Force leadership to consult directly with the Council when requested, including discussion of pre-decisional information.
23. Non-Disclosure Agreements: The Air Force should discontinue the use of non-disclosure agreements in the corporate process.
24. State Adjutants General: The Air Force should continue to advance current informal means for engaging with The Adjutants General.
25. Cyberspace Airmen: The Air Force should fill much of the demand for Cyberspace career …
26. Space Domain: The Air Force should build more reserve component opportunities in the space domain.
27. GIISR (Global Integrated Intelligence, Surveillance, and Reconnaissance) Billets: The Air Force should integrate all of its new GIISR units, and the preponderance of new billets should be for the reserve component.
28. Special Operations: The Air Force should increase reserve component presence in Special Operations through greater integration.
29. ICBM (Intercontinental Ballistic Missile) Mission: As a pilot program, the Air Force should expand reserve component contributions to the ICBM mission.
30. Instructor Pilots: The Air Force should replace some of the 1,800 active instructor pilots with prior-service volunteers from the Air Reserve Component who would not rotate back to operational squadrons.
31. Homeland Security and Disaster Assistance: The President should direct the Departments of Defense and Homeland Security to develop national requirements for Homeland Security and Disaster Assistance.
32. Homeland Defense and DSCA (Defense Support to Civil Authorities): DOD and the Air Force should treat Homeland Defense and Defense Support to Civil Authorities as real priorities.
33. Duty Statuses: Congress should reduce the number of separate duty statuses from more …
34. Integrated Personnel Management: The Air Force should unify personnel management for all three components under a single integrated organization.
35. Air Force Integrated Pay and Personnel System (AF-IPPS): The Air Force should accelerate the development of an integrated pay and personnel system.
36. PERSTEMPO (Personnel Tempo) Metric: The Air Force should use a single metric to measure the personnel tempo and stress on its active and reserve forces.
37. Non-Deployment PERSTEMPO: DOD should update the definition of a non-deployment PERSTEMPO event for the reserve component to include situations where the reserve component member is away from a civilian job or attendance at school.
38. PERSTEMPO and AF-IPPS: The Air Force should include PERSTEMPO accounting in AF-IPPS.
39. Continuum of Service: The Air Force should develop a pilot project for the implementation of Continuum of Service.
40. Active Duty Service Commitments: The Air Force should revise the rules for current active duty service commitments to enable members to meet the commitment in some combination of active, guard, and reserve service.
41. Multiple Career Track Options: The Air Force should develop a new service construct consisting of multiple career track options.
42. “Up or Out”: Congress should amend restrictive aspects of current statutes that mandate “up or out” career management policies.

This report (1) evaluates the extent to which the Air Force has made progress in implementing the commission’s recommendations and (2) describes how the Air Force has assessed the potential for increasing the proportion of reserve to active component forces as discussed in the commission’s report.
To evaluate the extent to which the Air Force has made progress in implementing the commission’s recommendations, we reviewed Air Force documents, such as briefings to the Executive Committee on the status of the commission’s recommendations, the Executive Committee Charter, and a draft template designed by the Air Force’s Total Force Continuum office for describing milestones, tasks, and other details related to implementing each group of recommendations and individual recommendations. We interviewed Air Force officials to understand their approach to managing and providing oversight for implementing the commission’s recommendations. We also reviewed Executive Committee minutes that documented discussions on the status of implementing the commission’s recommendations and documented decisions to close recommendations as implemented. We interviewed Air Force officials and team leaders to understand what actions they had taken to implement selected recommendations. We selected a non-probability sample of seven of the 36 open recommendations by identifying those with the following attributes: (1) the Air Force had identified implementing the recommendations as facing challenges; (2) the reason for limited progress was not clear based on a review of Air Force status briefings; (3) at least one recommendation had a team leader from the reserve component; and (4) multiple recommendations where the implementation was led by the same team. While the descriptions of implementing the non-probability sample of seven selected recommendations cannot be projected to all 36 open recommendations, they do illustrate the Air Force’s original process for implementing the commission’s recommendations. In addition, we analyzed the implementation plans that had been developed under the Air Force’s original approach for three interrelated recommendations to determine the extent to which these documents incorporated leading practices such as milestones, tasks, and performance measures.
We evaluated these plans because, at the time we conducted our analysis, the Air Force had not developed any plans under its revised approach. Also, Air Force officials said that plans for these three recommendations were the only implementation plans that had been developed under the original approach. The documents constituting the implementation plan for the three interrelated recommendations were not contained in one consolidated document but instead consisted of multiple documents, such as action plans and other guidance-type documents that were identified by Air Force officials as implementation plans. For purposes of our analysis, we refer to these multiple documents for each of the three recommendations as “implementation plans.” We compared the implementation plans for the three recommendations with leading practices for program management that included the use of milestones and tasks. We derived these practices by reviewing a combination of Air Force guidance, industry practices, and our prior work. We also compared the implementation plans with leading practices on performance measures, which we derived from a combination of federal internal control standards; prior GAO work on performance measurement and planning; the Government Performance and Results Act (GPRA), as updated by the GPRA Modernization Act of 2010; related guidance from the Office of Management and Budget; and Air Force guidance on business case analyses. Our prior work identified ten attributes of effective performance measures. Of the ten attributes associated with effective performance measures, we selected seven attributes against which to evaluate performance measures in the Air Force’s three implementation plans. We excluded the attributes for “government-wide priorities,” “core program activities,” and “linkage,” since we judged these attributes less relevant to the Air Force’s plans and determined that excluding them would still yield a sound assessment.
The seven attributes we selected would provide comprehensive information, over time, on how well the Air Force was progressing with its plans to implement the recommendations and identify areas for increased focus. In our scorecard analyses, two GAO analysts independently conducted analyses of the performance measures, milestones, and tasks described in each implementation plan. Any disagreements between the two assessments were discussed and reconciled by a third analyst. In our scorecard analysis of milestones and tasks, we determined that implementation plans included milestones when we were able to identify milestones in the plans. We determined that the implementation plans “addressed” inclusion of tasks tied to milestones if each task had a milestone associated with it. An implementation plan “partially addressed” inclusion of tasks tied to milestones if some, but not all, of the tasks had an associated milestone. Finally, an implementation plan “did not address” inclusion of tasks tied to milestones if we determined that none of the tasks had a milestone tied to it. In our scorecard analysis of performance measures, we determined that a performance measure “addressed” an attribute when it included all elements of the attribute, even if it lacked specificity and details and could thus be improved upon. A performance measure “partially addressed” an attribute when it included more than one, but not all, elements of the attribute. Consequently, our designation of “partially addressed” may reflect anywhere from one to six demonstrated elements of an attribute. A performance measure “did not address” an attribute when it did not include or discuss any elements of the attribute, or when any implicit references were too vague or general to permit assessment. For the attribute “limited overlap,” we determined that the attribute was “not applicable” if the implementation of the commission recommendation was not dependent on implementation of another recommendation.
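The rating rules above can be expressed as a small sketch. This is a hypothetical illustration of the scorecard logic as described in the text, not GAO’s actual analysis tooling; the function names and inputs are invented:

```python
# Hypothetical sketch of the scorecard ratings described in the text;
# not GAO's actual analysis tooling.
def rate_measure(elements_present, elements_total):
    """Rate a performance measure against one attribute."""
    if elements_present == elements_total:
        return "addressed"
    if elements_present > 1:  # "more than one, but not all" elements
        return "partially addressed"
    # Zero elements; the exactly-one case is not specified in the text
    # and is treated here, by assumption, as not addressed.
    return "did not address"

def rate_tasks(tasks_with_milestones, tasks_total):
    """Rate whether a plan ties its tasks to milestones."""
    if tasks_with_milestones == tasks_total:
        return "addressed"
    if tasks_with_milestones > 0:  # some, but not all, tasks have milestones
        return "partially addressed"
    return "did not address"
```

Note that the two scales differ slightly: for tasks, any milestone coverage short of complete earns “partially addressed,” while for measure attributes the text defines the partial rating in terms of multiple elements.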
To supplement this analysis and gain further insight into issues of strategic import, we also interviewed cognizant officials from the Air Force team implementing the three interrelated recommendations that have implementation plans; team leaders and members for seven recommendations; representatives from the Air Force Reserves and Air National Guard; and Air Force headquarters staff from the Total Force Continuum office within the Headquarters Air Force Strategic Plans and Programs Directorate and the office of the Assistant Secretary of the Air Force for Financial Management and Comptroller. To describe how the Air Force has assessed the potential for increasing the proportion of reserve to active component forces as discussed in the commission’s report, we identified the scope, content, and process for the Air Force’s High Velocity Analysis (referred to in this report as the force mix option process), which is the process the Air Force developed to identify and assess force mix options. We first identified the scope of the Air Force analysis by reviewing documentation such as the Air Force analysis schedule and interviewing Air Force officials (from the Strategic Plans and Programs Directorate and the office of the Assistant Secretary of the Air Force for Financial Management and Comptroller). This work provided information on how individual analyses were developed and which aircraft and mission areas were analyzed. Since the Air Force had developed 55 Excel workbooks for analyzing 44 aircraft types and mission areas as of December 2015, we selected a non-probability sample to serve as illustrative examples and to learn how the workbooks were built, identify data inputs and their sources, identify who verifies data inputs and how data are verified, and understand how the workbooks are used to develop force mix options.
From the universe of 44 aircraft types and mission areas, we selected three analyses that had the following attributes: the analysis was complete and not ongoing; one was for a combat mission; one was for a combat support mission; and one had a “deploy in place” mission. The three analyses we selected were bombers, personnel recovery, and intercontinental ballistic missile forces. We did not trace a sample of the data in each workbook back to its original source documents or verify workbook formulas, since we were not assessing the accuracy or reliability of the data or analyses. We did review documentation showing that the Air Force has steps in its process for stakeholders to review and modify workbook inputs. While the details of the three analyses we sampled cannot be projected to all 55 workbooks, the sample did enable us to describe the overall force mix option process, including the type of information presented to Air Force leadership. Next, we reviewed examples of documentation of the results of the force mix option process, such as examples of briefings presented to senior Air Force leadership, to determine whether assumptions and limitations were presented to decision makers. Finally, we interviewed Air Force officials to understand how the Air Force has generally used the results to inform the budget development process, and we reviewed an Air Force report and minutes from an Air Force leadership meeting to identify examples. We conducted this performance audit from June 2015 to May 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
The Air Force aligned 21 of the commission’s 42 recommendations with four of the six objectives in the Air Force Human Capital Annex of the Strategic Master Plan. The Human Capital Annex objectives are: (1) Attracting and Recruiting; (2) Developing the Force; (3) Talent Management; (4) Retaining Ready, Resilient Airmen and Families; (5) Agile, Inclusive and Innovative Institutions; and (6) One Air Force. According to Air Force officials, none of the commission’s recommendations aligned with two of the Human Capital objectives—Attracting and Recruiting or Retaining Ready, Resilient Airmen and Families. The Air Force categorized recommendations that do not fall within the Human Capital Annex objectives into a separate group that will be managed by general officers from the Total Force Continuum office. Table 3 below lists the commission’s recommendations that are aligned with four of the Human Capital Annex objectives, as well as those that will be managed by the Total Force Continuum office. In addition to the contact named above, the following staff members made key contributions to this report: Michael Ferren, Assistant Director; Brenda M. Waterfield; Krislin M. Bolling; Grant Sutton; Barbara Wooten; Patricia Farrell Donahue; Anne Stevens; and Mike Shaughnessy.

In January 2014, the National Commission on the Structure of the Air Force (commission) issued its report, which included 42 recommendations for improving how the Air Force manages its total force. The report also discussed the feasibility of shifting 36,600 personnel from the active to the reserve component and estimated that doing so could save $2 billion annually. Senate Report 114-49 included a provision for GAO to review matters related to the Air Force's efforts to implement the commission's recommendations.
This report (1) evaluates the extent to which the Air Force has made progress in implementing the commission's recommendations and (2) describes how the Air Force has assessed the potential for increasing the proportion of reserve to active component forces. GAO reviewed documentation of the Air Force's efforts and compared Air Force implementation plans with leading practices for program management derived from the public and private sectors and GAO's prior work. GAO also reviewed documentation and interviewed officials in order to describe the Air Force's process for assessing its active and reserve component mix. As of February 2016, the Air Force had made limited progress in implementing the commission's recommendations—it had closed 6 recommendations and had taken action to revise its approach for managing implementation of the remaining 36 open recommendations. Air Force officials encountered challenges, which the revised approach may address, as they began implementing the commission's recommendations. For example, the Air Force had difficulty coordinating across components and offices and coordinating among teams working on inter-related recommendations. Under the revised approach, the Air Force has grouped related recommendations together and placed responsibility for each group under senior officials to improve coordination. According to Air Force officials, the revised approach requires development of milestones and tasks for each recommendation but does not require development of performance measures. Federal internal control standards, leading program management practices, and GAO's prior work have shown that performance measures that contain key attributes—such as baseline and trend data—can help managers monitor progress toward achieving program goals and identify areas for corrective actions.
Since the revised approach was not fully in place as of January 2016, the Air Force had not developed complete implementation plans with milestones, tasks, and performance measures to monitor and oversee progress on the remaining 36 open recommendations. Under its original management approach, the Air Force had developed implementation plans for 3 recommendations. These plans generally contained milestones and tasks but were incomplete, since they did not consistently include performance measures that were clear and measurable or that contained a baseline from which implementation progress could be measured. While the Air Force's revised approach includes some positive steps, it is new, its effectiveness is unknown, and it does not require performance measures to gauge progress. Without complete implementation plans that include performance measures that reflect the key attributes, the Air Force will continue to lack important information to monitor progress and assess whether performance is meeting expectations for the 36 recommendations that are still open. Several of the commission's recommendations related to the feasibility of shifting a portion of forces from the active component to the reserve component. The Air Force has assessed potential changes to its force mix using a process it developed for this purpose. The process combines quantitative and qualitative analysis with stakeholder input and judgment to identify options for changing the mix of active and reserve component forces. These options are then presented to senior Air Force leaders for their consideration, and the leaders' decisions inform the planning phase of the budget development process. To support its process, the Air Force has developed customized, classified data analyses for 44 aircraft types and mission areas. The Air Force finished these analyses in December 2015, and officials said the results informed planning for the fiscal year 2018 budget.
However, since the budget development and execution cycle spans up to four years, it is not clear at this time how many of the proposed force mix options may ultimately be implemented. GAO recommends that, for the 36 remaining open commission recommendations, the Air Force develop complete implementation plans that include performance measures. The Air Force agreed with GAO's recommendation.
In 1996, federal law required the development of an automated entry and exit control system to match arrival and departure records for foreign nationals entering and leaving the United States, and to enable identification of overstays. Subsequently, the Immigration and Naturalization Service Data Management Improvement Act of 2000 required implementation of an integrated entry and exit data system for foreign nationals. The system was to provide access to and integrate arrival and departure data that are authorized or required to be created or collected under law and are in an electronic format in certain databases, such as those used at POEs and U.S. consulates abroad, and assist in identifying nonimmigrant visa overstays. In 2003, DHS initiated the US-VISIT program to develop a comprehensive entry and exit system to collect biometric data from foreign nationals entering or exiting the country through POEs. In 2004, US-VISIT initiated the first step of this program by collecting biometric data on foreign nationals entering the United States at 115 airports and 14 seaports. The Intelligence Reform and Terrorism Prevention Act of 2004 required the Secretary of Homeland Security to develop a plan to accelerate full implementation of an automated biometric entry and exit data system that matches available information provided by foreign nationals upon their arrival in and departure from the United States. In fiscal year 2016, Congress reiterated its requirement for DHS to submit a plan to implement a biometric entry and exit capability and established a funding mechanism available to the Secretary of Homeland Security beginning in fiscal year 2017 to develop and implement a biometric entry and exit system.
Specifically, fifty percent of amounts collected pursuant to temporary fee increases for L-1 and H-1B visas, which began in fiscal year 2016 and will expire at the end of fiscal year 2025, up to a total of $1 billion, shall be deposited into the 9-11 Response and Biometric Exit Account for DHS to implement the biometric entry and exit data system. Since 2009, DHS has been exploring various biometric exit capabilities through laboratory and field testing. For instance, in 2009, the legacy US-VISIT program, in partnership with CBP and the Transportation Security Administration (TSA), deployed two biometric exit pilot programs in U.S. airports. In 2014, CBP also collaborated with the DHS Science and Technology Directorate to test possible biometric solutions in simulated operational conditions, using the results to inform subsequent CBP biometric efforts. Although this effort informed later biometric exit pilot programs, it did not test potential biometric capabilities in a real-world setting. Figure 1 shows key actions taken by Congress and DHS to pursue biometric entry and exit capabilities from 1996 through 2016. Federal law also requires that DHS implement a program to collect data, for each fiscal year, regarding the total number of foreign visitors who overstayed their lawful admission period in the United States, and submit an annual report to Congress providing numerical estimates of the number of foreign nationals from each country in each nonimmigrant classification who remained in the country beyond their authorized period of stay. Two DHS components—CBP and ICE—are primarily responsible for collecting and maintaining overstay data, issuing reports on overstays, and addressing potential overstays, as shown in table 1. Further, the Office of Biometric Identity Management, which was created in March 2013 and replaced US-VISIT, is responsible for storing biometric data for the department.
In addition, the State Department is responsible for ensuring that individuals who have previously overstayed and are ineligible for a visa do not receive one when applying for a visa to the United States at consular offices overseas. Since our 2013 report on DHS’s efforts to develop a biometric exit capability, CBP has conducted four biometric pilot programs intended to inform the acquisition of a biometric exit system: (1) Biometric Exit Mobile Air Test (BE-Mobile) (mobile fingerprint reader); (2) 1 to 1 Facial Comparison Project (facial recognition upon entry); (3) Departure Information Systems Test (matching an on-site facial scan to a gallery of photographs); and (4) the Southwest Border Pedestrian Exit Field Test (face and iris scanning at pedestrian exit from the United States). As of September 2016, CBP reported that it had obligated nearly $13 million to develop, implement, and evaluate these four pilot programs, as illustrated in table 2. This amount includes over $3.5 million to install secure Wi-Fi systems to support the BE-Mobile pilots at three of the test airports. BE-Mobile. In the summer of 2015, CBP began deploying the BE-Mobile pilot at the 10 highest international passenger volume airports in the United States. Under this pilot, CBP officers stationed at the passenger loading bridges of selected flights used a handheld mobile device to scan fingerprints and passports for certain foreign nationals at the time of departure from the United States at identified airports. The biometric and biographic data collected by the BE-Mobile device were matched against data such as departures and arrivals in the United States, criminal histories, and visa status. The goal of the BE-Mobile pilot was to evaluate the viability of using the technology to collect biometric exit data from a sample population on randomly selected flights, as well as to evaluate the viability of implementing biometric exit in conjunction with CBP’s outbound enforcement operations.
See figure 2 for a representation of the BE-Mobile device. During our observations, CBP officials noted that the BE-Mobile pilot demonstrated that while the technology can effectively capture biometric data and match that data against DHS databases, it requires too much time and manpower to be a solution for biometric exit capabilities on all flights departing the United States—a statement consistent with our own observations of BE-Mobile at two airports. According to CBP officials, the pilot program established that the manifest data provided by the carriers in the Advance Passenger Information System (APIS) are accurate and reliable. However, using BE-Mobile to screen outgoing passengers is time consuming. For example, in May 2016, we observed CBP’s use of BE-Mobile at Los Angeles International Airport to capture biometric information on categories of passengers included in the pilot (approximately 75 individuals) on one flight departing for Mexico. We observed that the outbound process using BE-Mobile took six CBP officers—who constitute a CBP tactical operations team—approximately 45 minutes to complete. CBP officials noted that when those six officers are conducting outbound enforcement operations using BE-Mobile, they are not conducting inspections or operations on inbound cargo and passengers. CBP officials also noted that the BE-Mobile system provided some benefits to the officers checking foreign nationals leaving the country. For instance, BE-Mobile allows officers to identify travelers who have suspicious travel histories or other derogatory information for further investigation by searching databases that detail individuals’ travel patterns, visa status, and criminal records.
For instance, during our observation of the program at Los Angeles International Airport, one officer used BE-Mobile to identify an individual whose travel pattern may have indicated drug trafficking, so the individual was examined more closely before being allowed to board a plane to Mexico. Similarly, BE-Mobile can identify travelers exiting the country who do not have corresponding entry information, indicating that they potentially entered the country without inspection. Finally, BE-Mobile may identify individuals who have overstayed their period of admission, allowing CBP to collect more accurate overstay information. During our observation of BE-Mobile at John F. Kennedy International Airport, an officer identified one traveler as having overstayed a student visa, and noted the violation in the traveler’s Student and Exchange Visitor Information System record. According to CBP officials, CBP is currently maintaining the BE-Mobile program at the original 10 airports as an enforcement tool for use by CBP officers. These officials also said BE-Mobile may be a viable solution for smaller airports with relatively few outbound international flights, at which officers could utilize BE-Mobile to obtain biometric information from exiting passengers at times when no international travelers are arriving. 1 to 1 Facial Comparison. Between March and May 2015, CBP tested the 1 to 1 Facial Comparison Project at Dulles International Airport. This pilot was intended to assist CBP officers in confirming the identity of U.S. citizens entering the United States against the travel document being presented. After the conclusion of the pilot program, the technology was deployed for use at both Dulles International Airport and John F. Kennedy International Airport. The technology compares a photograph taken of the U.S.
citizen by a CBP officer to the photograph stored on the traveler’s passport chip to assess whether the individual applying for entry into the United States was the same person to whom the U.S. passport was legally issued. Although the capability was tested at entries to the United States, the information gathered through the pilot is intended to also inform the acquisition of a biometric exit capability, according to CBP officials. According to an evaluation conducted by CBP, the results of the pilot showed that biometric facial matching can increase the confidence with which CBP officers verify individuals’ identities without a negative impact on port of entry operations and traveler wait times. When we observed CBP officers at John F. Kennedy International Airport processing passengers using this technology in July 2016, they said that the facial recognition process added approximately 20 to 30 seconds to the processing time for each passenger. However, agency officials stated that the technology is not yet integrated with CBP systems and will not impact wait times once it is fully integrated. See figure 3 for a representation of the 1 to 1 Facial Comparison equipment in use. Departure Information Systems Test. From June to September 2016, CBP deployed the Departure Information Systems Test pilot at Atlanta’s Hartsfield-Jackson International Airport. The goal of the pilot was to evaluate the effectiveness of biometric facial recognition matching of a real-time photograph of an individual to a gallery of facial images stored in a database. Photographs of travelers taken during boarding were compared against photographs taken previously (U.S. passport, U.S. visa, and DHS encounters) that had been stored in the gallery based on names on the outbound flight manifest. The biometric capture device includes a camera, a document reader, and a display tablet. The display tablet instructs travelers to present their boarding pass to the reader as they approach the unit.
Once the boarding pass is scanned, an image of the traveler’s face is captured. The system matches the photograph against the images in the gallery, at which point a green light appears and the traveler is instructed to proceed to board the plane. After the flight has departed, CBP compares the captured images to the images in the gallery to determine the system’s effectiveness at matching the photographs taken to those stored in the gallery. CBP officials told us that the capability to match one photograph to a gallery of photographs will be critical in developing a biometric exit solution for deployment on a nationwide scale because the agency already has access to one or more photographs on record of each person exiting the country, if that person entered legally. They added that a second biometric indicator, such as fingerprints, would also be useful in cases where the facial recognition software cannot match the live image to the images in the gallery. As of November 2016, CBP had not yet completed its formal evaluation of the test. For this pilot test, CBP deployed the capability at one gate and used it to obtain biometric information from passengers on a daily nonstop flight from Atlanta to Tokyo. We observed this capability collecting biometric information from passengers in August 2016. CBP officials told us that this flight was selected because it departed every day from a gate with ample physical space, which allowed CBP to set up its equipment to collect photographs. In addition, the flight departed at a time when few international flights were arriving or departing, so CBP did not have to divert officers from inspecting departing or incoming travelers to operate the pilot. See figure 4 for a representation of the biometric capture device. Southwest Border Pedestrian Exit Field Test. From February to May 2016, CBP initiated a pilot program to test facial and iris scanning technology at the Otay Mesa POE south of San Diego, California.
The purpose of the test was to determine whether biometric technology could be effectively used in an outdoor land environment without significant impact to operations and wait times, and to determine whether collecting biometrics in conjunction with biographic data upon exit would assist CBP in identifying individuals who have overstayed their period of admission. Under this pilot program, CBP collected biographic data from all travelers departing the United States at the Otay Mesa POE and biometrics (facial images and/or iris scans) from certain foreign nationals entering and departing the Otay Mesa POE on foot. To exit the country, travelers scanned their passports at a radio frequency identification-enabled kiosk, as shown in the picture on the left in figure 5. One collection lane was equipped with facial and iris scanning equipment that required the traveler to pause for biometric data collection, as shown in the middle picture in figure 5. Another lane was equipped with technology that collected facial and iris images while the traveler continued through the lane without pausing, as shown in the picture on the right in figure 5. Although CBP had not completed its formal evaluation of the pedestrian exit field test as of November 2016, CBP officials told us the pilot provided information about the physical challenges to implementing face and iris scanning technology at land POEs. The officials noted that the conditions at Otay Mesa POE were “ideal” in terms of space availability and weather conditions compared with other land POEs. Specifically, the Otay Mesa POE had sufficient space to install and operate the kiosks and cameras to collect biometric data from departing pedestrians.
In addition, the location generally had favorable weather and climate conditions that were less likely to affect the biometric collection machines stationed outside, though the officials told us that the technology did need to be under a roof or canopy, both to protect it from the rain and to prevent sun glare from affecting the quality of the images captured. However, the officials said that while rain and wind are not a significant issue at Otay Mesa’s location in southern California, other land POE locations, such as those in southern Texas, may experience challenges such as heavy storms or dust. They added that the pilot program had highlighted the need for biometric scanning equipment to be located inside for protection from the elements, but that some land POEs do not have sufficient space for such infrastructure. While DHS has made progress testing and evaluating biometric exit capabilities through the pilot programs described above, DHS continues to face challenges in developing and deploying a biometric exit system, many of which are longstanding. In particular, we and DHS have identified challenges in the areas of planning, infrastructure, and staffing that have affected DHS’s efforts to develop and implement a biometric exit capability. DHS has recognized these challenges and, according to CBP officials, is working to address them as part of its current planning process for a biometric exit system. Under this current process, as of November 2016, CBP plans to implement a biometric exit capability in at least one major airport by 2018, but has not yet finalized the approach it will take to deploy this capability in airports. As a result, it is too early to assess CBP’s current plans and how the department will address the challenges we and DHS have identified. Planning process. We and DHS have identified challenges in CBP’s planning efforts to develop and implement a biometric exit capability. 
For example, in our July 2013 report we found that DHS had a high-level plan for a biometric air exit capability, but it did not clearly define the steps, timeframes, and milestones needed to develop and implement an evaluation framework, as is standard in project management. As a result, we recommended that DHS establish timeframes and milestones for developing and implementing an evaluation framework to be used in conducting the department’s assessment of biometric exit options. DHS concurred with this recommendation and finalized goals and implemented actions to address it. Specifically, in June 2016, CBP provided us with the evaluation framework as well as expected timeframes and milestones for implementing the biometric exit system. CBP has also previously faced challenges in meeting the timeframes it has identified for deploying a biometric exit capability at airports. For instance, in July 2014, a DHS document stated that the department’s goal was to deploy a biometric exit capability to the top 20 airports—selected by international passenger departure volume—between 2015 and 2018. However, as of November 2016, CBP’s planned timeframe was to begin deployment of a biometric exit capability at one airport by the end of 2018, and at additional airports in subsequent months. CBP officials told us this change in schedule occurred because the funds established by Congress in the Consolidated Appropriations Act, 2016, to develop and implement the system were not available until October 2016. In November 2016, CBP officials also told us the agency had changed its approach to the biometric exit capability and was working with airlines and airports on strategies for using public/private partnerships to both reduce the cost to taxpayers and give industry more control over how a biometric exit capability is implemented at airport gates.
CBP’s previous planned approach had been for CBP to acquire and deploy biometric technology at airports, and to be responsible for collecting biometric information from passengers. Developing a biometric exit system in collaboration with airlines and airports, if implemented, would represent a change in CBP’s acquisition strategy because it would rely on airlines and airports to collect biometric information from passengers by acquiring biometric exit technology, such as cameras to collect facial images or equipment for fingerprinting. CBP would then be responsible for transmitting, storing, and analyzing this biometric information in order to pursue enforcement actions, such as the apprehension of individuals with warrants for their arrest, or recording the presence of individuals who entered the country illegally. Under this scenario, the airlines could integrate this biometric collection process into their existing boarding procedures, potentially resulting in minimal disruption to the flow of passengers during boarding, according to CBP officials. For instance, CBP officials suggested facial images or iris scans could be collected as travelers’ boarding passes are being scanned, and the biometrics could eventually be used in place of boarding passes. CBP officials said that this new approach did not change the timelines for initial implementation of a biometric exit capability, but officials noted that the approach or approaches selected will affect timelines and costs for future implementation. As of November 2016, CBP officials told us they had not finalized any partnership agreements with airports or airlines providing international service, and the agency cannot complete the planning process, including cost and schedule estimates, until these partnership agreements and key implementation decisions are finalized. Going forward, CBP intends to finalize its plans and approach for developing and implementing a biometric exit capability.
Given these considerations, it is too early to assess CBP’s plan for including airlines and airports in the development and implementation of the biometric exit system or the cost to CBP of this system. Infrastructure. We and CBP officials have also identified limitations in infrastructure as a significant challenge to implementing a biometric exit capability at airports as well as at land POEs. For example, CBP officials pointed out that U.S. airports generally do not have designated and secure exit areas for conducting outbound immigration inspections, nor are there checkpoints for travelers to pass through where their departure is recorded by a U.S. immigration officer and where biometric information could be captured. According to CBP officials, for a biometric exit program to be effective, the collection of biometric information must take place at the gate or on the jetway to ensure that the traveler actually departs the country. To address these challenges, CBP intends to use the information gained from the pilot programs to identify biometric exit technology and processes that are effective in the airport environment and minimize the impact on passenger flow and airport operations. At land POEs, there are also longstanding infrastructure and operational challenges to implementing a biometric exit capability to collect traveler information upon departure from the United States. In 2006 we reported that establishing a biometric exit capability at land POEs faced a number of challenges, including space constraints complicated by the logistics of processing high volumes of visitors and associated traffic congestion. For example, travelers may arrive at land POEs on foot or via a variety of vehicles—including cars, trucks, trains, buses, ferries, and bicycles—and many land POEs do not have sufficient space to deploy equipment and staff for obtaining biometric information from individuals leaving the country. 
Given the current capabilities of biometric capture devices, applying biometric capabilities to vehicle passengers would be more difficult than doing so for those crossing on foot, because, according to CBP officials, biometric capabilities currently available would require all passengers to stop and exit their vehicle to be photographed or scanned. In addition to the large amount of space this process would require, DHS officials stated that it would cause extensive delays at vehicle POEs. CBP officials said they intend to use the information from the pedestrian exit field test at Otay Mesa to inform any future solution. The officials also told us that they entered into an agreement with Oak Ridge National Laboratory beginning in June 2016 to explore options for applying biometric capabilities to vehicle passengers exiting the country. Given these challenges, CBP does not plan to implement a biometric exit capability at land POEs until 2020. Staffing. In addition, CBP officials stated that implementing a biometric exit capability will likely require additional CBP officers at each POE. The biometric exit pilot programs we observed required CBP staff to assist travelers with using the biometric technology and also for any enforcement actions that may be needed. However, CBP officials noted that they are exploring biometric exit capabilities that minimize the involvement of CBP officials, either by having the collection of biometric information done automatically through facial recognition technology or using airline personnel to process passengers. In either case, the CBP officials said that any biometric exit capability will require additional officers to support increased enforcement operations involving individuals departing the country that result from a biometric exit system. For example, individuals with warrants for their arrest may be prevented from departing the country so they can be tried for a crime. 
CBP officials told us that they have developed staffing estimates for each of the 20 busiest airports in the United States and that the estimates will be reviewed by DHS management and the Office of Management and Budget. However, CBP officials also told us they are still developing enforcement policies and priorities for foreign visitors departing the United States, so it is difficult to determine the extent to which enforcement actions would increase or how many additional CBP officers may be needed at each airport or land POE. In January 2016, DHS issued its first report on estimated overstay rates, covering fiscal year 2015, which included some but not all of the information required by statute. DHS had not previously reported overstay estimates to Congress on an annual basis, as required, because of DHS and legacy Immigration and Naturalization Service concerns about the reliability of the data available on overstays. In April 2011, we reported that DHS officials stated that the department had not reported overstay estimates because it did not have sufficient confidence in the quality of its overstay data. In our July 2013 report, we found that although DHS had taken action to strengthen its overstay data, DHS had not yet validated or tested the reliability of those actions, and challenges to reporting reliable overstay data remained. We recommended that DHS assess and document the extent to which the reliability of the data used to develop any overstay estimates has improved and any remaining limitations in how the data can be used. DHS concurred, and in the 2015 overstay report, DHS noted which data were used to compile the overstay estimates in the report and identified limitations with other data sources, thus addressing our recommendation. To identify overstays, CBP matches arrival and departure data on foreign nationals in the Arrival and Departure Information System.
These overstays are then checked against other DHS immigration databases to identify persons who have departed the United States or obtained an extension, or other valid immigration status or protection, and thus are not potential overstays. DHS’s fiscal year 2015 overstay report describes expected overstay rates by country for foreign nationals lawfully admitted into the United States for business or pleasure through air and sea POEs who were supposed to depart the United States in fiscal year 2015, as required. According to the overstay report, 527,127 of the nearly 45 million foreign nationals admitted for business or pleasure through air and sea POEs who were expected to depart the United States in fiscal year 2015 overstayed their period of admission, for a total overstay rate of 1.17 percent (see table 3). These nearly 45 million foreign nationals represent 85 percent of all the foreign visitors who arrived through air and sea POEs and who were expected to depart the country in fiscal year 2015, according to the report. DHS classifies individuals as overstays by matching departure and status change records to arrival records collected during the admission process. DHS distinguishes two groups of foreign visitors who overstayed their period of admission: (1) foreign nationals who are “out of country overstays” because their departure records show they departed the United States after their lawful admission period expired, and (2) foreign nationals who are “suspected in-country overstays” because they have no departure records, nor did they obtain an extension, or other valid immigration status or protection, prior to the end of their authorized admission period. 
For example, 482,781 of the 527,127 foreign visitors who overstayed their period of admission in fiscal year 2015 were suspected in-country overstays because CBP did not have a departure record for them, so they appeared to have remained in the country, a suspected in-country overstay rate of 1.07 percent, as illustrated in table 3. However, the DHS overstay report does not include all required information. Specifically, because of data reliability concerns, the overstay report does not include required information on expected departures, overstays, and overstay rates for foreign nationals who entered the country under nonimmigrant visa categories other than business and pleasure, such as those covering foreign students and their families (F, M, and J visas). DHS officials noted that the department is working to improve the reliability of the overstay information on foreign nationals who entered the country under student visa categories by, among other things, adding data on each visa holder’s last date of compliance and modernizing the database that contains data on individuals holding student visas. DHS officials stated that the fiscal year 2016 overstay report—which they expect to be issued in early 2017—will include reliable overstay estimates for these foreign student visa categories. The fiscal year 2015 overstay report also did not include information on foreign visitors who entered the United States from Canada and Mexico using land POEs because of unreliable collection of departure data at these POEs. The collection of departure information at land POEs is more difficult than at air and sea POEs because of the lack of electronically captured biographic information on foreign nationals departing the country through land POEs. 
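The matching logic DHS describes can be sketched in simplified form. This is a minimal illustration, not DHS's actual system; the function name and record fields are hypothetical, and only the two overstay groups named above are modeled:

```python
from datetime import date

def classify_arrival(admitted_until, departure_date=None, status_changed=False):
    """Sketch of the overstay classification DHS describes.

    admitted_until  -- last day of the lawful admission period
    departure_date  -- recorded departure, or None if no record was matched
    status_changed  -- True if an extension or other valid status was obtained
    """
    if status_changed:
        return "not an overstay (extension or status change)"
    if departure_date is not None:
        if departure_date <= admitted_until:
            return "not an overstay (departed on time)"
        return "out-of-country overstay"      # departed after admission expired
    return "suspected in-country overstay"    # no departure record matched

# No departure record and no status change -> suspected in-country overstay
print(classify_arrival(date(2015, 6, 30)))
# Departure matched, but two weeks late -> out-of-country overstay
print(classify_arrival(date(2015, 6, 30), departure_date=date(2015, 7, 14)))
```

The key design point the report highlights is the last branch: with no departure record at all, an individual can only be a *suspected* in-country overstay, which is why unreliable departure collection at land POEs prevents reporting for those crossings.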
Specifically, land POEs do not receive information on anticipated arrivals or departures, because travel across these POEs is often on foot or in private vehicles rather than through a transportation company, such as an airline or passenger ship operator, that provides CBP with advance passenger manifests. To address this limitation for the land POEs at the northern border specifically, DHS and the Canada Border Services Agency implemented the Beyond the Border agreement in October 2012, under which they exchange entry records for land crossings between the two countries so that an entry into one is recorded as an exit from the other. However, according to DHS, the southwest border with Mexico does not present the same opportunities as the border with Canada because Mexico’s border infrastructure and data collection capabilities are more limited. As a result, DHS officials noted that they have started discussions with Mexican government officials to set up a land pilot on the Mexican side of the border to capture information from travelers entering Mexico, similar to the information captured and exchanged under the Beyond the Border initiative with Canada. DHS has also been exploring other methods and technologies for obtaining biographic and biometric data from travelers departing the country through land POEs on the border with Mexico, such as the pedestrian biometric exit field test at the Otay Mesa border crossing near San Diego discussed earlier. DHS expects to start reporting overstay rates for foreign visitors who entered the country through land POEs in the fiscal year 2017 report.

Since our July 2013 report, DHS has not changed its enforcement priorities with respect to potential overstays, focusing its enforcement actions on individuals who may pose a national security or public safety risk. 
Within ICE, the Homeland Security Investigations (HSI) Counterterrorism and Criminal Exploitation Unit (CTCEU) oversees the program for investigating nonimmigrant visa violators who may pose a national security risk. CTCEU receives system-generated lists of overstay leads from the Arrival and Departure Information System, which are produced by matching arrival and departure data on foreign nationals. On a weekly basis, CTCEU also receives information on overstay leads from the Student and Exchange Visitor Information System on foreign students who have remained in the United States beyond their authorized periods of admission. Once these leads are received, CTCEU analysts determine whether the individuals on these lists meet DHS’s overstay enforcement priorities based on national security and public safety criteria. CTCEU prioritizes investigation of overstay leads based on the perceived risk each lead is likely to pose to national security and public safety as determined by threat analysis; to do so, it uses an automated system to assign each overstay lead a priority ranking based on threat intelligence information. For the records that meet DHS’s overstay enforcement priorities, CTCEU analysts then conduct manual searches of other databases to determine, for example, whether the individual obtained an extension, or other valid immigration status or protection, and is therefore not an overstay. For their priority records, if CTCEU analysts are unable to identify evidence of a change in status or a departure, they search for the nonimmigrant’s current U.S. address, and if they are able to identify an address, they send the lead to the relevant HSI field office for investigation. In addition, starting in 2014, CTCEU has been using social media and open source information to locate and track individuals. 
HSI field offices only investigate a case if they have derogatory information on an individual or if they have viable location information, according to ICE officials. CTCEU sends overstay leads that do not meet DHS’s enforcement priorities to ICE’s ERO for potential enforcement action. According to ICE data, between fiscal years 2013 and 2015, CTCEU reviewed approximately 2.7 million overstay leads and closed 871,463 leads (about 32 percent) through its vetting process (see table 4). The most common reasons for closure were subsequent departure from the United States or pending immigration benefits. CTCEU had 155,182 overstay leads (about 6 percent) open under continuous monitoring. CTCEU sent 26,982 overstay leads (about 1 percent) to HSI field offices for further investigation because they represented national security or public safety threats. The majority of overstay leads CTCEU reviewed during this time period (over 60 percent) did not meet DHS’s priorities and were referred to ERO for potential enforcement action. According to ICE data, CTCEU’s overstay enforcement efforts resulted in about 5,000 administrative arrests, 369 criminal arrests, 333 indictments, and 300 convictions from fiscal year 2013 through fiscal year 2015, as shown in table 5. Of the more than 1.6 million overstay lead referrals sent by CTCEU to ERO between fiscal years 2013 and 2015, ERO did not send any leads to field offices for further investigation or enforcement action. ERO conducts reviews of the CTCEU overstay lead referrals to determine whether they meet DHS’s priorities and maintains the records of these referrals for reference and periodic reviews. ERO did not send any of CTCEU’s referrals to ICE field offices for enforcement action because the referrals did not meet DHS’s enforcement priorities. 
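The lead totals ICE reported imply the breakdown below. A quick tabulation reproduces the percentages in the text; note that the 2.7 million total is approximate, so the ERO remainder computed here is approximate as well:

```python
total_leads = 2_700_000  # approximate total reviewed, FY 2013-2015 (per ICE data)
dispositions = {
    "closed through vetting": 871_463,
    "open under continuous monitoring": 155_182,
    "sent to HSI field offices": 26_982,
}
# Leads referred to ERO are the remainder -- "more than 1.6 million" in the text.
dispositions["referred to ERO"] = total_leads - sum(dispositions.values())

for name, count in dispositions.items():
    print(f"{name}: {count:,} ({count / total_leads:.0%})")
```

The remainder works out to roughly 1.65 million leads, or about 61 percent, consistent with the report's "over 60 percent" figure.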
Specifically, ERO officials said that in most cases, overstay lead referrals do not have the criminal convictions required to classify them as a DHS enforcement priority. As a result, under DHS’s current priorities, ICE’s overstay enforcement efforts are limited to potential overstays involving national security and public safety threats.

We provided a draft of this report to DHS for review and comment. DHS provided written comments, which are noted below and reproduced in full in appendix II, and technical comments, which we incorporated as appropriate. In its comments, DHS stated that CBP has made progress testing and evaluating biometric exit capabilities since our work was completed. DHS noted that it plans to develop a biometric exit system at airports based on the facial recognition pilot program conducted at Hartsfield-Jackson Atlanta International Airport. In December 2016, this system became the Biometric Verification System, which is biometrically confirming selected travelers departing the United States at the airport. DHS further noted that CBP will continue to test different facial image capture devices and work with airlines to more fully integrate the Biometric Verification System into the airline boarding process at additional airport locations. To support this effort, DHS reported that CBP has made progress in developing the documentation needed to designate Biometric Exit as a "program of record," indicating that it has met certain thresholds to allow for procurements and execution of funds. DHS also reported that CBP is developing a spend plan describing the execution of up to $1 billion that will accrue pursuant to the Consolidated Appropriations Act, 2016, for implementation of a biometric entry and exit system. In addition, DHS reported that CBP drafted an overstay report for fiscal year 2016, which it expects to release by the end of February 2017. 
DHS stated that the report addresses over 95 percent of all nonimmigrants admitted by air to the United States, and will include student visa categories. DHS stated that it plans to report these numbers annually, as required. We are sending copies of this report to the Secretary of Homeland Security, appropriate congressional committees and members, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or gamblerr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

In addition to the contact named above, Adam Hoffman (Assistant Director), Juan Tapia-Videla (Analyst-In-Charge), Miriam Hill, Megan Erwin, Sasan J. “Jon” Najmi, Amanda Miller, Adam Vogt, Eric Hauswirth, Richard Hung, and Nathan Tranquilli made significant contributions to this report.

Each year, millions of visitors come to the United States. Overstays are individuals lawfully admitted on a temporary basis who then remain in the country beyond their authorized period of admission. DHS has primary responsibility for identifying and addressing overstays. In 2004, DHS was required to develop a plan to accelerate full implementation of an automated biometric entry-exit system. In various reports, GAO identified a range of long-standing challenges DHS has faced in its efforts to develop and deploy this capability and to use entry and exit data to identify overstays. For example, in 2013, GAO recommended that DHS establish timeframes and milestones for a biometric air exit evaluation framework and document the reliability of its overstay data. DHS concurred with the recommendations and addressed them. GAO was asked to review DHS's progress in developing a biometric exit capability. 
This report examines DHS's efforts since GAO's 2013 report to (1) develop and implement a biometric exit capability and (2) report on and address potential overstays. GAO reviewed statutes and DHS documents and interviewed DHS officials about biometric exit capability development and overstays reporting. GAO also observed four biometric entry and exit pilot programs and analyzed overstay data for fiscal years 2013 through 2015 (most recent at time of review). GAO is not making any new recommendations in this report. In its comments, DHS stated that it is using a biometric verification system to confirm the departure of selected travelers at one airport and plans to release its 2016 overstays report in late February 2017. Since GAO's 2013 report on the Department of Homeland Security's (DHS) efforts to develop a biometric exit capability to collect biometric data, such as fingerprints, from individuals exiting the United States, U.S. Customs and Border Protection (CBP) has conducted four pilot programs to inform the development and implementation of a biometric exit system. CBP has made progress in testing biometric exit capabilities, but various longstanding planning, infrastructure, and staffing challenges continue to affect CBP's efforts to develop and implement a biometric exit system. CBP set 2018 as the goal for initial implementation of a biometric exit capability in at least one airport and is working with airlines and airports on strategies for using public/private partnerships to reduce costs and give industry more control over how a biometric exit capability is implemented at airport gates. However, the agency cannot complete the planning process until these partnership agreements and implementation decisions are finalized. As GAO has also previously reported, infrastructure limitations are a challenge to implementing a biometric air exit capability. For example, CBP noted that U.S. 
airports generally do not have outbound designated secure areas for exiting travelers where biometric information could be captured by U.S. immigration officers. CBP recognizes these challenges and intends to use the information gained from the pilot programs to identify biometric exit technology and staffing processes that are effective in the airport environment. As CBP is in the process of finalizing its approach, it is too early to assess the agency's plans for developing and implementing a biometric exit capability and the extent to which those plans will address identified challenges. Since GAO's 2013 report, DHS has reported some required information on potential overstays—individuals who are admitted to the country under a specific nonimmigrant category but exceed their lawful admission period—and has not changed its enforcement priorities for potential overstays. In January 2016, DHS issued its first report on estimated overstay rates that covered fiscal year 2015, which included some but not all overstay information required by statute. The report described expected overstay rates by country for foreign visitors lawfully admitted for business or pleasure through air and sea ports of entry (POE) who were expected to depart the United States in fiscal year 2015. However, because of data reliability concerns, the report did not include all information required by law, including overstay rates for foreign visitors who entered the country through land POEs or under other nonimmigrant categories. According to DHS officials, the report for fiscal year 2016 will include reliable overstay rates on foreign students arriving through air and sea POEs. DHS expects to start reporting overstay rates for foreign visitors who entered the country through land POEs in the report for fiscal year 2017. 
DHS has improved overstay reporting by, among other things, enhancing the systems it uses to process entry and exit biographic data for potential overstays and is exploring options to collect information from land POEs. DHS has not changed its enforcement priorities with respect to potential overstays, continuing to focus its enforcement actions on individuals who may pose a national security or public safety risk. Specifically, in fiscal years 2013 through 2015, the agency reviewed approximately 2.7 million overstay leads and sent 26,982 of them (about 1 percent) to field offices for further investigation.
The approach of the Army, Navy, and Defense Logistics Agency to complying with Department of Defense requirements was to develop economic models to determine the maximum amount of inactive inventory they could retain. The Air Force did not develop an economic retention model; rather, it employed a model based on historical usage patterns. Most recently, all the components lowered their maximum levels (referred to as ceilings) for items in economic retention during the 1990s to help them meet inventory reduction targets.

The Department requires that an economic model for deciding whether to retain or dispose of an inventory item compare the cost of retention with the cost of disposal and select the option with the lesser cost. As the amount of inactive stock increases, the cost of retention increases (more items cost more to hold) and the cost of disposal decreases (with greater amounts of an item on hand, the likelihood of having to repurchase it becomes less). Equilibrium is reached when the additional cost of retention equals that of disposal. This equilibrium level of inventory is the economic retention level—the largest retention amount of an inactive item that can be justified by economic analysis. Any amount of inventory over this level would become eligible for retention on a contingency basis or disposal.

Management of the Department’s secondary inventory is a complex process, and effectively implementing systemwide improved management approaches has been a long-standing challenge for the Department. However, following the end of the Cold War, the Department recognized that it had unnecessarily high inventories as a result of major reductions in its force structure, and it directed the components to take action to lower inventory levels, including economic retention inventory. 
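The required cost comparison can be illustrated with a toy model in which each additional unit is retained only while its holding cost stays below its expected repurchase cost; the function, the geometric decay, and all cost figures are illustrative assumptions, not actual Department cost factors:

```python
def economic_retention_level(max_qty, hold_cost, repurchase_cost, p_need):
    """Largest inactive quantity justified by economic analysis (toy model).

    hold_cost       -- cost to retain one unit over the analysis horizon
    repurchase_cost -- cost to buy a unit back if it is disposed of and needed
    p_need(q)       -- assumed chance the q-th retained unit is ever needed;
                       it declines in q, so expected disposal cost falls as
                       on-hand stock grows
    """
    level = 0
    for q in range(1, max_qty + 1):
        # Retain the q-th unit while retention is cheaper than expected disposal.
        if hold_cost < repurchase_cost * p_need(q):
            level = q
        else:
            break  # equilibrium reached: the marginal costs have crossed
    return level

# Illustrative decay: each successive unit is 10 percent less likely to be needed.
level = economic_retention_level(200, hold_cost=10.0, repurchase_cost=100.0,
                                 p_need=lambda q: 0.9 ** q)
print(level)
```

With these made-up numbers, units are retained while 100 × 0.9^q exceeds 10, i.e. through the 21st unit; beyond that point disposal becomes the cheaper option, which is exactly the equilibrium the Department's requirement describes.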
In response to this directive, during 1994, the Army, Navy, and Defense Logistics Agency chose to lower economic retention inventory levels by placing a preset maximum retention level that generally fell below the levels calculated by the models. The Air Force lowered its maximum level in 1996. The Department reported its economic retention inventory fell from about $13.8 billion to about $9.4 billion (about 32 percent) between fiscal year 1991 and 1999 (see fig. 1) and by 40 percent when adjusted for inflation. The latter part of this period covers the years when lower ceilings were put in place by the services and Defense Logistics Agency. Although all three services reported reductions in economic retention inventory levels during this time, the changes were uneven. The Air Force reported the smallest decrease in economic retention inventory—from about $5.1 billion to about $4.5 billion (about 12 percent). In contrast, the Army reported a decrease from about $1.3 billion to about $600 million (about 54 percent) and the Navy from about $5.6 billion to about $1.6 billion (over 71 percent). On the other hand, the Defense Logistics Agency reported an increase in economic retention inventory from about $1.8 billion to about $2.7 billion (about 50 percent) as a result of a decision to consolidate management responsibility for all consumable items within the Agency. According to Department data, if it had not required the services to transfer management of large quantities of inventory to the Defense Logistics Agency during the 1990s, the Defense Logistics Agency inventory would have decreased by over a billion dollars. (See app. I for more details on how the composition of secondary and retention inventory changed between fiscal year 1991 and 1999.) 
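The reported reductions can be checked with simple percent-change arithmetic. The sketch below uses only the nominal dollar figures from the text; the inflation-adjusted 40-percent figure would require a deflator not given here, so it is not computed:

```python
def pct_change(start, end):
    """Percent change from start to end (negative means a reduction)."""
    return (end - start) / start * 100

# Nominal economic retention inventory changes reported for FY 1991-1999,
# in billions of dollars.
for name, start, end in [("DoD total", 13.8, 9.4),
                         ("Air Force", 5.1, 4.5),
                         ("Army", 1.3, 0.6),
                         ("Navy", 5.6, 1.6),
                         ("Defense Logistics Agency", 1.8, 2.7)]:
    print(f"{name}: {pct_change(start, end):+.0f}%")
```

Running this reproduces the figures in the text: about -32 percent for the Department, -12, -54, and -71 percent for the Air Force, Army, and Navy, and +50 percent for the Defense Logistics Agency.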
It is also important to note that, in response to a congressional mandate, the Department is conducting an independent study of secondary inventory and parts shortages as required by section 362 of the National Defense Authorization Act for Fiscal Year 2000. As described in the authorization act, the independent study is to include analyses of the appropriate levels and use of secondary inventories, alternative methods for disposing of excess inventory, and the application of private sector cost calculation models in determining the cost of secondary inventory storage. According to the Department, the study is scheduled for completion by the end of August 2001.

The services and Defense Logistics Agency do not have sound analytic support for their approaches for determining whether they are holding the correct amount of items in economic retention inventory. While the components (with the exception of the Air Force) developed individual economic models designed to place inactive inventory in economic retention status as early as 1969, they have not used them since 1994. Instead, the components lowered the maximum levels of inventory that could be held (referred to as ceilings) to make economic retention determinations that would help achieve agency inventory reduction goals. Agency information indicates that this approach has helped to reduce inventory levels. However, the components have not annually reviewed the analyses used to support their economic retention decisions, as required by the Department, and therefore have no assurance that the inventories held in economic retention status are appropriate. Although the components were not using their economic retention models to manage inventory levels, we did generally review the models. We noted that factors and assumptions within the models differed without explanation and were not current. 
Given the differences we found in these models, such as varied and outdated cost factors and assumptions and the lack of support for them, it is uncertain whether the models determine an accurate retention level. A methodology for determining how many items are to be kept in economic retention status, which the Department of Defense requires, should compare the costs of retention to the costs of disposal of an inventory item. The Army, Navy, and Defense Logistics Agency developed models designed for making economic decisions that consider the costs of retention and the costs of disposal. The Air Force does not compare retention and disposal costs in determining economic retention inventories. Instead, the Air Force employs historical usage levels to determine economic retention levels. The components developed their models in different ways and use different factors and assumptions, without detailed documentation. The amount of inventory to hold in economic retention varies by model depending on the factors and assumptions in the models. The economic retention models of the three components generally meet the Department’s requirements to compare retention costs to disposal costs, but the factors and assumptions in them vary across the components. For example, the Army and Navy use a factor of obsolescence and the Defense Logistics Agency does not. In addition, the values for similar factors used in economic retention models varied among components. For example, the Army’s value for loss rates (loss through theft or decay) is 1 percent, the Navy’s value is 4 percent, and the Defense Logistics Agency does not use a loss rate. Furthermore, the components have not appropriately updated their model assumptions. For example, prescribed discount rates, a key assumption in all economic retention models, vary across components’ models. 
The Navy uses a 10-percent discount rate, and the Army and the Defense Logistics Agency use a 7-percent rate for computing net present values. Neither value matches the discount rates recommended by the Office of Management and Budget for a cost-effectiveness analysis, the type of analysis that applies to the decision to retain or dispose of inventory. The rates are also inconsistent with our guidance on discount rates. For example, for the year 2000, the Office of Management and Budget discount rate for a cost-effectiveness analysis was 4.2 percent for a 30-year analysis; the rate was 2.9 percent in 1999. Further information about component economic models can be found in appendix II of this report.

The use of a maximum level to manage economic retention stocks (commonly called ceilings by components) makes the Department vulnerable to retaining items when it is uneconomical to retain them or disposing of items that are economical to keep. The components judgmentally developed their ceilings for economic retention inventory, which differ and have yielded lower levels of economic retention inventory than the levels calculated by the economic retention models. During 1994-96, the components established different ceilings for items in economic retention. A ceiling imposes an upper constraint, expressed in years of demand—the quantity needed on an annual basis to meet requirements—on how much inactive inventory can be retained. Prior to the 1990s, the components had set ceilings on retention inventory that varied but generally exceeded the levels determined by their economic retention models. While component ceilings varied in the span of years of demand, they also varied in the total years of inventory covered. The Army ceiling is applied to inventories above active inventory requirements. The Air Force, Navy, and Defense Logistics Agency ceilings apply to their entire inventory requirements, including active inventory. Table 1 summarizes the maximum levels used by each component during the 1990s. 
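To see why the choice of discount rate matters for a retain-or-dispose analysis, compare the present value of a future repurchase under the rates cited above; the $1,000 cost and 10-year horizon are illustrative assumptions:

```python
def present_value(future_cost, rate, years):
    """Discount a cost incurred `years` from now back to today's dollars."""
    return future_cost / (1 + rate) ** years

# A $1,000 repurchase expected 10 years out, under the rates cited in the text.
for rate in (0.10, 0.07, 0.042, 0.029):
    print(f"at {rate:.1%}: ${present_value(1000, rate, 10):,.2f}")
```

The higher the discount rate, the smaller a future repurchase looks in today's dollars, which tilts the analysis toward disposal; using 10 or 7 percent rather than OMB's lower cost-effectiveness rates therefore understates the value of retaining stock.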
According to component officials, the components now hold fewer items in economic retention status because the ceilings established in the 1990s replaced the levels calculated by economic retention models. The components’ more stringent ceilings result in smaller inventory level determinations than would be calculated with economic retention models. Figure 2 illustrates the level a model might calculate and how a more constrained ceiling would override it. For example, suppose a component’s ceiling for economic retention stock is 6 years of demand above requirements. If the model computed an economic retention limit (e.g., 8 years of demand of 25 items a year—200 items) that exceeded the maximum level (e.g., 6 years of demand—150 items), the ceiling (150 items) would be selected as the retention level. The additional stock (50 items) would be moved to other inactive categories (contingency or reutilization status) or be disposed of.

Components have not reviewed their economic retention models or their judgmentally established ceilings annually, although required to do so by the Department. The lack of component reviews of their retention analyses, either their models or ceilings, raises further questions about the cost-effectiveness of either approach. The Department’s required annual reviews are to focus on improving analyses supporting retention decisions by accounting for potential upward or downward trends in demand and/or the uncertainties of predicting future long-term demand based on historical data and improved estimates of costs used in retention decision-making. All components have conducted studies of their economic retention analyses, and initiatives were undertaken to meet inventory reduction goals during the 1990s, such as constraining economic retention model determinations with ceilings. 
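The ceiling override in the report's example works out as follows; this is a minimal sketch using the quantities from the example (25 items a year, an 8-year model level, and a 6-year ceiling):

```python
def apply_ceiling(annual_demand, model_years, ceiling_years):
    """Constrain a model-computed retention level by a ceiling in years of demand."""
    model_level = annual_demand * model_years
    ceiling_level = annual_demand * ceiling_years
    retained = min(model_level, ceiling_level)
    # Stock above the ceiling moves to contingency/reutilization status
    # or becomes eligible for disposal.
    excess = model_level - retained
    return retained, excess

# Model says 8 years of demand at 25 items/year; the ceiling is 6 years.
print(apply_ceiling(annual_demand=25, model_years=8, ceiling_years=6))  # (150, 50)
```

When the model level falls below the ceiling, the ceiling never binds and nothing is displaced, which is why the report's concern is limited to items whose model-computed retention exceeds the ceiling.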
However, no studies had been conducted to determine whether economic retention models could be used to establish appropriate Department inventory levels on an economic basis, rather than through the use of ceilings. There was little documentation available supporting the selection of the factors and assumptions used in economic retention models, such as obsolescence rates and discount rates. The various factors and assumptions might be appropriate, but in most cases the components lacked documentation describing why they were selected for use. Furthermore, the limited information that is available about the impact of ceilings indicates that they could be causing uneconomical disposals. For example, in a March 2000 study, the Army Audit Agency found no instances in which an item was disposed of when it was more economical to retain it, but it concluded, based on the statements of inventory managers, that maximum levels resulted in the disposal of items that were still economical to retain. Inventory managers also told us that maximum levels caused disposal of items that were still economical to retain, but components were unable to provide data about repurchases of disposed items because of limitations in component databases.

Components (other than the Air Force) have developed models designed to make economic retention decisions. However, none of the components currently use their economic retention models. Instead, they and the Air Force use ceilings to limit the amount of economic retention inventory they hold. Components have not properly documented their approaches to economic retention decisions. For example, common model factors and assumptions vary, lack consistency, and are not current. In addition, the Department does not have sound analytical support for the maximum levels currently in use. 
As a result, the components cannot currently depend on their models or ceilings to determine retention inventory levels without review and improvement. They also have not annually reviewed their approaches. However, the Department is currently conducting a mandated study of secondary inventory and spare parts shortages. Because the ceilings lack analytical support, and the model factors and assumptions vary without explanation and are out of date, the Department cannot provide reasonable assurance that inventories held in economic retention are the right amount.

We recommend that the Secretary of Defense direct the Secretaries of the Army, Navy, and Air Force and the Director of the Defense Logistics Agency, in consultation with the Under Secretary of Defense for Acquisition, Technology, and Logistics, to take the following actions:

Taking into consideration the results of the congressionally mandated study, establish milestones for reviewing current and recently used approaches for making decisions on whether to hold or dispose of economic retention inventory to identify actions needed to develop and implement appropriate approaches to economic retention decisions.

Annually review their approaches to meet Department regulations to ensure that they have sound support for determining economic retention inventory levels.

In written comments on a draft of this report, the Department of Defense partially concurred with our recommendations. The Department agreed with our recommendation that its components needed to annually review the appropriateness of their economic retention inventory levels. 
Regarding our draft recommendation that the components review their approaches to determining economic retention levels, the Department stated that the need for components’ further review of retention decisions would be determined after the completion of an independent study in August 2001 of secondary inventory and parts shortages required by section 362 of the National Defense Authorization Act for Fiscal Year 2000. The results of the study could affect component approaches to making economic retention decisions. The study is to report on such issues as the appropriate levels of secondary inventories, alternative methods for disposing of excess inventory, and the application of private sector cost calculation models in determining the cost of secondary inventory storage. Our recommendation focuses on reviewing the approaches for setting economic retention levels to minimize the possibility of inappropriate retention or disposal decisions. How the study results will affect how the Department should address our recommendation remains to be seen. Therefore, we modified our draft recommendation. We are now recommending that the Department establish milestones for taking action on the study’s recommendations as they relate to the economic retention issues that we raised in this report. The Department’s comments are reprinted in appendix IV. We are sending copies of this report to the appropriate congressional committees; to the Honorable Donald H. Rumsfeld, Secretary of Defense; the Honorable Joseph W. Westphal, Acting Secretary of the Army; the Honorable Robert B. Pirie, Jr., Acting Secretary of the Navy; the Honorable Lawrence J. Delaney, Acting Secretary of the Air Force; Lieutenant General Henry T. Glisson, Director, Defense Logistics Agency; and the Honorable Mitchell E. Daniels, Jr., Director, Office of Management and Budget. Please contact me at (202) 512-8412 if you have any questions. 
Key contributors to this report were Charles Patton, Donald Snyder, Scott Pettis, and Charles Perdue. After the end of the Cold War and a subsequent reduction in force structure, the Department of Defense recognized that it had high inventory levels and took action to reduce them. Department secondary inventory levels were reduced by about a third between 1991 and 1999 when adjusted for inflation. All four components reviewed reduced their secondary inventory levels during this time, although the level of reduction varied by component. The percentage of secondary inventory held in economic retention status also was reduced during this time, although there were fluctuations in inventory levels among components. There were also sizable shifts in the amount of consumable and reparable inventories managed by each component. The following sections provide details of Department and component secondary and economic retention inventory trends. The Department reported reductions in its secondary inventory during the 1990s. The amount of secondary inventory fell from $88 billion in 1991 to $64 billion in 1999 (a decline of 27 percent, and a 36-percent reduction when adjusted for inflation). As shown in figure 3, all four components reduced their secondary inventory levels during this time, although their performance was uneven and generally reflected the Department-wide trend. The three services reported reductions in their levels of secondary inventories by amounts ranging from 24 to 38 percent. The Defense Logistics Agency realized a smaller decrease of about 10 percent, primarily because the Department transferred management of many consumable inventories from the services to the Defense Logistics Agency during this time. This transfer helped the services meet their inventory reduction goals. 
In 1999, the portions of secondary inventory managed by components varied, with the Air Force managing the largest share of the Department’s secondary inventory, as figure 4 shows. Department-wide, the percentage of secondary inventory held in economic retention status fell slightly, from 15.7 percent in fiscal year 1991 to 14.7 percent in fiscal year 1999. However, there were sizable shifts in the percentage of secondary inventory held in economic retention status by each component. For example, the shares of secondary inventory held in economic retention status by the Army and Navy fell while the Air Force and Defense Logistics Agency portions increased (see fig. 5). The share of the Department’s economic retention inventories managed by each Defense component also varied, with the Air Force holding almost half, as figure 6 shows. Currently, the services manage mostly reparable items because management responsibilities for nearly all consumable items have been transferred to the Defense Logistics Agency. On September 30, 1999, over 34 percent ($3.2 billion) of the Department’s consumable inventory was in economic retention status. The Defense Logistics Agency managed $2.7 billion (85 percent) of the Department’s consumable economic retention inventory. The percentage of total Department consumable and reparable stock in economic retention status as of September 30, 1999, varied widely by component. Department consumable inventory in economic retention status fell by about 42 percent between September 30, 1993, and September 30, 1999. Each service reduced its consumable inventory in economic retention by an average of 73 percent between 1993 and 1999. However, the Defense Logistics Agency’s consumable economic retention inventories were reduced by only 26 percent during this time. 
By 1993, the Defense Logistics Agency consumable retention inventory had more than doubled to $3.7 billion from its 1991 amount of $1.8 billion due to the Consumable Item Transfer program. The 1993 amount subsequently fell to $2.7 billion in 1999. After the transfer of most of the services’ consumable items, the Defense Logistics Agency held about 85 percent of the Department’s consumable inventory (see fig. 7). The Air Force’s share of Department of Defense reparable items in economic retention status increased between 1993 and 1999. The Air Force reparable inventory in economic retention status increased by $526 million (about 14 percent) during this time. In contrast, the Navy reduced reparable inventory in economic retention status by 55 percent and the Army reduced its inventory in economic retention status by 63 percent between 1993 and 1999. By 1999 the Air Force managed over 70 percent of total Department reparable inventories in economic retention—up from 48 percent in 1993 (see fig. 8). The following sections provide more detail on the general factors and specific models the components use to determine economic retention limits and the ceilings the components use. An economic retention methodology for determining how many items are to be kept in economic retention status, which the Department of Defense requires, should compare the costs of retention to the costs of disposal of an inventory item. In practice, the components configure their models in different ways and often use distinct values for the factors and assumptions in their models. The solution the models seek, the economic retention limit, depends on how the models are set up and factors used in the models when calculating retention and disposal costs. The principal factors in calculating the cost of retention include the estimated costs of operating storage facilities based on the estimated value of the item in storage and the probability of damage, theft, or loss. 
The component responsible for managing the item estimates the cost factors by taking a percentage of the value, or price, of the item. Components typically determine that the appropriate price to use in this calculation is the last price paid for an item. If an item needs repair, this cost is also added. The percentages for all the cost factors are then multiplied by an item's price and totaled to obtain the estimated cost of retention. The end result of these calculations is that the model's estimate of retention cost rises for each additional unit of an item held. Another consideration for determining retention cost is the possibility that an item may become obsolete. An item could be replaced by a new item or the weapon system the item supports could be discontinued. To address this possibility, the value of the item is reduced based on estimates by the managing component of the likelihood it would not be used. For example, if an item is valued at $100 and there is a 0.9 probability that the item would be needed in a future period, then the $100 value of the item would be multiplied by 0.9 to yield a future value of $90, based on the 10-percent possibility of obsolescence. Almost two-thirds of the Department’s economic retention stock consists of reparable items. Some reparable items in inactive status are in disrepair or need to be upgraded. In these cases, a repair cost is factored into the calculation, which is typically based on the item’s estimated value in its unrepaired condition. Components also subtract an item’s repair cost from the price of a new item. Additionally, delivery costs (based on a percentage of the item’s price) for moving the item to a repair facility are estimated and added to the computation of retention costs. A key factor in determining the disposal cost of an item is the chance of having to repurchase the item later. 
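As a rough illustration of the calculation described above, the sketch below applies percentage cost factors to an item's price and discounts the item's value by the probability it will be needed. The function names and all factor values are illustrative assumptions, not Department figures.

```python
def retention_cost_per_unit(price, storage_rate, loss_rate,
                            repair_cost=0.0, delivery_rate=0.0):
    """Estimated cost of retaining one unit: percentage cost factors
    (storage, and damage/theft/loss) applied to the item's price, plus
    any repair cost and a delivery cost for moving the item to repair.
    All rates are hypothetical, for illustration only."""
    holding = price * (storage_rate + loss_rate)
    return holding + repair_cost + price * delivery_rate

def expected_future_value(price, p_needed):
    """Item value reduced for the possibility of obsolescence, as in
    the text's example: $100 * 0.9 = $90."""
    return price * p_needed

cost = retention_cost_per_unit(price=100.0, storage_rate=0.05, loss_rate=0.01)
value = expected_future_value(100.0, 0.9)
```

Because each additional unit adds its own holding cost, summing this per-unit figure reproduces the text's point that the model's estimate of retention cost rises with every additional unit held.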
To calculate the disposal cost, a component estimates the probability of repurchase and typically multiplies it by the item’s estimated price, which is the estimated purchase price in the future. Some components include estimates of administrative costs associated with procuring an item (such as contract costs) and the cost of starting up a production line to manufacture the item. Estimated future prices can be higher than original purchase prices because additional costs could be incurred, such as the administrative costs involved in contracting with a manufacturer and setup costs. However, items may become obsolete in the future. This estimated reduction in demand caused by obsolescence is also factored into the disposal cost. Since the probability of repurchase declines as the amount held increases, the estimated disposal cost of the item declines as more items are held. Additional adjustments to the disposal cost include estimated expenses associated with disposal (such as transportation to a disposal facility and conversion of sensitive items for sale) and administrative costs incurred in selling an item. These costs are estimated as a percentage of the price of an item and added to the disposal cost. Finally, an item’s estimated salvage value would be deducted from the estimated disposal cost to obtain the net disposal cost for each unit. The time value of money is also considered in determining whether to retain an item because the costs associated with storing or disposing of an item are incurred in the future. The costs and benefits of retaining or disposing of an item are computed on a present value basis (their value today) to equalize all of the costs and benefits. To make the present value calculation, a discount (interest) rate is used to factor in the time value of money. For example, if the model computes a benefit in one year that equals one dollar, discounting by 10 percent would make that present discounted value today equal to 91 cents. 
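The present value and disposal-cost arithmetic just described can be sketched as follows. The discount rate and cost inputs are illustrative assumptions, not values from any component model.

```python
def present_value(amount, rate, years=1):
    """Discount a future amount to today's dollars."""
    return amount / (1 + rate) ** years

# The text's example: one dollar a year from now, discounted at
# 10 percent, is worth about 91 cents today.
pv = present_value(1.00, 0.10)

def net_disposal_cost(future_price, p_repurchase, admin_cost=0.0,
                      setup_cost=0.0, disposal_expense=0.0, salvage=0.0):
    """Illustrative net disposal cost: the expected repurchase outlay
    (probability of repurchase times the estimated future price plus
    procurement and production start-up costs), plus disposal expenses,
    minus the item's estimated salvage value."""
    expected_repurchase = p_repurchase * (future_price + admin_cost + setup_cost)
    return expected_repurchase + disposal_expense - salvage
```

Because the repurchase probability declines as more units are held, plugging a falling `p_repurchase` into this function reproduces the text's observation that per-unit disposal cost declines with the quantity retained.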
Economic retention models use additional factors to determine whether to accept items offered for return from depots or other activities. If a return is accepted, then the estimated cost associated with that action would have to be included. For example, estimated transportation costs to return an item would be added to estimated storage costs and compared to estimated disposal costs to decide whether to accept the return of an item. The Army, the Navy, and the Defense Logistics Agency use economic models to calculate the economic retention limit for each item they analyze; the components then apply the ceiling limits described in this report. The Air Force model does not compare the costs of retention to the costs of disposal. Instead, the Air Force employs an item’s usage history and its retention ceiling to determine whether to dispose of an item. The specific factors and assumptions the three components’ models use and the Air Force computation are described in the following sections. The Army model compares retention and disposal values to determine the economic retention limit. In calculating disposal costs, the Army model includes any potential disposal value. In calculating retention costs, the model includes the benefits of retention, i.e., not having to reorder an item. The model also includes the probability of obsolescence. The Army model allows for the possibility of obsolescence to vary depending on the age of the item. The Army uses a 7-percent discount rate to calculate the present value of disposal and retention values. The Navy model compares what it calls holding costs to buy back costs to determine the economic retention requirement. Holding costs include a disposal value minus disposal transportation costs. The model also factors in an inventory loss rate and an obsolescence rate in computing storage costs. 
The major factors of buy back costs are those costs that would be incurred if the item had to be reordered and include the item replacement price, the administrative costs of procurement, and a manufacturer's setup costs to produce unique items (if needed). The model computes buy back and retention costs on a present value basis (using a 10-percent discount rate) because reorder costs would be incurred in the future. If the item is reparable, repair costs (if appropriate) are included in the computation to account for the costs that would be incurred to make the item ready for issue. The Defense Logistics Agency determines holding costs by multiplying storage costs by the probability of needing the material. Holding costs are compared to expected disposal costs. If holding costs are less than disposal costs (the expected cost of reprocurement minus disposal proceeds), the item should be retained. The model can also evaluate whether an item manager should accept a return and pay the associated costs to bring an item back from retail operations or deny the return and take the chance of reprocurement in the future. The costs of accepting a return are added to the expected holding costs, and the sum is compared to the expected cost to dispose and repurchase. The analysis is done on a present value basis using a discount rate of 7 percent because both costs involve potential outlays in the future. The Air Force model does not compute the costs of disposal or storage as required by the Department. The Air Force computation of economic retention limits is based on an item's usage history. The current Air Force model distinguishes between inactive items that have experienced zero demand over a 5-year span, and have no foreseeable demand, and demand items—those with a demand history over the past 5 years. 
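The retain-or-dispose comparison made by the Army, Navy, and Defense Logistics Agency models can be sketched as follows. Because per-unit holding cost rises and per-unit disposal cost falls as more units are held, the crossover point is the economic retention limit. The cost figures below are illustrative, not component data.

```python
def should_retain(holding_cost, disposal_cost):
    """Retain a unit when its expected holding cost is less than the
    expected cost of disposing now and possibly repurchasing later."""
    return holding_cost < disposal_cost

def economic_retention_limit(unit_costs):
    """unit_costs: per-unit (holding, disposal) cost pairs for
    successive units held. The limit is the number of units for which
    retention remains cheaper than disposal."""
    limit = 0
    for holding, disposal in unit_costs:
        if not should_retain(holding, disposal):
            break
        limit += 1
    return limit

# Holding cost rises while disposal cost falls: the first two units
# are worth keeping, the third is not.
limit = economic_retention_limit([(5, 20), (10, 12), (15, 8)])
```

The same comparison supports the return-acceptance decision: adding the cost of accepting a return to the holding side before the comparison, as the Defense Logistics Agency model does.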
If an item has been categorized as inactive, the retention limit is based on a non-demand amount—called either an insurance or numeric stock objective limit. A maximum of five items is held for numeric stock objective items and a maximum of two items is held for insurance items. According to component officials and other experts, most Air Force items are in these categories. For items that are categorized as demand items, the economic retention limit is determined through several steps. First, the system computes the gross retention level by adding 9 years of demand to peak requirements (the highest quarterly demand level for the item from the prior 25 quarters). Second, the minimum retention level for stock needing repairs (unserviceable items) is computed by adding (1) condemned stock for the past 9 years, (2) stock held at supply facilities, and (3) peak requirements identical to the factors computed for the gross retention level. Third, these two levels are adjusted to establish maximum (gross) and minimum inventory levels, which are compared to the number of assets in inventory. Unserviceable items are retained only when the number of serviceable assets falls below the minimum retention level. Table 2 summarizes the factors and assumptions used in component economic retention models. Economic retention models and ceilings are to be applied only to items with predictable and steady annual demand. Limited-demand items—those with no or infrequent demand in a year—are also held in economic retention status. They are retained even though the probability of demand is low because the lack of the items would seriously hamper the operational readiness of a weapon system. None of these inventory items would be considered for disposal unless they exceeded Department inventory requirements for limited-demand items. The majority of items held in economic retention status fell into limited-demand status. 
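The Air Force demand-based computation described above can be sketched as follows. The demand figures are illustrative assumptions, not Air Force data.

```python
def air_force_retention_levels(annual_demand, quarterly_demands,
                               condemned_9yr, supply_facility_stock):
    """Gross retention level: 9 years of demand plus the peak quarterly
    demand from the prior 25 quarters. Minimum retention level (for
    unserviceable stock): condemned stock for the past 9 years, plus
    stock held at supply facilities, plus the same peak requirement."""
    peak = max(quarterly_demands[-25:])   # highest of the prior 25 quarters
    gross = 9 * annual_demand + peak
    minimum = condemned_9yr + supply_facility_stock + peak
    return gross, minimum

gross, minimum = air_force_retention_levels(
    annual_demand=40,
    quarterly_demands=[8, 10, 15, 9, 12],   # peak quarterly demand is 15
    condemned_9yr=20,
    supply_facility_stock=10)
```

In the sketch, unserviceable units would be retained only while serviceable assets fall below `minimum`, mirroring the rule in the text.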
Table 3 details the number and dollar value of items that the retention models and ceilings would analyze and the items that would not be analyzed due to low or no demand. As discussed in the body of this report, components used a variety of maximum levels of demand (commonly called ceilings by components) to further reduce inventory levels (the low-demand items noted in table 3 above were not affected by ceiling limits). Three of the four components reviewed imposed ceilings on the number of years of demand for all items, and most inventory control points in the fourth component (the Army) also used ceilings. Through most of the 1990s, the Army set a different ceiling on how many years of demand could make up the retention limit, depending on the characteristics of the item. According to Army officials, in 1992 the Army adjusted its factors to a ceiling of 7 years of demand above requirements for essential items (items that directly support critical parts of a weapon system). The Army used a ceiling of 4 years of demand above requirements for nonessential items (items that do not directly support the critical parts of a weapon system). In December 1999, the Army revised the ceiling limits to 7 years of demand above requirements for serviceable reparables, 6 years of demand above requirements for unserviceable reparables, and 5 years of demand above requirements for all other items. At the same time, the Army decided to let each inventory control point set its own ceilings. An Army official at one of its five inventory control points stated that his inventory control point is using the determinations of the economic retention model to set retention inventory limits without the ceilings. Army officials at the other four inventory control points stated that they have not changed their ceilings. In 1994, the Navy implemented ceilings on years of demand that depend on the expected future growth of the weapon system. 
The Navy computation system for inventory calculates a single inventory limit that includes active requirements and inactive inventory. As a result, the demand ceiling (4, 8, or 12 years of total demand) applies to the entire computed demand limit—not just economic retention limits. The ceiling is 12 years of total demand for items supporting new weapon systems. A ceiling of 8 years of total demand is applied to items supporting weapon systems in common use, and 4 years of total demand is applied to items supporting weapon systems approaching obsolescence. The Defense Logistics Agency ceiling for items in economic retention, implemented in 1994, is 6 years of total demand. The prior ceiling on years of demand for inactive inventory was 10 years of total demand. The Air Force ceiling for inventory in economic retention status was reduced from 20 years of total maximum demand to 13 years of total maximum demand in 1996, according to Air Force officials. To determine component approaches to making economic retention decisions, we interviewed Department and component officials who managed economic retention models and ceilings in the Defense Logistics Agency, the Army, the Navy, and the Air Force and reviewed documents they provided. Because the Marine Corps had less than 1 percent of the Department’s total economic retention inventory, we did not include it in our analysis. We did not independently test or validate the component models or inventory systems. We interviewed officials and gathered relevant documentation for our review at the following locations:

- The Office of the Deputy Under Secretary of Defense (Logistics), Washington, D.C.
- The Defense Logistics Agency, Ft. Belvoir, Virginia.
- The Army Materiel Systems Analysis Activity, Aberdeen, Maryland.
- The Air Force Materiel Command, Wright-Patterson Air Force Base, Ohio.
- The Navy Inventory Control Point, Mechanicsburg, Pennsylvania.
To support our analysis of trends in component inventory levels, we analyzed information in the Department of Defense Supply System Inventory Report for September 30 of fiscal years 1991 through 1999. Our analysis included data for fiscal years 1991 through 1999 for trends in economic retention. We reviewed data from these time periods because the Department changed the way dollar estimates of inventory were calculated in 1990; inventory data was reported consistently in the Supply System Inventory Report from 1991 through 1999. Component officials stated that there were differences in the categories of inventory components reported in the Supply System Inventory Report. We did not adjust the Supply System Inventory Report data for this analysis. We analyzed data for reparable and consumable items for fiscal years 1993 through 1999 because the Department first separately reported data on reparable and consumable items in its 1993 Supply System Inventory Report. Our review focused on the models and items with predictable demand. We did not analyze items with limited demand (see app. II). We did not independently test or validate the accuracy of the data reported in these inventory systems. We adjusted Department data to account for inflation as part of our analysis of component performance in realizing inventory reductions over time. We conducted our work between May 1999 and January 2001 in accordance with generally accepted government auditing standards.

As of September 1999, the Department of Defense (DOD) reported that it owned secondary inventory worth about $64 billion and that $9.4 billion of that inventory is more economical to retain than to dispose of and possibly repurchase later. This report focuses on whether DOD's economic retention decisions are sound. GAO found that military components (other than the Air Force) have developed models to help make economic retention decisions on secondary inventory. However, none of the components now use their economic retention models. Instead, they and the Air Force use ceilings to limit the amount of economic retention inventory they hold. Components have not properly documented their approaches to economic retention decisions. For example, common model factors vary and assumptions are inconsistent and out of date. In addition, DOD lacked sound analytical support for the maximum levels it now uses. As a result, the components cannot depend on their models or ceilings to determine retention inventory levels without review and improvement, and they have not reviewed their approaches annually. The Department therefore lacks a sound basis for its approach to managing items held in economic retention status and cannot ensure that those inventories are the right amount.
Since passage of the Higher Education Act of 1965, a broad array of federal student aid programs, including loan programs, has been available to help students finance the cost of postsecondary education. Currently, several types of federal student loans administered by Education make up the largest portion of student loans in the United States. Four types of federal student loans are available to borrowers and have features that make them attractive for financing higher education. For example, borrowers are not required to begin repaying most federal student loans until after graduation or when their enrollment status significantly changes. Further, interest rates on federal student loans are generally lower than those of other financing alternatives, and the programs offer repayment flexibilities if borrowers are unable to meet scheduled payments. As outlined in table 1, the four federal loan programs differ in that interest rates may or may not be subsidized based on the borrower’s financial need, loans may be designed to specifically serve undergraduate or graduate and professional students, and loans may serve to consolidate and extend the payment term of multiple federal student loans. Education administers federal student loans and is generally responsible for, among other duties, disbursing, reconciling, and accounting for student loans and other student aid, and tracking loan repayment. Although no other federal agencies have a direct role in administering student loans, other agencies may become involved in the event that a borrower fails to make repayment. For example, Education may coordinate with Treasury to withhold a portion of federal payments to borrowers who have not made scheduled loan repayments. Such payment withholding, known as administrative offset, can affect payments to individuals by various federal agencies. 
Offsets of income tax refunds would involve the Internal Revenue Service and offsets of Social Security retirement or disability benefits would involve the Social Security Administration. Student loans are also available from private lenders, such as banks and credit unions. Private loans differ from federal loans in that they may require repayment to begin while the student is still in school, they generally have higher interest rates, and the rates may be variable as opposed to fixed. Unlike federal student loans, private student loans may be more difficult to obtain for some potential borrowers because they may require an established credit record and the cost of the loan may depend on the borrower’s credit score. Private student loans are a relatively small part of the student loan market, accounting for 10 to 15 percent of outstanding student loan debt—about $150 billion—as of January 2012. Older Americans—that is, Americans in or approaching retirement—may hold student loans for a number of reasons. For example, because such loans may have a 10- to 25-year repayment horizon, older Americans may still be paying off student loan debt that they accrued when they were much younger. They may also have accrued student loan debt in the course of mid- or late-career re-training and education. In addition, they may be holding loans taken out for the education of their children, either through co-signing or through Parent PLUS loans. According to the 2010 SCF, households headed by older individuals are much less likely than those headed by younger individuals to hold student loan debt. As of 2010, about 3 percent of surveyed households headed by people 65 and older—representing approximately 706,000 households—reported some student loan debt. This compares to 24 percent for households headed by those under 65—representing about 22 million households. 
The decrease in the incidence of student loan debt is even more marked for households headed by the oldest individuals—only 1 percent of those aged 75 or over reported such debt. Although few older Americans have student debt, a majority of households headed by those 65 and older reported having some kind of debt, most commonly home mortgage debt, followed by credit card and vehicle debt. While the incidence of all debt types declines for households headed by those 65 and over, the incidence of student loan debt declines at a much faster rate. For example, the incidence of student loan debt for the 65-74 age group is less than half of that for the 55-64 age group—4 percent compared to 9 percent. In contrast, the incidence of any type of debt for the older age group is only about 17 percent less than the younger age group—65 percent compared to 78 percent. While relatively few older Americans have student debt, data from the SCF suggest that the size of such debt among older Americans may be comparable to that of younger age groups. Among all age groups, the median balances of student and other types of debt are dwarfed by median balances of home mortgage debt. Estimates of median student debt balances for the various age groups range from about $11,400 to about $15,500. Median mortgage debt, in contrast, ranges from about $58,000 to $136,000 among the same groups. Among households headed by those 65 and older, the estimated median student debt was about $12,000, and among those 64 and younger, about $13,000. However, given the small number of older households with student loans, it is important to note that the estimate of student debt for the 65 and older age category is a general approximation. From 2004 to 2010, an increasing percentage of households in all SCF age groups have taken on student loan debt (see fig. 1). 
During the same period, the percentage of households headed by individuals 65 to 74 who had some student loan debt increased from just under 1 percent in 2004 to about 4 percent in 2010—more than a four-fold increase. The percentages of households having student loan debt in the two youngest household categories—those 18 to 34 and those 35 to 44—were and remain much larger. Their rates of increase in that type of debt from 2004 to 2010 were comparatively modest—about 40 percent and 80 percent, respectively. Data from Education’s NSLDS also indicate substantial growth in aggregate federal student loan balances among individuals in all age groups, especially older Americans. Aggregate federal student loan debt levels more than doubled overall, rising from slightly more than $400 billion in 2005 to more than $1 trillion in 2013 (see fig. 2). The total outstanding student debt for those 65 and older was and remains a small fraction of total outstanding federal student debt. However, debt for this age group grew at a much faster pace—from about $2.8 billion in 2005 to about $18.2 billion in 2013, more than a six-fold increase. Although the Direct PLUS Loan program offers parents of dependent undergraduate students the opportunity to borrow to finance their children’s education, data from Education suggest that most federal student loan debt held by older Americans was not incurred on behalf of dependents, but primarily for the borrowers’ own education. About 27 percent of loan balances held by the 50 to 64 age group was for their children, while about 73 percent was for the borrower’s own education (see fig. 3). For age groups 65 and over, the percentages of outstanding loan balances attributable to the borrowers’ own education are even higher. For those aged 65-74, 82 percent of the outstanding student loan balances was for the individual’s own education, and for the 75 and older group, this was true of 83 percent. 
Because information on the age of the loans was not readily available to us, we do not know the extent to which the debt of older Americans is attributable to recently originated loans or loans originated many years ago during their prime educational years. Although older borrowers hold a small portion of federal student loans, they hold defaulted loans at a higher rate than younger borrowers. Individuals 65 or older held 1 percent of outstanding federal student loans in fiscal year 2013 (see fig. 4). However, 12 percent of federal student loans held by individuals age 25 to 49 were in default, while 27 percent of loans held by individuals 65 to 74 were in default, and more than half of loans held by individuals 75 or older were in default. According to Education data, older borrowers are in default on federal student loans for their children’s education less frequently than they are in default on federal student loans for themselves. Specifically, in fiscal year 2013, 17 percent of Parent PLUS loans held by borrowers ages 65 to 74 were in default, while 30 percent of loans for their own education were in default. Delinquent borrowers—those who have missed one or more payments—have more than a year to resume payments or negotiate revised terms before facing collection procedures. During the initial year of delinquency for Direct Loans, Education and the loan servicers make a number of attempts to help borrowers arrange for payments and avert default (see fig. 5). After the loan has been delinquent for 425 days (approximately 14 months), Education determines whether to take actions intended to recover the money it is owed. These actions can have serious financial consequences for the borrower. For example, Education may charge collection costs up to 25 percent of the interest and principal of the loan. Interest on the debt continues to accumulate during the delinquency and default period. In addition, Education may garnish wages or initiate litigation. 
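As a simple illustration of the collection-cost exposure described above (the 25 percent ceiling comes from the text; the loan amounts are hypothetical):

```python
def balance_with_collection_costs(principal, accrued_interest,
                                  collection_rate=0.25):
    """Education may charge collection costs of up to 25 percent of a
    defaulted loan's interest and principal, on top of the balance
    owed. The rate here uses that maximum; actual charges may be less."""
    base = principal + accrued_interest
    return base + base * collection_rate

# A hypothetical $10,000 principal with $2,000 of accrued interest
# could grow to $15,000 once maximum collection costs are added.
total = balance_with_collection_costs(10000.0, 2000.0)
```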
Education may also send the loan to a collection agency. The defaulted debt may also be reported to consumer reporting agencies, which can result in lower credit ratings for the borrower. Lower credit ratings may affect access to credit or rental property, increase interest rates on credit, affect employers’ decisions to hire, or increase insurance costs in some states. At 425 days, Education may also begin the process of sending newly defaulted loans to Treasury to recover the debt by withholding a portion of federal payments—known as offset. Federal payments subject to offset include wages for federal employees, tax refunds, and certain monthly federal benefits, such as Social Security retirement and disability payments. Each year, Education prepares a list of newly defaulted loans for Treasury offset. In 2014, newly defaulted debt had to be more than 425 days delinquent before the July deadline to be sent to Treasury in December. If the debt becomes 425 days delinquent after the cutoff, it is sent the following December (2015). Thus, defaulted debt is sent to Treasury 3 to 15 months after reaching 425 days of delinquency—between 17 and 29 months from the last date of payment on the loan (see fig. 6). According to Education officials, loans that have not been paid off are annually recertified as being eligible for offset. After a defaulted loan is certified as eligible for offset to Treasury, certain payments, such as any available tax refunds, are offset immediately, without prior notice to the debtor. Borrowers with monthly benefits available for offset are informed by mail that their benefits will be offset in 60 days and again 30 days before the offset is taken, allowing borrowers an additional 2 months to resume payment on their loans before offset occurs. Treasury assesses a fee for each offset transaction, which is subtracted from the offset payment. 
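The referral timing described above can be sketched as follows. The exact day of the July deadline is not given in the text, so the sketch assumes July 1; it is an illustration of the timing rule, not Education's actual processing logic.

```python
from datetime import date, timedelta

def treasury_referral_year(delinquency_start, cutoff=(7, 1)):
    """A loan reaching 425 days of delinquency before the July cutoff
    (assumed here to be July 1) is certified for Treasury offset that
    December; otherwise it waits for the following December."""
    day_425 = delinquency_start + timedelta(days=425)
    if (day_425.month, day_425.day) < cutoff:
        return day_425.year       # referred in December of this year
    return day_425.year + 1       # referred in December of next year
```

For example, a loan delinquent since March 1, 2013, passes 425 days on April 30, 2014, and would be referred that December; one delinquent since June 1, 2013, passes the mark on July 31, 2014, and would wait until December 2015, consistent with the 3- to 15-month range in the text.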
Other federal agencies may charge additional fees for each transaction depending on the type of payment being offset. For fiscal year 2014, Treasury’s fee was $15 per offset and other agency fees were up to $27. Federal tax refunds are the source for more than 90 percent of offset collection for federal student loan debt. Offsets from Social Security benefits represented roughly $150 million in 2013 or less than 7 percent of the more than $2.2 billion in federal payments offset by Treasury. The number of borrowers, especially older borrowers, who have experienced offsets to Social Security retirement, survivor, or disability benefits to repay defaulted federal student loans has increased over time. In 2002, the first full year during which Social Security benefits were offset by Treasury, about 31,000 borrowers were affected. Of those borrowers, about 19 percent (6,000) were 65 or older. From 2002 through 2013, the number of borrowers whose Social Security benefits were offset has increased roughly 400 percent, and the number of borrowers 65 and over increased roughly 500 percent (see fig. 7). In 2013, Social Security benefits for about 155,000 people were offset and about 36,000 of those were 65 and over. The majority of Social Security benefit offsets for federal student loan debt are from disability benefits rather than retirement or survivor benefits. In 2013, 70.6 percent of defaulted borrowers (105,000) whose Social Security benefits were offset received disability benefits (see fig. 8). That year, about $97 million was collected through offset from disability benefits. For borrowers 65 and over, the majority of Social Security offsets are from retirement and survivor benefits because Social Security disability benefits automatically convert to retirement benefits at the beneficiary’s full retirement age, currently 66. 
About 33,000 borrowers age 65 and over had Social Security retirement or survivor benefits offset in 2013 to repay defaulted federal student loans. The amount of money collected from Social Security benefit offsets to repay defaulted federal student loans has also increased, but the average amount offset on a monthly basis per borrower has remained relatively stable. Treasury collected about $24 million in offsets from Social Security benefits in 2002, about $108 million in 2012, and about $150 million in 2013. However, over this period, the average amount offset on a monthly basis per borrower rose only slightly, from around $120 in the early 2000s to a little over $130 in 2013. Although there are statutory limits under the Debt Collection Improvement Act of 1996 (DCIA) on the amount that Treasury can offset from monthly federal benefits, the current limits may result in monthly benefits below the poverty threshold for certain defaulted borrowers. Social Security benefits are designed to replace, in part, the income lost due to retirement, disability, or death of the worker. The DCIA set a level of $750 per month below which monthly benefits cannot be offset. In 1998, the amount of allowable offset was effectively modified under regulations, to the lesser of 15 percent of the total benefit or the amount by which the benefit exceeds $750 per month, thus creating a standard more favorable to defaulted borrowers. For example, a borrower with a Social Security benefit of $1,000 per month would have an offset of $150, because that is the lesser of 15 percent of the benefit—$150—and the amount of the benefit over $750, which is $250. This offset would leave the borrower with a monthly benefit of $850, which is below the poverty threshold for 2013. The statutory limit of $750 for an offset was above the poverty threshold when it was set, in 1998. 
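The offset rule described above—the lesser of 15 percent of the monthly benefit or the amount by which the benefit exceeds $750—can be sketched as a simple calculation. This is an illustrative sketch only, not Education's or Treasury's actual implementation; the function name is our own.

```python
def monthly_offset(benefit, floor=750.0, rate=0.15):
    """Illustrative offset under the 1998 standard: the lesser of
    `rate` times the monthly benefit or the amount by which the
    benefit exceeds `floor`; benefits at or below the floor are
    not offset at all."""
    return max(0.0, min(rate * benefit, benefit - floor))

# The example from the statement: a $1,000 monthly benefit.
offset = monthly_offset(1000.0)   # lesser of $150 (15%) and $250 (over $750) -> $150
remaining = 1000.0 - offset       # $850, which is below the 2013 poverty threshold
```

Note that for benefits between $750 and about $882, the "amount over $750" prong is the smaller of the two, so the offset leaves the borrower with exactly $750 per month; above that point, the 15 percent prong controls.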
The offset limits have not changed since 1998, and the $750 limit represented about 81 percent of the poverty threshold for a single adult 65 and over in 2013. If the $750 limit had been indexed to the changes in the poverty threshold since 1998, in 2013 it would have increased by 43 percent or to about $1,073 (see fig. 9). Borrowers with benefits below this amount would not have been offset. Indexing monthly benefit offset limits to the poverty threshold can prevent some older borrowers from having offsets, but would also reduce Education’s recoveries from Social Security offsets. If the offset limit had been indexed to match the rate of increase in the poverty threshold, in 2013, 68 percent of all borrowers whose Social Security benefits were offset for federal student loan debt would have kept their entire benefit, including 61 percent of borrowers 65 and older. An additional 15 percent of all borrowers and borrowers age 65 and older would have kept more of their benefits in that year. However, indexing the offset limit would have reduced the amount collected from Social Security benefits by approximately 60 percent or $94 million in 2013, representing about 4.2 percent of all dollars offset from all sources by Treasury for student loan debt in that year. In conclusion, student loan debt and default are problems for a small percentage of older Americans. As the amount of student loan debt held by Americans age 65 and older increases, the prospect of default implies greater financial risk for those at or near retirement—especially for those dependent on Social Security. Most of the federal student loan debt held by older Americans was obtained for their own education, suggesting that it may have been held for an extended period, accumulating interest over time. The Social Security retirement or survivor benefits of about 33,000 Americans age 65 and older were reduced through offset to meet defaulted federal student loan obligations in 2013. 
Because the statutory limit at which monthly benefits can be offset has not been updated since it was enacted in 1998, certain defaulted borrowers with offsets are left with Social Security benefits below the poverty threshold. As the baby boomers continue to move into retirement, the number of older Americans with defaulted loans will only continue to increase. This creates the potential for an unpleasant surprise for some, as their benefits are offset and they face the possibility of a less secure retirement. Chairman Nelson, Ranking Member Collins, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you or your staff have any questions about this testimony, please contact me at (202) 512-7215 or jeszeckc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are listed in appendix II. To understand the extent to which older Americans have outstanding student loans and how this debt compares to other types of debt, we relied primarily on data from the Federal Reserve Board’s Survey of Consumer Finances (SCF), a survey that is conducted once every 3 years and gathers detailed information on the finances of U.S. families. SCF data is publicly available and was extracted from the Federal Reserve Board’s website. Specifically, we analyzed data from the 2004, 2007, and 2010 SCF to provide a range of information, including an overview of the percentage of families, by age of head of household, with student debt over time. An important limitation of the data is that debt, including student loans, is reported at the household level. As a result, the SCF survey responses represent the debt of the entire household, not just the head of household. 
Therefore, it is possible that for some households headed by older Americans, the reported student debt is actually held by children or other dependents who are still members of the household, rather than the older head of household. The National Student Loan Data System (NSLDS) is a comprehensive national database maintained by the Department of Education that is used to readily access student aid data and track money appropriated as aid for postsecondary students. The database includes data on the various federal student loan programs. The NSLDS data we obtained allows us to count federal student loans and loan balances, but not the number of borrowers. Although Education maintains borrower-level data, we were only able to obtain aggregated data by loan type during the course of our analyses. These summary tables reported that about 1,000 of the more than 6 million Parent PLUS loans outstanding in fiscal year 2013 were to borrowers under the age of 25. According to Education, these cases resulted from a reporting issue where the date of birth of the Parent PLUS borrower was reported as being the same as that of the student. We excluded these Parent PLUS loans from our analysis. To understand the extent to which older Americans defaulted on federal student loans and the possible consequences of such a default, we relied on a number of data sources and agency documents related to federal student loans. To determine the extent to which older Americans have defaulted on federal student loans, we used data from the NSLDS summary tables we received from Education. To evaluate the consequences of default, we reviewed federal law, regulations, and agency documents describing the collection process for defaulted federal student loans, including offset of federal benefit payments through the Treasury Offset Program (TOP). 
We interviewed officials at Education involved in managing defaulted federal student loans, and we interviewed officials at Treasury, Education, and the Social Security Administration about the process for offsetting Social Security retirement, survivor, and disability benefits through the TOP. In addition, we interviewed Education officials and reviewed relevant documentation regarding Education’s debt collection policies and procedures; however, we did not audit their compliance with statutory requirements related to these activities. To describe the extent of Treasury offset of Social Security Administration benefits for federal student loan debt, we used data on offset payments from the TOP for fiscal years 2001 through 2014. We assessed the reliability of this data by reviewing data documentation, conducting electronic testing on the data, and interviewing Treasury staff about the reliability of this data. Because the TOP data does not include the age of borrowers or the type of Social Security benefits that were offset, we obtained such information for relevant borrowers from the Social Security Administration’s Master Beneficiary Record using a match on Social Security numbers. We assessed the reliability of the data by reviewing data documentation, obtaining the computer code used to match borrowers to the Master Beneficiary Record, and interviewing the staff at the Social Security Administration who conducted the match. We determined that the data elements we used were sufficiently reliable for the purposes of this testimony. For about 0.25 percent of borrowers, we were unable to determine the borrower’s age, and we excluded these borrowers from age-based analyses. For about 4.3 percent of offset payments, we were unable to determine the type of benefit, and we excluded these payments from the analysis of the type of benefit that was offset. 
To evaluate the extent to which Social Security benefits would have been offset if the $750 limit below which benefits are not offset had been adjusted for changes in the poverty threshold, we analyzed TOP data to impute the amount of a monthly Social Security benefit payment from the size of the offset that was taken from that payment. We then applied a modified set of rules for calculating an offset amount to the imputed benefit, changing the $750 limit to $1,072.50—the adjusted amount for the limit had it been indexed to the poverty threshold—to estimate, for 2013, whether the monthly benefit payment would have been offset had the offset limit increased at the rate of the poverty threshold. In addition to the contact named above, Michael Collins (Assistant Director), Michael Hartnett, Margaret Weber, Christopher Zbrozek, and Lacy Vong made key contributions to this testimony. In addition, key support was provided by Ben Bolitzer, Ying Long, John Mingus, Mimi Nguyen, Kathleen van Gelder, Walter Vance, and Craig Winslow. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Recent studies have indicated that many Americans may be approaching their retirement years with increasing levels of various kinds of debt. Such debt can reduce net worth and income, thereby diminishing overall retirement financial security. Student loan debt held by older Americans can be especially daunting because, unlike other types of debt, it generally cannot be discharged in bankruptcy. GAO was asked to examine the extent of student loan debt held by older Americans and the implications of default. 
This testimony provides information on: (1) the extent to which older Americans have outstanding student loans and how this debt compares to other types of debt, and (2) the extent to which older Americans have defaulted on federal student loans and the possible consequences of default. To address these issues, GAO obtained and analyzed relevant data from the Federal Reserve Board's Survey of Consumer Finances as well as data from the Department of the Treasury, the Social Security Administration, and the Department of Education. GAO also reviewed key agency documents and interviewed knowledgeable staff. Comparatively few households headed by older Americans carry student debt compared to other types of debt, such as for mortgages and credit cards. GAO's analysis of the data from the Survey of Consumer Finances reveals that about 3 percent of households headed by those aged 65 or older—about 706,000 households—carry student loan debt. This compares to about 24 percent of households headed by those aged 64 or younger—22 million households. Compared to student loan debt, those 65 and older are much more likely to carry other types of debt. For example, about 29 percent carry home mortgage debt and 27 percent carry credit card debt. Still, student debt among older American households has grown in recent years. The percentage of households headed by those aged 65 to 74 having student debt grew from about 1 percent in 2004 to about 4 percent in 2010. While those 65 and older account for a small fraction of the total amount of outstanding federal student debt, the outstanding federal student debt for this age group grew from about $2.8 billion in 2005 to about $18.2 billion in 2013. Available data indicate that borrowers 65 and older hold defaulted federal student loans at a much higher rate, which can leave some retirees with income below the poverty threshold. 
Although federal student loans can remain unpaid for more than a year before the Department of Education takes aggressive action to recover the funds, once initiated, the actions can have serious consequences. For example, a portion of the borrower's Social Security disability, retirement, or survivor benefits can be claimed to pay off the loan. From 2002 through 2013, the number of individuals whose Social Security benefits were offset to pay student loan debt increased about five-fold, from about 31,000 to 155,000. Among those 65 and older, the number of individuals whose benefits were offset grew from about 6,000 to about 36,000 over the same period, roughly a 500 percent increase. In 1998, additional limits on the amount by which monthly benefits can be offset were implemented, but since that time the value of the amount protected and retained by the borrower has fallen below the poverty threshold. GAO is not making recommendations. GAO received technical comments on a draft of this testimony from the Department of Education, the Department of the Treasury, and the Federal Reserve System. GAO incorporated these comments into the testimony as appropriate.
Home care workers support consumers of home care services, typically individuals with disabilities and older adults, with their personal care needs. Some of the activities that home care workers perform include helping with activities of daily living (ADLs) such as dressing, grooming, eating, or bathing, as well as instrumental activities of daily living (IADLs) that enable a person to live independently such as meal preparation, driving, light housework, managing finances, and assisting with medications. Home care services may be provided by one or more worker(s); however, given the personal nature of the work, experts have noted the benefits of maintaining continuity among home care workers, and consumers often prefer to receive care from a limited number of workers. Home care needs depend on many factors including each consumer’s functional limitations and the availability of informal supports, such as those provided by family members, so the amount of time a home care worker provides care can vary. For example, home care workers may provide a few hours of home care per week or up to 24 hours per day depending on an individual’s needs. Home care workers may be employed directly by the consumer or by a third party home health care agency that matches workers with consumers. Examples of traditional types of home health care companies include: for-profit home care agencies, voluntary non-profits, and private not-for-profit home care agencies. A variety of public and private sources pay for home care services. The majority of home care is paid for by public sources, such as Medicaid. Medicaid is a federal-state program that provides health care services to certain low-income populations. Individuals who qualify for Medicaid and receive coverage for home care services include individuals aged 65 or older and individuals who are disabled or blind. 
Although Medicaid is jointly financed by the states and the federal government, it is directly administered by the states, with oversight from the Centers for Medicare & Medicaid Services (CMS), within the Department of Health and Human Services (HHS). State Medicaid spending for most services is matched by the federal government at a rate that is based in part on each state’s per capita income according to a formula established by law. State Medicaid programs cover home care services through a wide and complex range of options within Medicaid, which include providing this coverage as an alternative to institutional care. The Medicaid program requires states to cover certain home health services, and states may also elect to cover additional home and community-based services under their Medicaid programs, or through special waivers that allow them added flexibility in covering these services. CMS has been working in partnership with states, consumers, providers, and other stakeholders to create a sustainable, person-driven long-term support system. According to CMS, the system aims to allow people with disabilities and chronic conditions to exercise choice and control, and to access quality services that assure optimal outcomes, such as independence and quality of life. States have an obligation to provide Medicaid services to eligible individuals with disabilities in the most integrated setting appropriate to their needs, consistent with title II, part A of the Americans with Disabilities Act, which is overseen by the Department of Justice. The FLSA is the primary federal statute that establishes standards for minimum wage, overtime pay, and child labor. The FLSA requires that workers who are covered by the act and not specifically exempt from its provisions be paid at least the federal minimum wage (currently $7.25 per hour) and 1.5 times their regular rate of pay for hours worked over 40 in a workweek. 
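The FLSA minimum wage and overtime standard just described can be sketched as a weekly pay calculation. This is a simplified illustration under stated assumptions (a single hourly rate with no additional compensation); actual FLSA "regular rate" computations can be more involved, and the function name is our own.

```python
FEDERAL_MINIMUM_WAGE = 7.25  # per hour, as cited in this statement

def weekly_pay(hours, hourly_rate):
    """Illustrative FLSA pay: straight time for the first 40 hours in a
    workweek, then 1.5 times the regular rate for hours over 40."""
    rate = max(hourly_rate, FEDERAL_MINIMUM_WAGE)  # covered workers earn at least the minimum wage
    overtime_hours = max(0.0, hours - 40.0)
    straight_hours = hours - overtime_hours
    return straight_hours * rate + overtime_hours * rate * 1.5

# A worker at $10 per hour working 45 hours:
pay = weekly_pay(45, 10.00)  # 40 * $10 + 5 * $15 = $475
```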
The 1974 amendments to the FLSA extended coverage to workers employed in “domestic service” but established an exemption from the minimum wage and overtime provisions for individuals providing “companionship services” to older adults or people with disabilities. The amendments also created a more limited exemption from the overtime pay requirements for domestic service employees who reside in the household where they work (live-in domestic service workers). In 1975, DOL, the federal agency responsible for overseeing and enforcing the FLSA, issued the existing regulations that implemented these provisions and, among other things, defined “companionship services.” These regulations define companionship services as those which provide “fellowship, care, and protection” to an elderly person or individual with a disability, and include household activities related to the care of that person, such as preparing a meal, making the bed, and washing clothes. Additionally, these regulations permit third party employers, such as home care services agencies, to claim the companionship services exemption for workers (which we refer to in this report as the companionship exemption). 
As a result of these changes, more home care workers will be entitled to protections under the FLSA, which may include the right to time and one-half of their regular hourly wage when they work more than 40 hours in a week; compensation for time spent traveling between clients’ homes; and compensation when they wake to care for clients on overnight shifts. According to DOL, the workers who will be directly affected by the change to the companionship services exemption are predominantly women in their mid-40s or older and minorities who have a high school diploma or less education. The existing regulations define companionship services as “fellowship, care, and protection,” and the revised definition of companionship services includes “fellowship” and “protection” but limits the amount of time that a worker can spend on the provision of “care” (see fig. 1). Under the Home Care Rule, examples of “fellowship” and “protection” include activities such as engaging in conversation, reading, accompanying the person on walks or to appointments, and being present to monitor the person’s safety and well-being. “Care” is defined to include assisting with activities of daily living and instrumental activities of daily living such as dressing, feeding, meal preparation, and light housework—precisely the types of activities that many home care workers engage in today. Under the existing regulations, which apply until the Home Care Rule goes into effect, workers may spend an unlimited amount of time providing these types of services and still be exempt from the FLSA minimum wage and overtime provisions. However, in order to qualify for the companionship services exemption under the revised regulation, the amount of time a worker spends on these types of activities may not exceed 20 percent of the total hours worked per person and per workweek. In the revised Home Care Rule, DOL also limited who may claim the companionship services exemption. 
Under the existing regulations, third party employers may claim the companionship services exemption and are not required to pay home care workers who qualify for minimum wage and overtime. However, under the revised regulation, third party employers, such as private home care agencies, will no longer be able to claim the companionship services exemption from minimum wage and overtime. Individual consumers and their families, however, may still claim the exemption if the home care worker primarily provides fellowship and protection and spends 20 percent or less of his or her weekly work hours per care recipient on activities of daily living (ADLs) and instrumental activities of daily living (IADLs) and if the worker meets certain other requirements (see fig. 2). When DOL was developing the Home Care Rule, it considered the growth in the home care industry and the resulting changes in the home care workforce. In March 2012, Congress held a hearing to examine the proposed rule and the possible effects of the narrowed definition of companionship services. According to testimony from a DOL official, the home care industry has “undergone a dramatic transformation”—due in part to increased demand for home care—since DOL issued its regulations on the companionship and live-in exemptions in 1975. For example, the official stated that the number of certified home health care agencies had increased from 2,242 in 1975 to more than 10,000 at the end of 2009. The DOL official stated that the demand for home care has increased as a result of the growth in the aging population, the rising costs of institutional care, and the availability of funding assistance from federally supported programs, such as Medicaid. Similarly, the number of home care workers has increased. The Bureau of Labor Statistics (BLS) has reported that the total number of home care workers has more than doubled in each of the last two decades—with nearly 2.1 million workers in 2012—and is expected to be among the fastest growing occupations in the coming years. 
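The revised exemption test described earlier in this section turns on who the employer is and on the share of weekly hours spent on "care." A simplified sketch follows; it omits the rule's other requirements and uses illustrative names of our own, so it is not a statement of the regulation itself.

```python
def companionship_exemption_applies(care_hours, total_hours, third_party_employer):
    """Sketch of the revised companionship services test: third party
    employers (e.g., private home care agencies) may not claim the
    exemption, and time spent on 'care' (ADLs/IADLs) must not exceed
    20 percent of hours worked per consumer per workweek. The rule's
    other requirements are omitted from this illustration."""
    if third_party_employer:
        return False
    if total_hours <= 0:
        return False
    return care_hours / total_hours <= 0.20

# A worker directly employed by the consumer, spending 5 of 30 weekly
# hours on care for that consumer (about 17 percent):
companionship_exemption_applies(5, 30, third_party_employer=False)  # True
```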
DOL officials also told us that as the industry has grown, home care workers’ duties have become more specialized. Home care workers are assisting consumers with many more services than they had been when the exemptions were enacted. According to DOL officials, home care workers are increasingly providing specialized care, such as assistance with activities of daily living and limited medical-related care—services that were previously provided in nursing homes or other professional settings by trained nurses. With the expected continued growth in the demand for home care, DOL officials also told us that in developing the proposed rule, they considered how to improve worker wages and address high worker turnover. DOL officials have noted that the growth in the industry and worker responsibilities has not translated to growth in home care workers’ wages, which are among the lowest in the country. Additionally, home care worker turnover has been a concern, and one national group describes the home care workforce as having “chronically high rates of workforce instability.” DOL officials said the Home Care Rule is an important step in ensuring that the home care industry attracts and retains qualified workers, which the industry will need in the future. The basic process by which federal agencies typically develop and issue regulations is set forth in the Administrative Procedure Act, as well as certain other statutes and Executive Orders. Federal rulemaking procedures generally include issuing a Notice of Proposed Rulemaking (NPRM) and providing an opportunity for public comment before issuing a final rule, and may also include conducting a cost-benefit analysis, among other things. During the rulemaking process for the Home Care Rule, DOL officials said that in addition to considering the thousands of public comments received in response to the proposed rule, the agency also sought input from stakeholders in a variety of forums. 
Figure 3 highlights some of the outreach that DOL conducted during the rulemaking process. During the comment period, DOL received more than 26,000 public comments from various stakeholders. Comments were submitted by a range of stakeholders, including individuals and organizations representing home care workers, consumers of home care services, public and private home care agencies, the disability community, and state and federal government agencies. According to DOL, the comments reflected a wide variety of views, with most of the comments supporting FLSA protections for home care workers. In the final rule, DOL made some changes from the proposed rule in response to stakeholder comments. Among the changes, DOL:

- Set the effective date for 15 months after the publication of the final rule. Because of concerns over the amount of time needed for employers and state Medicaid programs to make the necessary adjustments to their programs, the final rule set an extended effective date of January 1, 2015.

- Modified the proposed recordkeeping requirements for live-in home care workers to allow employers to require their workers to record and submit their hours worked. Under the proposed rule, the employer could not require the live-in home care worker to record his or her hours worked; rather, the employer would be responsible for making, keeping, and preserving these records. However, DOL received comments that such a prohibition would be difficult for individual consumers who also serve as employers, particularly for those who have Alzheimer’s disease, dementia, or developmental disabilities.

- Clarified the description of services that qualify as “care” and which are subject to the 20 percent limit. 
As previously noted, one central change made by the Home Care Rule is that the companionship services exemption will apply only when the amount of time a worker spends on certain activities—including activities of daily living and instrumental activities of daily living—does not exceed 20 percent of the total hours worked per workweek per consumer. The description of services that are subject to this limit was changed in the final rule from the description in the proposed rule, which referred to these services as “intimate personal care services that are incidental to the provision of fellowship and protection.” While DOL retained the fundamental purpose of the description, to make it easier for the regulated community to understand, the final rule defines the provision of care as assistance with activities of daily living (ADLs) and instrumental activities of daily living (IADLs). DOL also updated its estimate of the overall economic impact from a net cost of $4.7 million per year in the proposed rule to a net benefit in the final rule of between $3.9 million and $27.3 million per year on average over 10 years. DOL officials told us that developing estimates of the number of home care workers was one of the greatest challenges they faced when estimating economic impacts. However, they said that they believe that their assumptions related to their earlier estimate resulted in an overestimation of the number of workers. In general, limited data are available on the number of home care workers or the amount of services provided. As a result, information is limited on the characteristics of the home care workforce, the terms under which they are employed, the wage rates they currently earn, or the hours they currently work. 
The updated estimate of economic impact reflects changes to DOL’s assumptions including, among others: (1) how employers might respond to overtime requirements; (2) the number of current home care workers without overtime coverage; (3) the costs associated with hiring new workers; and (4) an estimate of the benefits of reduced worker turnover resulting from workers receiving increased wages through travel reimbursement and overtime compensation. In the final rule, DOL also responded to some commenters’ questions about how existing FLSA principles—those not changed by the Home Care Rule—will likely apply to home care workers. Many of these questions were raised because when the Home Care Rule becomes effective, the FLSA will apply to many home care arrangements that were not originally structured with FLSA requirements in mind. For example, many stakeholders raised questions about how to calculate work hours for live-in home care workers who spend some of their time sleeping, traveling, eating, or engaging in personal pursuits. DOL also responded to comments in the final rule related to applying FLSA principles to various home care structures, including those with shared living arrangements and those where more than one entity could be considered a worker’s employer under the FLSA (known as joint employment). DOL provided additional information and example-based scenarios about how the existing FLSA principles for shared living and joint employment might apply, noting that actual determinations depend on the unique facts specific to each individual situation. The Home Care Rule is expected to extend FLSA overtime requirements to more home care workers and, as a result of this change, representatives of almost all of the 14 national organizations we interviewed agreed that employers will likely manage workers’ hours more closely. 
Since third party employers such as private home care agencies will no longer be able to claim the companionship services or live-in exemption under the Home Care Rule, more employers will be required to pay workers an overtime premium when hours worked exceed 40 hours per week. While almost all (13 out of 14) of the national organizations we interviewed agree that employers will actively manage overtime hours, they do not always agree on whether employers will experience a noticeable difference compared with their current operating procedures. Representatives in 5 of the 14 national organizations we interviewed explained how they think employers might react to applying the FLSA overtime requirements to home care workers and how those decisions could adversely affect business costs and services, including for small businesses. One possibility these representatives mentioned is that employers may avoid increased costs associated with overtime pay by limiting workers’ hours. However, a couple of stakeholders noted that these employers would have some increased costs associated with hiring and training new workers to continue providing the same level of services to consumers. Representatives of one organization told us that small businesses, in particular, would not have the capacity to pay overtime wages and anticipate increased costs to adjust schedules for their current workers and to hire new workers. Stakeholders also discussed how the Home Care Rule could potentially decrease business for some employers. For example, representatives from one private home care agency we interviewed said they plan to pass along any additional costs incurred as a result of the Home Care Rule to consumers who pay for their care privately, which could potentially cause clients to leave their agency and seek care elsewhere. 
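The overtime premium at issue is the FLSA's standard time-and-a-half for hours over 40 in a workweek. A minimal sketch, assuming a single employer and a flat hourly wage (in practice the FLSA "regular rate" can include more than base pay):

```python
def weekly_pay(hours, hourly_rate):
    """FLSA-style weekly pay: straight time up to 40 hours, then
    time-and-a-half for the excess. Simplified sketch; assumes one
    employer and a flat hourly wage as the regular rate."""
    overtime_hours = max(0.0, hours - 40)
    straight_hours = hours - overtime_hours
    return straight_hours * hourly_rate + overtime_hours * hourly_rate * 1.5

# A 45-hour week at $10/hour: 40 x $10 + 5 x $15 = $475
print(weekly_pay(45, 10.0))  # 475.0
```

The 5 overtime hours in this example cost the employer $75 rather than $50, which is the marginal cost that stakeholders expect employers to manage by capping hours or hiring additional workers.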
However, representatives from four national organizations who predict that the Home Care Rule will help reduce worker turnover also believe that the transition costs associated with extending FLSA coverage to home care workers should be moderate and manageable or that the Home Care Rule could potentially lead to cost savings for employers in the long run due to reduced worker turnover. In order to estimate the economic impact of the Home Care Rule, DOL used several assumptions about how employers might choose to comply with the rule. For example, DOL estimated how much overtime employers might choose to pay workers and what the costs associated with hiring new workers would be. According to DOL, the effects of the rule will depend on what actions employers take, and the costs and benefits of the rule will vary depending upon whether employers choose to continue current practices, rearrange worker schedules, or hire new workers. Because overtime compensation, hiring costs, and reduction in turnover depend on how employers choose to comply with the rule, DOL estimated a range of impacts. Several national organizations representing the interests of workers identified potential effects on workers, including increased pay and opportunities, especially for part-time workers. For example, representatives from four of these national organizations said the Home Care Rule may help create more full-time employment opportunities for part-time workers, and that if employers respond to the rule by actively managing workers’ hours in order to avoid paying overtime, then part-time workers might benefit from a redistribution of work hours (see fig. 4). These stakeholders believe that logging many hours of overtime may not be in a worker’s best interest and can lead to low morale or increase the risk of workplace injury. In addition, some workers’ wages may increase because they may be entitled to overtime pay and compensation for travel time when the Home Care Rule goes into effect. 
For example, DOL’s economic analysis estimates that 12 percent of home care workers currently work more than 40 hours per week, and whether some of these workers’ wages will go up will depend on how much overtime employers choose to pay workers. On the other hand, representatives in 4 of the 14 national organizations we interviewed emphasized how workers who are employed for more than 40 hours in a workweek may be at a disadvantage once the Home Care Rule goes into effect (see fig. 4). The representatives said these workers could see their hours and wages reduced by employers seeking to avoid paying overtime wages. At least two national organizations predict that in certain situations, workers will have to work for multiple agencies in order to maintain their current level of income. Extending minimum wage and overtime protections to home care workers may help expand the workforce and reduce turnover at a time when the demand for home care services is expected to increase, according to representatives in 8 of the 14 national organizations we interviewed, including worker and consumer advocacy organizations. These stakeholders often pointed to low wages and poor benefits as an impediment to recruiting and retaining qualified home care workers and the cause of high turnover. On the other hand, five national organizations, including disability advocacy groups and organizations representing employers, predict that if some workers’ hours are reduced, the Home Care Rule could lead to higher turnover among workers and cause them to seek out other jobs, which could result in a labor shortage. Extending FLSA coverage to home care workers could also have effects on consumers. For example, representatives in six national organizations we interviewed said the Home Care Rule could result in better quality of care for consumers by reducing turnover among workers. 
However, representatives in eight national organizations we interviewed expressed concern over how the implementation of the overtime requirements might potentially affect the continuity of care for some consumers. Employers and consumers seeking to minimize overtime payments might need to hire additional workers to provide care. Representatives in one national organization said that, because of the nature of the work, it is not unusual for consumers and home care workers to forge close relationships, and according to a personal attendant coalition we interviewed, introducing additional workers may disrupt the routines and continuity of care for consumers who need substantial care. One home care worker we interviewed said that continuity of care is important for developing trust and that the particular consumer she works with might be wary of introducing new workers into her home. Another home care worker said it takes a considerable amount of time to learn the different techniques used to support consumers with disabilities and that the Home Care Rule has the potential to disrupt established relationships between workers and consumers. Several national organizations discussed how the Home Care Rule could potentially affect the affordability of in-home care. Representatives in 8 of the 14 national organizations we interviewed noted specific situations where certain consumers receiving home care could be at risk of being transferred to an institution or having to move to an alternate setting because of possible service cost increases. For example, in anticipation of the Home Care Rule going into effect and increased costs, representatives from one home care agency we interviewed said some consumers have already decided to hire a neighbor or friend to meet their home care needs or have left their homes to move into a congregate setting. 
Effects on Medicaid-funded home care services will likely vary by state since states have some flexibility in designing their Medicaid programs and could choose to make adjustments in response to the rule. Many of the potential effects described by officials in the six states we visited are similar to some of those we heard from the national organizations—including concerns about cost and potential disruption of the continuity of care. Most state officials we interviewed were concerned about the potential effects of the Home Care Rule on existing programs that were structured and made affordable by means of the companionship or live-in exemptions. How the FLSA will apply in each case will depend in part on how services are delivered and the specific facts and circumstances of each situation. Among the potential effects we heard in the six states we visited were: Consumer-directed programs—Consumers have the option to direct their own care in certain Medicaid-funded programs. Consumer-directed programs under Medicaid are designed to optimize consumers’ autonomy and independence. In these programs, consumers may exercise their own decision-making authority in areas such as recruiting, hiring, training, and supervising workers. Officials in at least two states we visited expressed concern over how the Home Care Rule might affect the flexibility of these programs or restrict consumer choice in situations where the state may be considered an employer along with the consumer. For example, a consumer may potentially have less decision-making authority over how many workers are needed to provide care if a state exercises some control over workers and decides it is necessary to limit workers’ hours to reduce overtime costs. 
According to the National Association of Medicaid Directors, maintaining current levels of consumer choice under Medicaid either becomes more expensive in certain situations as a result of the Home Care Rule, or may be sacrificed in some situations in favor of cost control. Family caregivers—Some states with consumer-directed Medicaid programs allow consumers to hire family members to provide their care. According to the National Association of Medicaid Directors, paid family caregivers have become a common solution to the shortage of traditional home care workers. Some states may consider limiting the number of hours workers, including family members, can work in order to minimize overtime costs once the Home Care Rule goes into effect, which could potentially reduce the available workforce. In addition, some family caregivers may rely on the income they receive by providing care. In two states we visited, state officials specifically expressed concern over situations where family members’ work hours may be reduced in response to the Home Care Rule in order to minimize overtime payments, which could reduce the family’s income. In another state, officials were less concerned about the Home Care Rule’s potential effect on family caregivers because one of their Medicaid home care programs includes certain restrictions on hiring a family member and limits the number of hours a family worker can provide care. Though DOL guidance describes the parameters of an employment relationship versus a familial relationship for the purposes of the Home Care Rule, officials from one national organization told us that it may be a challenge for some consumers and their family caregivers to determine when FLSA principles apply. According to DOL guidance, family workers may be entitled to FLSA minimum wage and overtime protections depending on the specific circumstances of the work arrangement. What are family caregivers? 
Paid family caregivers are typically not career home care workers; rather, they are usually close family members and friends willing to help the consumer. DOL guidance states that in situations where family members are paid care providers, there is both a familial and an employment relationship, and only hours worked within the scope of the employment relationship are covered by the FLSA. In these circumstances, the employment relationship is usually limited by a “plan of care” or other written agreement approved by the program funding the services. For example, a familial relationship rather than an employment relationship would exist for a father who has an adult son with a physical disability and helps his son with eating dinner and bathing in the evenings. If the son enrolls in a Medicaid-funded program and the father becomes his paid care provider under a program-approved plan of care that funds 8 hours of services per day, then the father would also be in an employment relationship with his son for purposes of the FLSA for those 8 hours. Respite care—Some Medicaid-funded programs provide respite care—a break from caregiving responsibilities—to family caregivers. In one state we visited, officials expressed concern that FLSA principles would apply to respite workers. For example, officials in this state said a worker may easily incur overtime in overnight respite situations, which could potentially strain a consumer’s budget for care. Introducing additional workers to provide respite in order to avoid overtime costs may not always be practical. For example, one state official said it may be very disruptive for some consumers, such as those with intellectual and developmental disabilities who do not handle changes well, to have different caregivers in a respite situation over the course of a weekend. Live-in arrangements—Under some Medicaid-funded programs, home care workers and consumers may live together. 
Officials in two of the states we visited told us they were concerned about how FLSA requirements may affect live-in care arrangements for those consumers who require substantial care and how it could be costly to comply with the FLSA in certain live-in situations. Many of these live- in arrangements were designed without consideration of FLSA overtime requirements and were made affordable by reliance on the companionship and live-in exemptions. State officials in one state we visited said they may redesign one of their live-in programs in response to the Home Care Rule to help keep overtime costs down. However, they said it would require time to reevaluate and possibly rewrite parts of existing program guidance. State budgets—Officials in all of the six states we visited said the Home Care Rule has the potential to strain limited state budgets. For example, officials in three states said that Medicaid rates may not be sufficient to cover additional costs incurred as a result of the rule, including compensable travel time between Medicaid beneficiaries. According to state officials in three of the states we visited, their state legislatures had not yet budgeted additional funding that may be needed to comply with the Home Care Rule once it goes into effect. States may have to make systems changes to track overtime among workers, for example, which could be costly and would require time to implement. States are also facing their own unique budgetary challenges. For example, officials from at least two states we visited mentioned wages increasing, which could also pose a financial strain and mean that implementing the Home Care Rule’s overtime requirements will cost more. In one state we visited, the minimum wage is set to increase from $8.00 to $9.00 by December 31, 2015. 
Capacity—In addition to implementing the Home Care Rule, state officials we interviewed expressed concern over resources and having the capacity to implement and comply with other recent federal requirements. For example, officials in three states said that the Patient Protection and Affordable Care Act’s penalties for employers failing to offer full-time employees affordable health insurance are scheduled to go into effect at roughly the same time as the Home Care Rule. CMS also recently issued regulations on home and community-based services and some state officials said that they have been focusing their attention on complying with these regulations. After the Home Care Rule was published, DOL developed guidance, conducted outreach, and provided technical assistance to help state Medicaid programs and other stakeholders plan for implementation. For example, to help workers and consumers determine if federal minimum wage and overtime pay are required, DOL has made available on its website several self-assessment tools which ask a series of questions about the nature of the work, the employer, and the services performed. In addition, DOL has answered various Frequently Asked Questions (FAQs) and developed fact sheets to explain how the Home Care Rule will apply. For example, DOL answers questions on its website about how to apply FLSA to the hours a worker spends sleeping, as is the case when a home care worker lives with a consumer. DOL has also published several fact sheets on specific topics on its website to provide stakeholders with compliance assistance, such as how the FLSA will apply to paid family caregivers once the Home Care Rule goes into effect. DOL officials said they have trained staff in the national and regional offices in order to expand their capacity for compliance assistance, such as providing technical assistance. 
In its 2014-2018 Strategic Plan, DOL states that one of its goals is to improve compliance with federal wage and overtime laws by providing compliance assistance and conducting outreach with stakeholders. After the final rule was issued, DOL collaborated with federal agencies and conducted outreach with stakeholders to help them better understand the implications of the rule on state Medicaid programs and develop applicable guidance (see fig. 5). Since many Medicaid programs providing home care services were created with the expectation that the workers would be exempt from the FLSA, DOL initiated an interagency workgroup comprised of individuals from DOL, CMS, and DOJ to understand the variation among different state Medicaid programs and the laws that protect people with disabilities. DOL identified two areas in which Medicaid programs could use additional guidance and developed written guidance, sponsored webinars, and conducted outreach with all 50 states to assist with compliance. In addition, CMS also issued guidance to assist states in understanding options for consumer-directed programs in implementing the changes made by the Home Care Rule. The two major guidance documents DOL issued pertain to: Shared living arrangements—In response to the proposed rule, DOL received many comments from stakeholders requesting that it clarify how FLSA wage and hour requirements apply to certain shared living arrangements funded under Medicaid. DOL consulted with stakeholders to learn about the different types of shared living models used by states and the differences among them. 
In March 2014, DOL issued an Administrator’s Interpretation, which provided guidance on how FLSA principles apply to shared living arrangements. A representative from one national organization we interviewed said that the organization was collaborating with another national organization to develop a reader-friendly guide on shared living to further help state Medicaid agencies and other home care providers implement FLSA requirements. The design of states’ shared living programs can vary greatly, as can the specific facts and circumstances around these living situations. For example, in one state we visited, a state official said that most shared living arrangements in the state consist of home care providers opening up their homes to consumers. In another state we visited, home care workers typically move into the home of the consumer. The shared living guidance provides detailed examples about different types of live-in arrangements including: (1) those in which a consumer lives in a home care worker’s house, (2) those in which a home care worker lives in a consumer’s house, and (3) those in which the two move into a new house together (see fig. 6). The guidance on shared living also explains that FLSA requirements may or may not apply to home care workers in these arrangements and will depend, in part, on whether the home care worker would be considered an employee or an independent contractor. To determine this, DOL uses the “economic realities” test, which reviews a series of factors. Some of the factors reviewed in determining whether a worker is an employee or an independent contractor include who sets wages and work hours and who determines how the work is performed. The guidance also provides various examples of shared living arrangements and what the result would be under the economic realities test. 
According to the guidance, in most situations where a consumer moves into the home care worker’s existing home, the worker will not be considered an employee of the consumer under the FLSA. This is primarily because the home care worker has a greater degree of control over the work and investment in the arrangement, including maintaining and modifying the residence. On the other hand, the guidance says that when a worker moves into the home of a consumer, the worker will typically be considered an employee of the consumer under the FLSA, because the consumer has invested in and controls the residence, and likely sets the schedule and directs how and when tasks are to be performed. Several of the national organizations and states we interviewed appreciated DOL’s efforts to explain how these different types of Medicaid services are delivered and the specific examples provided in the shared living guidance. Based on the guidance, officials in two states we visited said that they do not anticipate having to make changes because they believe the FLSA does not apply to them based on the way their shared living programs are structured. Joint employment—DOL reached out to stakeholders to understand how different Medicaid programs are structured to develop guidance on the application of FLSA joint employment principles and to help states implement the Home Care Rule. Under the FLSA, a single worker may be an employee of two or more employers at the same time. For example, in the context of home care, a consumer and a third party such as a state Medicaid agency or a private home care agency may jointly employ a worker. Because the FLSA will apply to many more home care workers once the Home Care Rule goes into effect, in June 2014 DOL issued an Administrator’s Interpretation providing guidance on joint employment. 
This guidance could help stakeholders understand situations in which there is a third party joint employer for purposes of FLSA, such as in certain Medicaid-funded consumer-directed programs. According to DOL, most, but not all, consumer-directed programs will have a third party joint employer such as a private agency, non-profit organization, or public entity in addition to the consumer. States have been reexamining their roles and responsibilities under Medicaid in order to determine if they are third party joint employers in certain situations and therefore responsible for meeting the FLSA’s minimum wage and overtime requirements, according to an official at the National Association of Medicaid Directors. States are particularly concerned about situations in which they may be a joint employer to home care workers who provide care for multiple consumers because such workers may accrue overtime. For example, in one state we visited, a home care worker said she works for three different consumers under the state’s consumer-directed programs where the consumer or a family member is considered the employer. She does not work for any one consumer for more than 40 hours per week. If the state is a joint employer for these programs, it may be responsible for compensating the worker with overtime wages any time she works more than 40 hours per week cumulatively across any of the consumers (see fig. 7). Certain states have already determined that they are joint employers. Prior to issuing the Administrator’s Interpretation on joint employment, DOL responded to specific inquiries from states about the application of FLSA to specific scenarios. For example, an Oregon state official asked DOL to provide an official opinion on whether the state would be considered a joint employer of home care workers who provided services under their Oregon-sponsored consumer-directed Medicaid program. 
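The situation shown in figure 7 turns on cumulative hours: no single consumer's hours trigger overtime, but a joint employer must count them together. A hedged sketch with hypothetical names and numbers:

```python
def joint_overtime_hours(hours_by_consumer):
    """If a third party (such as a state agency) jointly employs the
    worker, FLSA overtime is based on cumulative weekly hours across
    all consumers, not hours per consumer. Illustrative sketch only."""
    total = sum(hours_by_consumer.values())
    return max(0.0, total - 40.0)

# No single consumer reaches 40 hours, but the combined week does:
week = {"consumer_a": 20.0, "consumer_b": 18.0, "consumer_c": 12.0}
print(joint_overtime_hours(week))  # 10.0 hours of overtime owed by a joint employer
```

This is why a joint-employment determination matters to states: absent joint employment, each of the three arrangements above stays under 40 hours and no overtime is owed.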
In response, DOL applied the “economic realities” test used by the courts, using information provided by the state, and determined that the state would be considered a joint employer in that program. One state we visited, California, has already determined based on prior experience that it would be considered a joint employer under the FLSA in certain Medicaid-funded situations and would therefore be responsible for paying eligible workers overtime under the new rule. For one California program, the state considered limiting workers’ hours to 40 hours per week to help control costs, but ultimately budgeted more than $170 million to pay the approximately 50,000 workers in that program who work up to a certain amount of overtime, with that cost expected to increase. However, five of the six states we visited were still in the process of trying to determine how the FLSA joint employment principles apply to their specific Medicaid programs and whether they will have to make any changes to their programs to keep the cost of home care services affordable. Two of these states also said they need additional time to understand the joint employment guidance in order to determine how their existing Medicaid programs may be affected by the FLSA framework and what steps they may need to take to comply. Officials in these two states also said they are trying to figure out if any of their programmatic changes would require CMS approval. CMS officials said the approval process could take from 90 to 180 days, depending on the type of change. While DOL has several enforcement mechanisms in place to oversee the Home Care Rule’s implementation once it goes into effect, DOL officials said they are currently focusing their efforts on technical assistance to help employers and states with implementing the rule. For example, in October 2014, DOL announced that it will not bring enforcement actions against any employer for violations of the FLSA resulting from the Home Care Rule through June 2015. 
DOL stated that it had received requests from national organizations representing state Medicaid programs and disability advocates and two states to extend the effective date of the Home Care Rule. According to DOL, these entities requested additional time to make programmatic, budgetary, and operational adjustments to state Medicaid programs. DOL also said it received requests from national organizations and worker advocates to implement the Home Care Rule on the published date of January 1, 2015, so that workers could be protected by the FLSA without delay. While DOL did not extend the effective date of the Home Care Rule, the agency recognized that the implementation of the Home Care Rule raised sensitive issues, and it has been working with employers to minimize the effects on consumers who rely on home care. For these reasons, DOL stated that during the six-month period of non-enforcement, it will concentrate its resources on providing intensive technical assistance to the community, particularly state agencies administering home care programs. In the notice, DOL stated that from July 1, 2015, until December 31, 2015, it will make determinations on a case-by-case basis as to whether to bring enforcement actions in the home care context and will give strong consideration to an employer’s efforts to meet the requirements under the FLSA. During this time period, DOL plans to continue extensive outreach and technical assistance. While DOL has adopted this phased-in enforcement strategy to promote compliance, the effects of the Home Care Rule still remain uncertain. In 2011, the President signed Executive Order 13563, entitled Improving Regulation and Regulatory Review. 
The order states that our regulatory system “must promote predictability and reduce uncertainty.” The order directed each federal executive agency to develop a plan for conducting retrospective reviews of existing significant regulations, to determine whether they should be modified, streamlined, expanded, or repealed to make the agency’s regulatory program more effective or less burdensome. We previously reported that retrospective reviews are useful because regulations can change the behavior of regulated entities and the public in ways that cannot be predicted prior to implementation. In addition, these reviews may result in various outcomes such as changes to regulations, changes or additions to guidance, decisions to conduct additional studies, or validation that existing rules were working as planned. We have also observed that it is important for agencies to consider in advance how they will evaluate their regulations, to better position them to conduct future retrospective analyses. DOL officials told us it would be premature to think about the effects of the rule since they are developing further implementation plans and focusing on providing technical assistance. DOL officials also said they had not made any plans for evaluating the Home Care Rule and that it is too early to design such a study. In 2007, we recommended that during the promulgation of certain new rules, agencies consider how and when they will measure the performance of the regulation, including how and when they will collect, analyze, and report the data needed to conduct a retrospective review. Consistent with that recommendation, guidance from the Office of Management and Budget also states that regulations should be designed and written in ways that facilitate evaluation of their consequences and thus promote retrospective analyses. 
Home care agencies and state Medicaid programs are responding to the Home Care Rule through programmatic and policy changes, and while more home care workers will be entitled to receive federal minimum wage and overtime pay protections as a result of the Home Care Rule, the effects of the rule will remain unclear until after it is implemented. The effects on workers and consumers will largely depend on how employers respond to overtime requirements and the changes that are made to state Medicaid programs. Given that many state Medicaid programs have relied on the companionship services exemption and emphasize providing care at home, state Medicaid officials continue to be concerned about maintaining services for consumers in their homes as opposed to institutions. DOL’s recent announcement regarding the phase-in of its enforcement, coupled with its substantial outreach to stakeholders and revised estimate of the economic impact of the rule, highlight the challenges and complexity of implementing the rule. The future of the Home Care Rule will depend on the outcome of pending litigation and DOL’s subsequent decisions about implementation. Should the rule go into effect as scheduled, once it is fully implemented, DOL will have an opportunity to examine some of the broader implications of the Home Care Rule such as whether the workforce grows as a result to meet any increased demand for home care and whether there is any resulting increase in the use of institutional care. However, in the absence of a plan to evaluate the effects of the rule, DOL may be unprepared to collect the necessary data to take advantage of this opportunity. Given the uncertainty of the Home Care Rule’s ultimate impact, planning for a retrospective review is important because regulations can change behaviors and result in consequences that cannot be predicted prior to implementation. 
Depending on the outcome of the litigation, the Secretary of Labor should take steps to ensure the agency will be positioned to conduct a meaningful retrospective review consistent with the Executive Order at an appropriate time. These steps should be taken in consultation with the Centers for Medicare & Medicaid Services, and could include, for example, identifying metrics that could be used to evaluate the rule, and implementing a plan to gather and analyze the necessary data. We provided a draft of this report to DOL for review and comment. DOL’s Wage and Hour Division (WHD) provided written comments which are reproduced in appendix I. WHD agreed with our recommendation that the agency position itself to conduct a meaningful retrospective review at an appropriate time. Moving forward, WHD said it is working to develop data collection plans and to explore a potential evaluation that is focused on the Home Care Rule. As part of this effort, WHD noted that DOL will continue to work with HHS and other federal partners. DOL also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to appropriate congressional committees, the Secretary of Labor, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions concerning this report, please contact me at (202) 512-7215 or sherrilla@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. In addition to the contact named above, Blake Ainsworth (Assistant Director), Anjali Tekchandani (Analyst-in-Charge), Robert Campbell, Sara Edmondson, and Meredith Moore made significant contributions to all phases of the work. 
Also contributing to this report were James Bennett, Kenneth Bombara, Catina Bradley, Sarah Cornetto, Patricia Donahue, Alexandra Edwards, Katherine Iritani, Kathy Leslie, Sheila McCoy, Linda McIver, Jean McSween, Clarita Mrena, Mimi Nguyen, and Kathleen van Gelder. | Older adults and people with disabilities are increasingly receiving care at home, and home care workers are performing increasingly skilled duties. DOL recently revised its FLSA regulations to extend minimum wage and overtime protections to more of those home care workers. The Home Care Rule, scheduled to go into effect January 2015, may affect a diverse set of stakeholders, including home care workers, consumers receiving home care services, private home care agencies, and state Medicaid programs. GAO was asked to assess the potential effects of this rule. GAO examined (1) changes DOL made in the Home Care Rule and factors it considered during the rulemaking process, (2) the potential effects of the rule identified by key stakeholders, and (3) steps DOL has taken to help state Medicaid agencies and other stakeholders understand and comply with the Home Care Rule. GAO visited six state Medicaid programs selected in part for variation in state Medicaid program design; reviewed relevant federal regulations; and interviewed government officials and representatives from 14 national organizations representing the spectrum of home care stakeholders, including workers and consumers. The Department of Labor's (DOL) Home Care Rule is expected to increase the number of home care workers who qualify for minimum wage and overtime protections under the Fair Labor Standards Act of 1938, as amended (FLSA). The Home Care Rule narrows the definition of companionship services and limits who may claim the companionship services exemption, among other changes. It is scheduled to go into effect in January 2015, although a challenge to the rule is currently pending in federal court. 
When developing the rule, DOL considered several factors, including the growth and specialization of the home care workforce, as well as the amount of time needed to make adjustments. Representatives from national organizations GAO interviewed identified potential effects of the Home Care Rule on jobs and earnings, employer costs, and care, but did not always agree. Some representatives said extending FLSA protections to home care workers will create more full-time employment opportunities for part-time workers, while others said those who work more than 40 hours in a workweek may see reduced hours and earnings. Some representatives said employers may face increased business costs to pay overtime and some said that certain consumers could be placed in institutions because of possible service cost increases. Effects on Medicaid home care services will vary by state. Officials in five of six states GAO visited explained that they were still assessing possible changes to their programs, while one state had determined what changes it would make to comply with the new rule. After the Home Care Rule was published, DOL collaborated with other federal agencies and stakeholders to develop guidance, conduct outreach, and provide technical assistance to help stakeholders plan for implementation. For example, DOL worked with the Centers for Medicare & Medicaid Services to develop guidance on applying FLSA principles to different home care living arrangements commonly funded by Medicaid. DOL officials said they are focusing on technical assistance to help employers and states with implementation and have developed a phased-in enforcement strategy. The effects of the Home Care Rule, such as whether the workforce will grow or the use of institutional care will increase, remain uncertain, and DOL officials said they do not currently have any plans to evaluate the rule. 
Depending on the outcome of the litigation, GAO recommends that the Secretary of Labor take steps to ensure the agency will be positioned to conduct a meaningful retrospective review of the rule at an appropriate time. DOL agreed with this recommendation and is working on developing data collection plans. |
The SBInet program is responsible for identifying and deploying an appropriate mix of technology (e.g., sensors, cameras, radars, communications systems, and mounted laptop computers for agent vehicles), tactical infrastructure (e.g., fencing, vehicle barriers, and roads), rapid response capability (e.g., the ability to quickly relocate operational assets and personnel), and personnel (e.g., program staff and Border Patrol agents) that will enable CBP agents and officers to gain effective control of U.S. borders. SBInet technology is also intended to include the development and deployment of a common operating picture (COP) that provides uniform data through a command center environment to Border Patrol agents in the field and all DHS agencies, and that is interoperable with stakeholders external to DHS, such as local law enforcement. The initial focus of SBInet is on the southwest border areas between ports of entry that CBP has designated as having the highest need for enhanced border security because of serious vulnerabilities. Through SBInet, CBP plans to complete a minimum of 387 miles of technology deployment across the southwest border by December 31, 2008. Figure 1 shows the location of select SBInet projects underway on the southwest border. In September 2006, CBP awarded a prime contract to the Boeing Company for 3 years, with three additional 1-year options. As the prime contractor, Boeing is responsible for acquiring, deploying, and sustaining selected SBInet technology and tactical infrastructure projects. In this role, Boeing has extensive involvement in SBInet requirements development, design, production, integration, testing, and maintenance and support of SBInet projects. Moreover, Boeing is responsible for selecting and managing a team of subcontractors that provide individual components for Boeing to integrate into the SBInet system. 
The SBInet contract is largely performance-based—that is, CBP has set requirements for SBInet, and Boeing and CBP coordinate and collaborate to develop solutions to meet those requirements—and designed to maximize the use of commercial off-the-shelf technology. CBP’s SBInet PMO oversees and manages the Boeing-led SBInet contractor team. The SBInet PMO workforce includes a mix of government and contractor support staff. The SBInet PMO reports to the CBP SBI Program Executive Director. CBP is executing part of SBInet activities through a series of task orders to Boeing for individual projects. As of September 30, 2007, CBP had awarded five task orders to Boeing for SBInet projects. These include task orders for (1) Project 28, Boeing’s pilot project and initial implementation of SBInet technology to achieve control of 28 miles of the border in the Tucson sector; (2) Project 37, for construction of approximately 32 miles of vehicle barriers and pedestrian fencing in the Yuma sector along the Barry M. Goldwater Range (BMGR); (3) Program Management, for engineering, facilities and infrastructure, test and evaluation, and general program management services; (4) Fence Lab, a project to evaluate the performance and cost of deploying different types of fences and vehicle barriers; and (5) a design task order for developing the plans for several technology projects to be located in the Tucson, Yuma, and El Paso sectors. In addition to deploying technology across the southwest border, the SBInet PMO plans to deploy 370 miles of single-layer pedestrian fencing and 200 miles of vehicle barriers by December 31, 2008. Whereas pedestrian fencing is designed to prevent people on foot from crossing the border, vehicle barriers are physical barriers meant to stop the entry of vehicles. The SBInet PMO is utilizing the U.S. 
Army Corps of Engineers (USACE) to contract for fencing and supporting infrastructure (such as lights and roads), complete required environmental assessments, and acquire necessary real estate. DHS has estimated that the total cost for completing the deployment for the southwest border—the initial focus of SBInet deployment—will be $7.6 billion from fiscal years 2007 through 2011. DHS has not yet reported the estimated life cycle cost for this program, which is the total cost to the government for a program over its full life, consisting of research and development, operations, maintenance, and disposal costs. For fiscal year 2007, Congress appropriated about $1.2 billion for SBInet, of which DHS had committed or obligated about 40 percent as of September 30, 2007. For fiscal year 2008, DHS has requested an additional $1 billion. DHS has made some progress in implementing Project 28—the first segment of technology on the southwest border—but it has fallen behind its planned schedule. Project 28 is the first opportunity for Boeing to demonstrate that its technology system can meet SBInet performance requirements in a real-life environment. Boeing’s inability thus far to resolve system integration issues has left Project 28 incomplete more than 4 months after its planned June 13 milestone to become operational—at which point Border Patrol agents were to begin using SBInet technology to support their activities, and CBP was to begin its operational test and evaluation phase. Boeing delivered and deployed the individual technology components of Project 28 on schedule. Nevertheless, CBP and Boeing officials reported that Boeing has been unable to effectively integrate the information collected from several of the newly deployed technology components, such as sensor towers, cameras, radars, and unattended ground sensors. 
Among the technical problems reported were that radar information was taking too long to display in command centers and that newly deployed radars were being activated by rain, making the system unusable. In August 2007, CBP officially notified Boeing that it would not accept Project 28 until these and other problems were corrected. In September 2007, CBP officials told us that Boeing was making progress in correcting the system integration problems; however, CBP was unable to provide us with a specific date when Boeing would complete the corrections necessary to make Project 28 operational. See figures 2 and 3 below for photographs of SBInet technology along the southwest border. The SBInet PMO reported that it is in the early stages of planning for additional SBInet technology projects along the southwest border; however, Boeing’s delay in completing Project 28 has led the PMO to change the timeline for deploying some of these projects. In August 2007, SBInet PMO officials told us they were revising the SBInet implementation plan to delay interim project milestones for the first phase of SBInet technology projects, scheduled for calendar years 2007 and 2008. For example, SBInet PMO officials said they were delaying the start dates for two projects that were to be modeled on the design used for Project 28 until after Project 28 is operational and can provide lessons learned for planning and deploying additional SBInet technology along the southwest border. According to the SBInet master schedule dated May 31, 2007, these projects were to become operational in December 2007 and May 2008. Despite these delays, SBInet PMO officials said they still expected to complete all of the first phase of technology projects by the end of calendar year 2008. As of October 15, 2007, the SBInet PMO had not provided us with a revised deployment schedule for this first phase. CBP reports that it is taking steps to strengthen its contract management for Project 28. 
For example, citing numerous milestone slippages by Boeing during Project 28 implementation, in August 2007, CBP sought and reached an agreement with Boeing to give it greater influence in milestone setting and planning corrective actions on the Project 28 task order. While CBP selected a firm-fixed-price contract to limit cost overruns on Project 28, CBP officials told us that this contract type had limited the government’s role in directing Boeing’s decision-making process. For example, CBP and contractor officials told us they had expressed concern about the timeline for completing Project 28, but CBP chose not to modify the contract because doing so would have made CBP responsible for costs beyond the $20 million fixed-price contract. In mid-August 2007, CBP organized a meeting with Boeing representatives to discuss ways to improve the collaborative process, the submission of milestones, and Boeing’s plan to correct Project 28 problems. Following this meeting, CBP and Boeing initiated a Change Control Board. In mid-September, representatives from Boeing’s SBInet team and its subcontractors continued to participate on this board and vote on key issues for solving Project 28 problems. Although CBP participates on this board as a non-voting member, a senior SBInet official said the government’s experience on the Change Control Board had been positive thus far. For example, the official told us that the Change Control Board had helped improve coordination and integration with Boeing and had helped in suggesting changes to the subcontractor companies working on Project 28. Deploying SBInet’s tactical infrastructure along the southwest border is on schedule, but meeting the SBInet program’s goal to have 370 miles of pedestrian fence and 200 miles of vehicle barriers in place by December 31, 2008, may be challenging and more costly than planned. 
CBP set an intermediate goal to deploy 70 miles of new pedestrian fencing by the close of fiscal year 2007 and, having deployed 73 miles by this date, achieved its goal. Table 1 summarizes CBP’s progress and plans for tactical infrastructure deployment. Costs for the 73 miles of fencing constructed in fiscal year 2007 averaged $2.9 million per mile and ranged from $700,000 per mile in San Luis, Arizona, to $4.8 million per mile in Sasabe, Arizona. CBP also deployed 11 miles of vehicle barriers and, although CBP has not yet been able to provide us with the cost of these vehicle barriers, it projects that the average per mile cost for the first 75 miles of barriers it deploys will be $1.5 million. Figure 4 presents examples of fencing deployed. CBP estimates that costs for future fencing deployment will be similar to those thus far. However, according to CBP officials, costs vary due to the type of terrain, materials used, land acquisition, who performs the construction, and the need to meet an expedited schedule. Although CBP estimates that the average cost of remaining fencing will be $2.8 million per mile, actual future costs may be higher due to factors such as the greater cost of commercial labor, higher than expected property acquisition costs, and unforeseen costs associated with working in remote areas. To minimize construction labor costs, one of the many factors that add to cost, DHS has in the past used Border Patrol agents and DOD military personnel. However, CBP officials reported that they plan to use commercial labor for future infrastructure projects to meet their deadlines. Of the 73 miles of fencing completed to date, 31 were completed by DOD military personnel and 42 were constructed through commercial contracts. While the non-commercial projects cost an average of $1.2 million per mile, the commercial projects averaged over three times more—$4 million per mile. 
According to CBP officials, CBP plans to use exclusively commercial contracts to complete the remaining 219 miles of fencing. If contract costs for the remaining miles are consistent with those for tactical infrastructure deployed to date and average $4 million per mile, the total contract cost will be $890 million, considerably more than CBP’s initial estimate of $650 million. Although deployment of tactical infrastructure is on schedule, CBP officials reported that meeting deadlines has been challenging because of factors they will continue to face: conducting the outreach necessary to address border community resistance, devoting time to identifying and completing the steps necessary to comply with environmental regulations, and addressing difficulties in acquiring rights to border lands. As of July 2007, CBP anticipated community resistance to deployment for 130 of its 370 miles of fencing. According to community leaders, communities resist fencing deployment for reasons including the adverse effect they anticipate it will have on cross-border commerce and community unity. In addition to consuming time, complying with environmental regulations and acquiring rights to border land can also drive up costs. Although CBP officials state that they are proactively addressing these challenges, these factors will continue to pose a risk to meeting deployment targets. In an effort to identify low-cost and easily deployable fencing solutions, CBP funded a project called Fence Lab. CBP plans to try to contain costs by applying the results of Fence Lab in the future. Fence Lab tested nine fence/barrier prototypes and evaluated them against performance criteria such as their ability to disable a vehicle traveling at 40 miles per hour (see fig. 5), their allowance for animal migration through them, and their cost-effectiveness. 
Based on the results from the lab, SBInet has developed three types of vehicle barriers and one pedestrian fence that meet CBP operational requirements (see fig. 6). The pedestrian fence can be installed onto two of these vehicle barriers to create a hybrid pedestrian fence and vehicle barrier. CBP plans to include these solutions in a “toolkit” of approved fences and barriers, and plans to deploy solutions from this toolkit for all remaining vehicle barriers and for 202 of the 225 miles of remaining fencing. Further, CBP officials anticipate that deploying these solutions will reduce costs because cost-effectiveness was a criterion for their inclusion in the toolkit. SBInet officials also told us that widely deploying a select set of vehicle barriers and fences will lower costs by enabling bulk purchases of construction and maintenance materials. While SBInet program officials expect SBInet to greatly reduce the time spent by CBP enforcement personnel in performing detection activities, a full evaluation of SBInet’s impact on the Border Patrol’s workforce needs has not been completed. The Border Patrol currently uses a mix of resources, including personnel, technology, infrastructure, and rapid response capabilities, to incrementally achieve its strategic goal of establishing and maintaining operational control of the border. Each year, through its Operational Requirements Based Budget Program (ORBBP), the Border Patrol sectors outline the amount of resources needed to achieve a desired level of border control. Border Patrol officials state that this annual planning process allows the organization to measure the impact of each type of resource on the required number of Border Patrol agents. A full evaluation of SBInet’s impact on the Border Patrol’s workforce needs is not yet included in the ORBBP process; however, the Border Patrol plans to incorporate information from Project 28 a few months after it becomes operational. 
According to agency officials, CBP is on track to meet its hiring goal of 6,000 new Border Patrol agents by December 2008, but after SBInet is deployed, CBP officials expect the number of Border Patrol agents required to meet mission needs to change from current projections, although the direction and magnitude of the change are unknown. In addition, in June 2007, we expressed concern that deploying these new agents to the southwest sectors, coupled with the planned transfer of more experienced agents to the northern border, will create a disproportionate ratio of new agents to supervisors within those sectors—jeopardizing the supervisors’ availability to acclimate new agents. Tucson sector officials stated that CBP plans to hire 650 to 700 supervisors next year. To accommodate the additional agents, the Border Patrol has taken initial steps to provide additional work space by constructing temporary and permanent facilities, at a projected cost of about $550 million from fiscal years 2007 to 2011. The SBInet PMO expects SBInet to support day-to-day border enforcement operations; however, analysis of the impact of SBInet technology on the Border Patrol’s operational procedures cannot be completed at this time because agents have not been able to fully use the system as intended. Leveraging technology is part of the National Border Patrol Strategy, which identifies the objectives, tools, and initiatives the Border Patrol uses to maintain operational control of the borders. The Tucson sector, where Project 28 is being deployed, is developing a plan on how to integrate SBInet into its operating procedures. Border Patrol officials stated they intend to re-evaluate this strategy as SBInet technology is identified and deployed, and as control of the border is achieved. 
According to agency officials, 22 trainers and 333 operators were trained on the current Project 28 system, but because of deployment delays and changes to the COP software, the SBInet training curriculum is to be revised by Boeing and the government. Training is continuing during this revision process, with 24 operators being trained each week. According to CBP officials, Border Patrol agents are receiving “hands on” training during evening and weekend shifts at the COP workstations to familiarize themselves with the recent changes made to the Project 28 system. However, this training is to stop once a stabilized version of the COP can be used, and both trainers and operators are to be retrained using the revised curriculum. Costs associated with revising the training material and retraining the agents are to be covered by Boeing as part of the Project 28 task order; however, the government may incur indirect costs associated with taking agents offline for retraining. The SBI PMO tripled in size in fiscal year 2007 but fell short of its staffing goal of 270 employees. As of September 30, 2007, the SBI PMO had 247 employees onboard, comprising 113 government employees and 134 contractor support staff. SBI PMO officials also reported that as of October 19, 2007, they had 76 additional staff awaiting background investigations. In addition, these officials said that a Human Capital Management Plan has been drafted, but as of October 22, 2007, the plan had not been approved. In February 2007, we reported that SBInet officials had planned to finalize a human capital strategy that was to include details on staffing and expertise needed for the program. At that time, SBI and SBInet officials expressed concern about difficulties in finding an adequate number of staff with the required expertise to support planned activities and cautioned that staffing shortfalls could limit government oversight efforts. 
Strategic human capital planning is a key tool for defining the critical skills and competencies that will be needed to achieve programmatic goals and for outlining ways the organization can fill gaps in knowledge, skills, and abilities. Until SBInet fully implements a comprehensive human capital strategy, it will continue to risk not having staff with the right skills and abilities to successfully execute the program. Project 28 and other early technology and infrastructure projects are the first steps on a long journey toward SBInet implementation that will ultimately require an investment of billions of taxpayer dollars. Some of these early projects have encountered unforeseen problems that could affect DHS’s ability to meet projected completion dates, expected costs, and performance goals. These issues underscore the need for both DHS and Boeing, as the prime contractor, to continue to work cooperatively to correct the problems remaining with Project 28 and to ensure that the SBInet PMO has adequate staff to effectively plan and oversee future projects. These issues also underscore Congress’s need to stay closely attuned to DHS’s progress in the SBInet program to make sure that performance, schedule, and cost estimates are achieved and the nation’s border security needs are fully addressed. This concludes my prepared testimony. I would be happy to respond to any questions that members of the Subcommittees may have. For questions regarding this testimony, please call Richard M. Stana at (202) 512-8777 or StanaR@gao.gov. Other key contributors to this statement were Robert E. White, Assistant Director; Rachel Beers; Jason Berman; Katherine Davis; Jeanette Espínola; Taylor Matheson; and Sean Seales. 
To determine the progress that the Department of Homeland Security (DHS) has made in implementing the Secure Border Initiative’s (SBI) SBInet technology deployment projects, we analyzed DHS documentation, including program schedules, project task orders, status reports, and expenditures. We also interviewed DHS and U.S. Customs and Border Protection (CBP) headquarters and field officials, including representatives of the SBInet Program Management Office (PMO), Border Patrol, CBP Air and Marine, and the DHS Science and Technology Directorate, as well as SBInet contractors. We visited the Tucson Border Patrol sector—the site where SBInet technology deployment was underway at the time of our review. To determine the progress that DHS has made in infrastructure project implementation, we analyzed DHS documentation, including schedules, contracts, status reports, and expenditures. In addition, we interviewed DHS and CBP headquarters and field officials, including representatives of the SBInet PMO and Border Patrol. We also interviewed officials from the U.S. Army Corps of Engineers and the Department of the Interior. We visited the Tucson and Yuma, Arizona, Border Patrol sectors—two sites where tactical infrastructure projects were underway at the time of our review. We did not review the justification for infrastructure project cost estimates or independently verify the source or validity of the cost information. To determine the extent to which CBP has determined the impact of SBInet technology and infrastructure on its workforce needs and operating procedures, we reviewed documentation of the agency’s decision to hire an additional 6,000 agents and the progress made in hiring these agents. 
We also interviewed headquarters and field officials to track if and how CBP (1) is hiring and training its target number of personnel, (2) is planning to train new agents on SBInet technology, and (3) will incorporate the new system into its operational procedures, as well as any implementation challenges CBP reports facing in this effort. To determine how the SBInet PMO defined its human capital goals and the progress it has made in achieving these goals, we reviewed the office’s documentation on its hiring efforts related to SBInet and related timelines, and compared this information with agency goals. We determined that the workforce data were sufficiently reliable for the purposes of this report. We also interviewed SBI and SBInet officials to identify challenges in meeting the goals and steps taken by the agency to address those challenges. We performed our work from April 2007 through October 2007 in accordance with generally accepted government auditing standards. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | In November 2005, the Department of Homeland Security (DHS) established the Secure Border Initiative (SBI), a multiyear, multibillion dollar program to secure U.S. borders. One element of SBI is SBInet--the U.S. Customs and Border Protection (CBP) program responsible for developing a comprehensive border protection system through a mix of security infrastructure (e.g., fencing) and surveillance and communication technologies (e.g., radars, sensors, cameras, and satellite phones). The House Committee on Homeland Security asked GAO to monitor DHS progress in implementing the SBInet program. 
This testimony provides GAO's observations on (1) SBInet technology implementation; (2) SBInet infrastructure implementation; (3) the extent to which CBP has determined the impact of SBInet technology and infrastructure on its workforce needs and operating procedures; and (4) how the CBP SBI Program Management Office (PMO) has defined its human capital goals and the progress it has made to achieve these goals. GAO's observations are based on analysis of DHS documentation, such as program schedules, contracts, and status reports. GAO also conducted interviews with DHS officials and contractors and visited sites along the southwest border where SBInet deployment is underway. GAO performed the work from April 2007 through October 2007. DHS generally agreed with GAO's findings. DHS has made some progress in implementing Project 28--the first segment of SBInet technology across the southwest border--but it has fallen behind its planned schedule. The SBInet contractor delivered the components (i.e., radars, sensors, and cameras) to the Project 28 site in Tucson, Arizona, on schedule. However, Project 28 is incomplete more than 4 months after it was to become operational--at which point Border Patrol agents were to begin using SBInet technology to support their activities. According to DHS, the delays are primarily due to software integration problems. In September 2007, DHS officials said that the Project 28 contractor was making progress in correcting the problems, but DHS was unable to specify a date when the system would be operational. Due to the slippage in completing Project 28, DHS is revising the SBInet implementation schedule for follow-on technology projects, but still plans to deploy technology along 387 miles of the southwest border by December 31, 2008. DHS is also taking steps to strengthen its contract management for Project 28. 
SBInet infrastructure deployment along the southwest border is on schedule, but meeting CBP's goal to have 370 miles of pedestrian fence and 200 miles of vehicle barriers in place by December 31, 2008, may be challenging and more costly than planned. CBP met its intermediate goal to deploy 70 miles of new fencing in fiscal year 2007 and the average cost per mile was $2.9 million. The SBInet PMO estimates that deployment costs for remaining fencing will be similar to those thus far. In the past, DHS has minimized infrastructure construction labor costs by using Border Patrol agents and Department of Defense military personnel. However, CBP officials report that they plan to use commercial labor for future fencing projects. The additional cost of commercial labor and potential unforeseen increases in contract costs suggest future deployment could be more costly than planned. DHS officials also reported other challenging factors they will continue to face for infrastructure deployment, including community resistance, environmental considerations, and difficulties in acquiring rights to land along the border. The impact of SBInet on CBP's workforce needs and operating procedures remains unclear because the SBInet technology is not fully identified or deployed. CBP officials expect the number of Border Patrol agents required to meet mission needs to change from current projections, but until the system is fully deployed, the direction and magnitude of the change is unknown. For the Tucson sector, where Project 28 is being deployed, Border Patrol officials are developing a plan on how to integrate SBInet into their operating procedures. The SBI PMO tripled in size during fiscal year 2007, but fell short of its staffing goal of 270 employees. Agency officials expressed concerns that staffing shortfalls could affect the agency's capacity to provide adequate contractor oversight. In addition, the SBInet PMO has not yet completed long-term human capital planning. |
Homeland security is a complex mission that involves a broad range of functions performed throughout government, including law enforcement, transportation, food safety and public health, information technology, and emergency management, to mention only a few. Federal, state, and local governments have a shared responsibility in preparing for catastrophic terrorist attacks as well as other disasters. The initial responsibility for planning, preparedness, and response falls upon local governments and their organizations—such as police, fire departments, emergency medical personnel, and public health agencies—which will almost invariably be the first responders to such an occurrence. For its part, the federal government has principally provided leadership, training, and funding assistance. The federal government’s role in responding to major disasters has historically been defined by the Stafford Act, which makes most federal assistance contingent on a finding that the disaster is so severe as to be beyond the capacity of state and local governments to respond effectively. Once a disaster is declared, the federal government—through the Federal Emergency Management Agency (FEMA)—may reimburse state and local governments for between 75 and 100 percent of eligible costs, including response and recovery activities. In addition to post-disaster assistance, there has been an increasing emphasis over the past decade on federal support of state and local governments to enhance national preparedness for terrorist attacks. After the nerve gas attack in the Tokyo subway system on March 20, 1995, and the Oklahoma City bombing on April 19, 1995, the United States initiated a new effort to combat terrorism. In June 1995, Presidential Decision Directive 39 was issued, enumerating responsibilities for federal agencies in combating terrorism, including domestic terrorism. 
Recognizing the vulnerability of the United States to various forms of terrorism, the Congress passed the Defense Against Weapons of Mass Destruction Act of 1996 (also known as the Nunn-Lugar-Domenici program) to train and equip state and local emergency services personnel who would likely be the first responders to a domestic terrorist event. Other federal agencies, including FEMA; the Departments of Justice, Health and Human Services, and Energy; and the Environmental Protection Agency, have also developed programs to assist state and local governments in preparing for terrorist events. As emphasis on terrorism prevention and response grew, however, so did concerns over coordination and fragmentation of federal efforts. More than 40 federal entities have a role in combating and responding to terrorism, and more than 20 in bioterrorism alone. Our past work, conducted prior to the establishment of an Office of Homeland Security and the current proposals to create a new Department of Homeland Security, has shown coordination and fragmentation problems stemming largely from a lack of accountability within the federal government for terrorism-related programs and activities. Further, our work found that the absence of a central focal point resulted in a lack of cohesive effort and in the development of similar and potentially duplicative programs. Also, as the Gilmore Commission report notes, state and local officials have voiced frustration about their attempts to obtain federal funds from different programs administered by different agencies and have argued that the application process is burdensome and inconsistent among federal agencies. 
President Bush has taken a number of important steps in the aftermath of the terrorist attacks of September 11th to address the concerns of fragmentation and to enhance the country’s homeland security efforts, including creating the Office of Homeland Security in October 2001, proposing the Department of Homeland Security in June 2002, and issuing a national strategy in July 2002. Both the House and Senate have worked diligently on these issues and are deliberating on a variety of homeland security proposals. The House has passed its bill (H.R. 5005), and the Senate will take under consideration, after the August recess, legislation (S. 2452) to create a Department of Homeland Security. While these proposals would both transfer the functions, responsibilities, personnel, and other assets of existing agencies into the departmental structure, each bill has unique provisions not found in the other. For example, while both bills establish an office for State and Local Government Coordination and a first responder council to advise the department, the Senate bill also establishes a Chief Homeland Security Liaison Officer appointed by the Secretary and puts federal liaisons in each state to provide coordination between the department and the state and local first responders. The proposal to create a statutorily based Department of Homeland Security holds promise to better establish the leadership necessary in the homeland security area. It can more effectively capture homeland security as a long-term commitment grounded in the institutional framework of the nation’s governmental structure. As we have previously noted, the homeland security area must span the terms of various administrations and individuals. Establishing homeland security leadership by statute will ensure legitimacy, authority, sustainability, and the appropriate accountability to the Congress and the American people. 
The proposals call for the creation of a Cabinet department that would be responsible for coordination with other executive branch agencies involved in homeland security, including the Federal Bureau of Investigation and the Central Intelligence Agency. Additionally, the proposals call for coordination with nonfederal entities and direct the new Secretary to reach out to state and local governments and the private sector in order to: ensure that adequate and integrated planning, training, and exercises occur and that first responders have the necessary equipment; attain interoperability of the federal government’s homeland security communications systems with state and local governments’ systems; oversee federal grant programs for state and local homeland security efforts; and coordinate warnings and information to state and local government entities and the public. Many aspects of the proposed consolidation of homeland security programs are in line with previous recommendations and show promise towards reducing fragmentation and improving coordination. For example, the new department would consolidate federal programs for state and local planning and preparedness from several agencies and place them under a single organizational umbrella. Based on our prior work, we believe that the consolidation of some homeland security functions makes sense and will, if properly organized and implemented, over time lead to more efficient, effective, and coordinated programs, better intelligence sharing, and a more robust protection of our people, borders, and critical infrastructure. However, as the Comptroller General has recently testified, implementation of the new department will be an extremely complex task; in the short term, the magnitude of the challenges that the new department faces will clearly require substantial time and effort, and it will take additional resources to make it effective. 
Further, some aspects of the new department, as proposed, may result in yet other concerns. For example, as we reported on June 25, 2002, the new department could include public health assistance programs that have both basic public health and homeland security functions. These dual-purpose programs have important synergies that should be maintained and could potentially be disrupted by such a change. The recently issued national strategy for homeland security states it is intended to answer four basic questions: what is “homeland security” and what missions does it entail; what does the nation seek to accomplish, and what are the most important goals of homeland security; what is the federal executive branch doing now to accomplish these goals and what should it do in the future; and what should non-federal governments, the private sector, and citizens do to help secure the homeland. Within the federal executive branch, the key organization for homeland security will be the proposed Department of Homeland Security. The Department of Defense will contribute to homeland security, as will other departments such as the Departments of Justice, Agriculture, and Health and Human Services. The national strategy also makes reference to using tools of government such as grants and regulations to improve national preparedness. The national strategy defines homeland security as a concerted national effort to (1) prevent terrorist attacks within the United States, (2) reduce America’s vulnerability to terrorism, and (3) minimize the damage and recover from attacks that do occur. This definition should help the government more effectively administer, fund, and coordinate activities both inside and outside the proposed new department and ensure all parties are focused on the same goals and objectives. The three parts of the definition form the national strategy’s three objectives. 
The strategy identifies six critical mission areas and outlines initiatives in each of the six mission areas. It further describes four foundations that cut across these mission areas and all levels of government. These foundations—law; science and technology; information sharing and systems; and international cooperation—are intended to provide a basis for evaluating homeland security investments across the federal government. Table 1 summarizes key intergovernmental roles in each of the six mission areas as presented in the strategy. With regard to the costs of homeland security, the national strategy emphasizes that government should fund only those homeland security activities that are not supplied, or are inadequately supplied, in the market, and that cost sharing between different governmental levels should reflect federalism principles and different tools of government. In terms of the financial contributions made by state and local governments to homeland security, the strategy acknowledges that state and local governments are incurring unexpected costs defending or protecting their respective communities. These costs include protecting critical infrastructure, improving technologies for information sharing and communications, and building emergency response capacity. At this time, the National Governors’ Association estimates that additional homeland security-related costs, incurred since September 11th and through the end of 2002, will reach approximately $6 billion. Similarly, the U.S. Conference of Mayors has estimated the costs incurred by cities during this time period to be $2.6 billion. The proposed department will be a key player in the daunting challenge of defining the roles of the various actors within the intergovernmental system responsible for homeland security. In areas ranging from fire protection to drinking water to port security, the new threats are prompting a reassessment and shift of longstanding roles and responsibilities. 
To date, however, proposed shifts in roles and responsibilities have been considered on a piecemeal, ad hoc basis without the benefit of an overarching framework and criteria to guide the process. The national strategy recognizes that the process is challenging because of the structure of overlapping federal, state, and local governments, given that our country has more than 87,000 jurisdictions. The national strategy further notes that the challenge is to develop interconnected and complementary systems that are reinforcing rather than duplicative. The proposals for a Department of Homeland Security call for the department to reach out to state and local governments and the private sector to coordinate and integrate planning, communications, information, and recovery efforts addressing homeland security. This is an important recognition of the critical role played by nonfederal entities in protecting the nation from terrorist attacks. State and local governments play primary roles in performing functions that will be essential to effectively address our new challenges. Much attention has already been paid to their role as first responders in all disasters, whether caused by terrorist attacks or natural hazards. The national strategy emphasizes that homeland security is a shared responsibility, stressing the critical role state and local governments play and the need for coordination among all levels of government. In addition, the national strategy has several initiatives designed to improve partnerships and coordination. Table 1 provides several examples of areas with key intergovernmental roles and coordination. For example, there are initiatives to improve intergovernmental law enforcement coordination and to enable effective partnerships with state and local governments and the private sector in critical infrastructure protection. 
States are asked to take several legal initiatives, such as coordinating suggested minimum standards for state driver’s licenses and reviewing quarantine authorities. Many initiatives are intended to develop or enhance first responder capabilities, such as initiatives to improve the technical capabilities of first responders or enable seamless communication among all responders. In many cases, these initiatives will rely on federal, state, and local cooperation, some standardization, and the sharing of costs. Achieving national preparedness and response goals hinges on the federal government’s ability to form effective partnerships with nonfederal entities. Therefore, federal initiatives should be conceived as national, not federal, in nature. Decision makers have to balance the national interest of prevention and preparedness with the unique needs and interests of local communities. A “one-size-fits-all” federal approach will not serve to leverage the assets and capabilities that reside within state and local governments and the private sector. By working collectively with state and local governments, the federal government gains the resources and expertise of the people closest to the challenge. For example, responsibility for protecting infrastructure such as water and transit systems lies first and most often with nonfederal levels of government. Just as partnerships offer opportunities, they also pose risks based upon the different interests reflected by each partner. From the federal perspective, there is the concern that state and local governments may not share the same priorities for use of federal funds. This divergence of priorities can result in state and local governments simply replacing (“supplanting”) their own previous levels of commitment in these areas with the new federal resources. From the state and local perspective, engagement in federal programs opens them up to potential federal preemption and mandates. 
From the public’s perspective, partnerships, if not clearly defined, risk blurring responsibility for the outcomes of public programs. Our fieldwork at federal agencies and at local governments suggests a shift is potentially underway in the definition of roles and responsibilities between federal, state, and local governments, with far-reaching consequences for homeland security and accountability to the public. The challenges posed by the new threats are prompting officials at all levels of government to rethink long-standing divisions of responsibilities for such areas as fire services, local infrastructure protection, and airport security. Current homeland security proposals recognize that the unique scale and complexity of these threats call for a response that taps the resources and capacities of all levels of government as well as the private sector. In many areas, these proposals would impose a stronger federal presence in the form of new national standards or assistance. For instance, the Congress is considering proposals to mandate new vulnerability assessments and protective measures on local communities for drinking water facilities. Similarly, new federal rules have mandated that local airport authorities provide new levels of protection for security around airport perimeters. The block grant proposal for first responders would mark a dramatic upturn in the magnitude and role of the federal government in providing assistance and standards for fire service training and equipment. Additionally, the national strategy suggests initiatives for an expanded state role in several areas. For example, there are no national or agreed-upon state standards for driver’s license content, format, or acquisition procedures. The strategy states that the federal government should support state-led efforts to develop suggested minimum standards for driver’s licenses. 
In another example, in order to suppress money laundering, the strategy recommends that states assess the current status of their regulation of providers of financial services and work to adopt uniform laws as necessary. Governments at the local level are also moving to rethink roles and responsibilities to address the unique scale and scope of the contemporary threats from terrorism. Numerous local general-purpose governments and special districts co-exist within metropolitan regions and rural areas alike. Many regions are starting to assess how to restructure relationships among contiguous local entities to take advantage of economies of scale, promote resource sharing, and improve coordination of preparedness and response on a regional basis. In our case studies of five metropolitan areas, we have identified several common forms of regional cooperation and coordination, including special task forces or working groups, improved collaboration among public health entities, increased countywide planning, mutual aid agreements, and communications. These partnerships are at varying stages of development and are continuing to evolve. Table 2 summarizes these initiatives. Although promising greater levels of protection than before, these shifts in roles and responsibilities have been developed on an ad hoc, piecemeal basis without the benefit of common criteria. An ad hoc process may not capture the real potential each actor in our system offers. Moreover, a piecemeal redefinition of roles risks the further fragmentation of the responsibility for homeland security within local communities, blurring lines of responsibility and accountability for results. While federal, state, and local governments all have roles to play, care must be taken to clarify who is responsible for what so that the public knows whom to contact to address their problems and concerns. 
Current homeland security initiatives provide an opportunity to more systematically identify the unique resources and capacities of each level of government and better match these capabilities to the particular tasks at hand. If implemented in partnership with state and local governments, the national strategy can also promote the participation, input, and buy-in of partners whose cooperation is essential for success. The proposed department, in fulfilling its broad mandate, has the challenge of developing a national performance focus. The national strategy is a good start in defining strategic objectives and related mission areas, plus foundations that cut across the mission areas. The national strategy’s initiatives to implement the objectives under the related mission and foundation areas extend from building capabilities to achieving specific outcomes. According to the national strategy, each department and agency is to be held accountable for its performance on homeland security efforts. However, the strategy often does not provide a baseline set of goals and measures upon which to assess and improve its initiatives to prevent attacks, reduce the nation’s vulnerability to attacks, or minimize the damage and recover from attacks that do occur. For example, the initiative of creating “smart borders” requires a clear specification of what is expected of a smart border, including consideration of security and economic aspects of moving people and goods. Specific performance goals and measures for many initiatives will be developed at a later date. The strategy states that each department or agency will create benchmarks and other performance measures to evaluate progress and allocate future resources. Performance measures will be used to evaluate the effectiveness of each homeland security program, allowing agencies to measure their progress, make resource allocation decisions, and adjust priorities. 
As the national strategy and related implementation plans evolve, we would expect clearer performance expectations to emerge. Given the need for a highly integrated approach to the homeland security challenge, national performance goals and measures may best be developed in a collaborative way involving all levels of government and the private sector. Assessing the capability of state and local governments to respond to catastrophic terrorist attacks is an important feature of the national strategy and of the responsibilities of the proposed new department. The President’s fiscal year 2003 budget proposal acknowledged that our capabilities for responding to a terrorist attack vary widely across the country. The national strategy recognizes the importance of standards and performance measures in areas such as training, equipment, and communications. For example, the national strategy proposes the establishment of national standards for emergency response training and preparedness. These standards would require certain coursework for individuals to receive and maintain certification as first responders and for state and local governments to receive federal grants. Under the strategy, the proposed department would establish a national exercise program designed to educate and evaluate civilian response personnel at all levels of government. It would require individuals and government bodies to successfully complete at least one exercise every year. The department would use these exercises to measure performance and allocate future resources. Standards are being developed in other areas associated with homeland security, yet formidable challenges remain. For example, efforts to develop national standards that would apply to all ports and all public and private facilities are well under way. In preparing to assess security conditions at 55 U.S. ports, the Coast Guard’s contractor has been developing a set of standards since May 2002. 
These standards cover such things as preventing unauthorized persons from accessing sensitive areas, detecting and intercepting intrusions, and checking backgrounds of those whose jobs require access to port facilities. However, challenges remain in finalizing a complete set of standards for the level of security needed in the nation’s ports, resolving issues between key stakeholders that have conflicting or competing interests, and establishing mechanisms for enforcement. Moreover, because security at ports is a concern shared among federal, state, and local governments, as well as among private commercial interests, the issue of who should pay to finance antiterrorism activities may be difficult to resolve. Communications is an example of an area for which standards have not yet been developed, but various emergency managers and other first responders have consistently highlighted the need for such standards. State and local governments often report that there are deficiencies in their communications capabilities, including the lack of interoperable systems. The national strategy recognizes that it is crucial for response personnel to have and use equipment, systems, and procedures that allow them to communicate. Therefore, the strategy calls for the proposed Department of Homeland Security to develop a national communication plan to establish protocols (who needs to talk to whom), processes, and national standards for technology acquisition. According to the national strategy, this is a priority for fiscal year 2003 funding, and all federal grant programs that support state and local purchases of terrorism-related communications equipment will be tied to this communication plan. 
The establishment of specific national goals and measures for homeland security initiatives, including preparedness, will not only go a long way toward assisting state and local entities in determining successes and areas where improvement is needed, but could also serve as a basis for assessing the effectiveness of federal programs. The Administration should take advantage of the Government Performance and Results Act (GPRA) and its performance tools of strategic plans, annual performance plans and measures, and accountability reports for homeland security implementation planning. At the department and agency level, until the new department is operational, GPRA can be a useful tool in developing homeland security implementation plans within and across federal agencies. Given the recent and proposed increases in homeland security funding, as well as the need for real and meaningful improvements in preparedness, establishing clear goals and performance measures is critical to ensuring both a successful and fiscally responsible effort. The choice and design of the policy tools the federal government uses to engage and involve other levels of government and the private sector in enhancing homeland security will have important consequences for performance and accountability. Governments have a variety of policy tools, including grants, regulations, tax incentives, and information-sharing mechanisms, to motivate or mandate other levels of government or the private sector to address security concerns. The choice of policy tools will affect the sustainability of efforts, accountability and flexibility, and the targeting of resources. The design of federal policy will play a vital role in determining success and ensuring that scarce federal dollars are used to achieve critical national goals. 
The national strategy acknowledges the shared responsibility of providing homeland security between federal, state, and local governments, and the private sector and recognizes the importance of using tools of government such as grants, regulations, and information sharing to improve national preparedness. The federal government often uses grants to state and local governments as a means of delivering federal assistance. Categorical grants typically permit funds to be used only for specific, narrowly defined purposes. Block grants typically can be used by state and local governments to support a range of activities aimed at achieving a broad, national purpose and to provide a great deal of discretion to state and local officials. In designing grants, it is important to (1) target the funds to states and localities with the greatest need based on highest risk and lowest capacity to meet these needs from their own resource bases, (2) discourage the replacement of state and local funds with federal funds, commonly referred to as supplantation, with a maintenance-of-effort requirement that recipients maintain their level of previous funding, and (3) strike a balance between accountability and flexibility. At their best, grants can stimulate state and local governments to enhance their preparedness to address the unique threats posed by terrorism. Ideally, grants should stimulate higher levels of preparedness and avoid simply subsidizing local functions that are traditionally state or local responsibilities. One approach used in other areas is the “seed money” model in which federal grants stimulate initial state and local activity with the intent of transferring responsibility for sustaining support over time to state and local governments. Recent funding proposals, such as the $3.5 billion block grant for first responders contained in the president’s fiscal year 2003 budget, have included some of these provisions. 
This grant would be used by state and local governments to purchase equipment; train personnel; and exercise, develop, or enhance response plans. Once the details of the grant have been finalized, it will be useful to examine the design to assess how well the grant will target funds, discourage supplantation, and provide the appropriate balance between accountability and flexibility, and whether it provides temporary “seed money” or represents a long-term funding commitment. Other federal policy tools can also be designed and targeted to elicit a prompt, adequate, and sustainable response. In the area of regulatory authority, the federal, state, and local governments share authority for setting standards through regulations in several areas, including infrastructure and programs vital to preparedness (for example, transportation systems, water systems, and public health). In designing regulations, key considerations include how to provide federal protections, guarantees, or benefits while preserving an appropriate balance between federal and state and local authorities and between the public and private sectors. Regulations have recently been enacted in the area of infrastructure. For example, a new federal mandate requires that local drinking water systems in cities above a certain size provide a vulnerability assessment and a plan to remedy vulnerabilities as part of ongoing EPA reviews, while the Aviation and Transportation Security Act grants the Department of Transportation authority to order deployment of local law enforcement personnel in order to provide perimeter access security at the nation’s airports. In designing a regulatory approach, the challenges include determining who will set the standards and who will implement or enforce them. Several models of shared regulatory authority offer a range of approaches that could be used in designing standards for preparedness. 
Examples of these models range from preemption through fixed federal standards to state and local adoption of voluntary standards formulated by quasi-official or nongovernmental entities. As the administration noted, protecting America’s infrastructure is a shared responsibility of federal, state, and local government, in active partnership with the private sector, which owns approximately 85 percent of our nation’s critical infrastructure. To the extent that private entities will be called upon to improve security over dangerous materials or to protect critical infrastructure, the federal government can use tax incentives to encourage such activities. Tax incentives are the result of special exclusions, exemptions, deductions, credits, deferrals, or tax rates in the federal tax laws. Unlike grants, tax incentives do not generally permit the same degree of federal oversight and targeting, and they are generally available by formula to all potential beneficiaries who satisfy congressionally established criteria. Since the events of September 11th, a task force of mayors and police chiefs has called for a new protocol governing how local law enforcement agencies can assist federal agencies, particularly the FBI. As the U.S. Conference of Mayors noted, a close working partnership of federal and local law enforcement agencies, which includes the sharing of information, will expand and strengthen the nation’s overall ability to prevent and respond to domestic terrorism. The USA Patriot Act provides for greater sharing of information among federal agencies. An expansion of this act has been proposed (S. 1615; H.R. 3285) that would provide for information sharing among federal, state, and local law enforcement agencies. In addition, the Intergovernmental Law Enforcement Information Sharing Act of 2001 (H.R. 3483), which you sponsored, Mr. Chairman, addresses a number of information-sharing needs. 
For instance, the proposed legislation provides that the Attorney General expeditiously grant security clearances to Governors who apply for them and to state and local officials who participate in federal counterterrorism working groups or regional task forces. The national strategy also includes several information-sharing and systems initiatives to facilitate dissemination of information from the federal government to state and local officials. For example, the strategy supports building and sharing law enforcement databases, secure computer networks, secure video teleconferencing capabilities, and more accessible websites. It also states that the federal government will make an effort to remove classified information from some documents to facilitate distribution to more state and local authorities. The recent publication of the national strategy is an important initial step in defining homeland security, setting forth key strategic objectives, and specifying initiatives to implement them. The proposals for the Department of Homeland Security represent recognition by the administration and the Congress that much still needs to be done to improve and enhance the security of the American people and our country’s assets. The proposed department will clearly have a central role in the success of efforts to strengthen homeland security, and will have primary responsibility for many of the initiatives in the national homeland security strategy. Moreover, given the unpredictable characteristics of terrorist threats, it is essential that the strategy be implemented at a national rather than federal level, with specific attention given to the important and distinct roles of state and local governments. Accordingly, decision makers will have to balance the federal approach to promoting homeland security with the unique needs, capabilities, and interests of state and local governments. 
Such an approach offers the best promise for sustaining the level of commitment needed to address the serious threats posed by terrorism. This completes my prepared statement. I would be pleased to respond to any questions you or other Members of the Subcommittee may have. For further information about this testimony, please contact me at (202) 512-9573 or JayEtta Hecker at (202) 512-2834. Other key contributors to this testimony include Matthew Ebert, Thomas James, David Laverny-Rafter, Yvonne Pufahl, Jack Schulze, and Amelia Shachoy. Port Security: Nation Faces Formidable Challenges in Making New Initiatives Successful. GAO-02-993T. Washington, D.C.: August 5, 2002. Aviation Security: Transportation Security Administration Faces Immediate and Long-Term Challenges. GAO-02-971T. Washington, D.C.: July 25, 2002. Homeland Security: Critical Design and Implementation Issues. GAO-02-957T. Washington, D.C.: July 17, 2002. Homeland Security: New Department Could Improve Coordination but Transferring Control of Certain Public Health Programs Raises Concerns. GAO-02-954T. Washington, D.C.: July 16, 2002. Critical Infrastructure Protection: Significant Homeland Security Challenges Need to Be Addressed. GAO-02-918T. Washington, D.C.: July 9, 2002. Homeland Security: New Department Could Improve Biomedical R&D Coordination but May Disrupt Dual-Purpose Efforts. GAO-02-924T. Washington, D.C.: July 9, 2002. Homeland Security: Title III of the Homeland Security Act of 2002. GAO-02-927T. Washington, D.C.: July 9, 2002. Homeland Security: Intergovernmental Coordination and Partnership Will Be Critical to Success. GAO-02-901T. Washington, D.C.: July 3, 2002. Homeland Security: New Department Could Improve Coordination but May Complicate Priority Setting. GAO-02-893T. Washington, D.C.: June 28, 2002. Homeland Security: New Department Could Improve Coordination but May Complicate Public Health Priority Setting. GAO-02-883T. Washington, D.C.: June 25, 2002. 
Homeland Security: Proposal for Cabinet Agency Has Merit, But Implementation Will Be Pivotal to Success. GAO-02-886T. Washington, D.C.: June 25, 2002. Homeland Security: Key Elements to Unify Efforts Are Underway but Uncertainty Remains. GAO-02-610. Washington, D.C.: June 7, 2002. National Preparedness: Integrating New and Existing Technology and Information Sharing into an Effective Homeland Security Strategy. GAO-02-811T. Washington, D.C.: June 7, 2002. Homeland Security: Integration of Federal, State, Local, and Private Sector Efforts Is Critical to an Effective National Strategy for Homeland Security. GAO-02-621T. Washington, D.C.: April 11, 2002. Combating Terrorism: Enhancing Partnerships Through a National Preparedness Strategy. GAO-02-549T. Washington, D.C.: March 28, 2002. Homeland Security: Progress Made, More Direction and Partnership Sought. GAO-02-490T. Washington, D.C.: March 12, 2002. Homeland Security: Challenges and Strategies in Addressing Short- and Long-Term National Needs. GAO-02-160T. Washington, D.C.: November 7, 2001. Homeland Security: A Risk Management Approach Can Guide Preparedness Efforts. GAO-02-208T. Washington, D.C.: October 31, 2001. Homeland Security: Need to Consider VA’s Role in Strengthening Federal Preparedness. GAO-02-145T. Washington, D.C.: October 15, 2001. Homeland Security: Key Elements of a Risk Management Approach. GAO-02-150T. Washington, D.C.: October 12, 2001. Homeland Security: A Framework for Addressing the Nation’s Issues. GAO-01-1158T. Washington, D.C.: September 21, 2001. Combating Terrorism: Intergovernmental Cooperation in the Development of a National Strategy to Enhance State and Local Preparedness. GAO-02-550T. Washington, D.C.: April 2, 2002. Combating Terrorism: Enhancing Partnerships Through a National Preparedness Strategy. GAO-02-549T. Washington, D.C.: March 28, 2002. Combating Terrorism: Critical Components of a National Strategy to Enhance State and Local Preparedness. GAO-02-548T. 
Washington, D.C.: March 25, 2002. Combating Terrorism: Intergovernmental Partnership in a National Strategy to Enhance State and Local Preparedness. GAO-02-547T. Washington, D.C.: March 22, 2002. Combating Terrorism: Key Aspects of a National Strategy to Enhance State and Local Preparedness. GAO-02-473T. Washington, D.C.: March 1, 2002. Combating Terrorism: Considerations for Investing Resources in Chemical and Biological Preparedness. GAO-01-162T. Washington, D.C.: October 17, 2001. Combating Terrorism: Selected Challenges and Related Recommendations. GAO-01-822. Washington, D.C.: September 20, 2001. Combating Terrorism: Actions Needed to Improve DOD’s Antiterrorism Program Implementation and Management. GAO-01-909. Washington, D.C.: September 19, 2001. Combating Terrorism: Comments on H.R. 525 to Create a President’s Council on Domestic Preparedness. GAO-01-555T. Washington, D.C.: May 9, 2001. Combating Terrorism: Observations on Options to Improve the Federal Response. GAO-01-660T. Washington, D.C.: April 24, 2001. Combating Terrorism: Comments on Counterterrorism Leadership and National Strategy. GAO-01-556T. Washington, D.C.: March 27, 2001. Combating Terrorism: FEMA Continues to Make Progress in Coordinating Preparedness and Response. GAO-01-15. Washington, D.C.: March 20, 2001. Combating Terrorism: Federal Response Teams Provide Varied Capabilities; Opportunities Remain to Improve Coordination. GAO-01-14. Washington, D.C.: November 30, 2000. Combating Terrorism: Need to Eliminate Duplicate Federal Weapons of Mass Destruction Training. GAO/NSIAD-00-64. Washington, D.C.: March 21, 2000. Combating Terrorism: Observations on the Threat of Chemical and Biological Terrorism. GAO/T-NSIAD-00-50. Washington, D.C.: October 20, 1999. Combating Terrorism: Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attack. GAO/NSIAD-99-163. Washington, D.C.: September 7, 1999. Combating Terrorism: Observations on Growth in Federal Programs. 
GAO/T-NSIAD-99-181. Washington, D.C.: June 9, 1999. Combating Terrorism: Analysis of Potential Emergency Response Equipment and Sustainment Costs. GAO/NSIAD-99-151. Washington, D.C.: June 9, 1999. Combating Terrorism: Use of National Guard Response Teams Is Unclear. GAO/NSIAD-99-110. Washington, D.C.: May 21, 1999. Combating Terrorism: Observations on Federal Spending to Combat Terrorism. GAO/T-NSIAD/GGD-99-107. Washington, D.C.: March 11, 1999. Combating Terrorism: Opportunities to Improve Domestic Preparedness Program Focus and Efficiency. GAO/NSIAD-99-3. Washington, D.C.: November 12, 1998. Combating Terrorism: Observations on the Nunn-Lugar-Domenici Domestic Preparedness Program. GAO/T-NSIAD-99-16. Washington, D.C.: October 2, 1998. Combating Terrorism: Threat and Risk Assessments Can Help Prioritize and Target Program Investments. GAO/NSIAD-98-74. Washington, D.C.: April 9, 1998. Combating Terrorism: Spending on Governmentwide Programs Requires Better Management and Coordination. GAO/NSIAD-98-39. Washington, D.C.: December 1, 1997. Homeland Security: New Department Could Improve Coordination but May Complicate Public Health Priority Setting. GAO-02-883T. Washington, D.C.: June 25, 2002. Bioterrorism: The Centers for Disease Control and Prevention’s Role in Public Health Protection. GAO-02-235T. Washington, D.C.: November 15, 2001. Bioterrorism: Review of Public Health and Medical Preparedness. GAO-02-149T. Washington, D.C.: October 10, 2001. Bioterrorism: Public Health and Medical Preparedness. GAO-02-141T. Washington, D.C.: October 10, 2001. Bioterrorism: Coordination and Preparedness. GAO-02-129T. Washington, D.C.: October 5, 2001. Bioterrorism: Federal Research and Preparedness Activities. GAO-01-915. Washington, D.C.: September 28, 2001. Chemical and Biological Defense: Improved Risk Assessments and Inventory Management Are Needed. GAO-01-667. Washington, D.C.: September 28, 2001. West Nile Virus Outbreak: Lessons for Public Health Preparedness. 
GAO/HEHS-00-180. Washington, D.C.: September 11, 2000. Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attacks. GAO/NSIAD-99-163. Washington, D.C.: September 7, 1999. Chemical and Biological Defense: Program Planning and Evaluation Should Follow Results Act Framework. GAO/NSIAD-99-159. Washington, D.C.: August 16, 1999. Combating Terrorism: Observations on Biological Terrorism and Public Health Initiatives. GAO/T-NSIAD-99-112. Washington, D.C.: March 16, 1999. Disaster Assistance: Improvement Needed in Disaster Declaration Criteria and Eligibility Assurance Procedures. GAO-01-837. Washington, D.C.: August 31, 2001. FEMA and Army Must Be Proactive in Preparing States for Emergencies. GAO-01-850. Washington, D.C.: August 13, 2001. | The challenges posed by homeland security exceed the capacity and authority of any one level of government. Protecting the nation against these threats calls for a truly integrated approach, bringing together the resources of all levels of government. The proposed Department of Homeland Security will clearly have a central role in efforts to enhance homeland security. The proposed consolidation of homeland security programs has the potential to reduce fragmentation, improve coordination, and clarify roles and responsibilities. Realistically, the challenges that the new department faces will clearly require substantial time and effort, and it will take additional resources to make it effective. Moreover, formation of a department should not be considered a replacement for the timely issuance of a national homeland security strategy to guide implementation of the complex mission of the department. Appropriate roles and responsibilities within and between the levels of government and with the private sector are evolving and need to be clarified. 
New threats are prompting a reassessment and shifting of long-standing roles and responsibilities, but these shifts are being considered on a piecemeal basis without benefit of an overarching framework and criteria to guide the process. A national strategy could provide such guidance by more systematically identifying the unique capacities and resources of each level of government to enhance homeland security and by providing increased accountability within the intergovernmental system. The nation does not yet have performance goals and measures for assessing and improving preparedness, or common criteria that can demonstrate success, promote accountability, and identify areas where additional resources are needed, such as improving communications and equipment interoperability. A careful choice of the most appropriate tools is critical to achieve and sustain national goals. The choice and design of policy tools, such as grants, regulations, and tax incentives, can enhance the capacity of all levels of government to target areas of highest risk and greatest need, promote shared responsibilities, and track progress toward achieving preparedness goals. |
The Troops to Teachers program is a federal program that began operations in 1994 with two goals: (1) to help military personnel affected by downsizing become teachers and (2) to ease the teacher shortage, especially in math and science and in areas with concentrations of children from low-income families. The program offers information on state teacher certification requirements and job referral and job placement assistance to active and former military personnel who are interested in pursuing teaching as a second career after leaving the military. According to TTT program data, military officers represent a major participant group. During 1994 and 1995, the program also offered financial incentives to military personnel and school districts to participate in the program. Participants who received stipends of up to $5,000 and became certified were required to teach for 5 years. School districts could receive grants of up to $50,000 paid over 5 years for each TTT participant they hired. The program stopped awarding new stipends and grants after 1995 when funds were no longer appropriated for this purpose. The program is administered by DoD’s Defense Activity for Non-Traditional Education Support (DANTES). DANTES and 24 state TTT offices carry out the program’s efforts to ease former military personnel into teaching. (See fig. 1.) States voluntarily join the TTT program. States that wish to join submit proposals to DANTES describing the services they plan to provide and the activities in which they plan to engage to achieve the TTT program goals. If the proposal is approved, DANTES signs a memorandum of agreement with the state agency responsible for the TTT program, most often the state’s department of education. DANTES provides funds for state program expenses, although the state TTT representatives are not federal employees. 
From fiscal year 1994 through 2000, DANTES spent $5.5 million on program administration and provided states with a total of $12.1 million to operate their TTT offices, according to program officials. States that joined the program have had a great deal of flexibility in how they operate the TTT program in their state. State offices determine their own organizational structure, the amount of resources they will devote to the program, and the services they will provide. Sixteen states had joined TTT by 1995 and 8 more joined between 1998 and 2000. DANTES and state TTT offices operate as a network to provide services to military personnel interested in becoming teachers. As part of this network, DANTES serves the following functions: Acting as the central liaison for all the military services and the state education offices and promoting the program at a national level. Approving and monitoring the memorandums of agreement. Working with the states to share recruitment practices. Maintaining the TTT program web site with links to state offices. Facilitating the transition from military life to teaching in the 26 states and the District of Columbia without TTT placement assistance offices. Monitoring the teaching commitments of the people who received stipends and any school districts that received grants on behalf of persons who applied to the TTT program during 1994 and 1995. For their part, most state offices provide a broad range of services, including providing personalized counseling and advice to those who wish to teach; promoting the TTT program to school districts and the military; promoting military personnel as potential teachers; maintaining an 800 number and the state link on the TTT web site with information and school district openings; and working to lessen the costs and time required for military personnel to obtain certification. The environment in which TTT functions has changed in ways that have implications for the program’s future operations. 
In 1998, the military downsizing leveled off, essentially removing the first goal of the TTT program. DANTES’ responsibility for monitoring the teaching commitments of those who received stipends and grants between 1994 and 1995 will end in a few years. Thirteen additional states have contacted DANTES and are waiting to join the program, either independently or as a consortium. The Congress appropriated $3 million for TTT in fiscal year 2001 under the Eisenhower Professional Development Program, placing TTT within Education’s broader initiative to support teacher recruitment. The Eisenhower Program also provides additional funds in grants to states and/or organizations that wish to develop new avenues for attracting teachers, especially second-career teachers. The President’s 2002 budget proposes to support and expand TTT activities through the Transition to Teaching program. The $30 million budget proposed for Transition to Teaching would assist nonmilitary as well as military professionals with becoming teachers. According to TTT records, 3,821 of the 13,756 people accepted into the program were hired as teachers from fiscal years 1994 through 2000. However, this number probably underrepresents the number of people who have used program services and become teachers. Of those participants hired as teachers, over 90 percent remained in teaching past the first year. TTT program records show that 17,459 people applied to the program from fiscal years 1994 through 2000 and, of these, 13,756 were accepted into the program. Of these participants, 3,821, or 28 percent, became teachers. (See table 1.) More than 85 percent of the TTT teachers were hired in states with TTT offices. While no formal documentation was maintained on the reasons why the 8,554 accepted applicants who did not become teachers withdrew, the TTT program director provided several explanations. 
For instance, some military personnel said they had found a better paying job, some realized that they would not like teaching, and others thought the cost and time of the alternative certification process was onerous. It is difficult to ascertain the full extent of TTT program participation, because program data are incomplete. When the stipends and incentive grants ended after 1995, it became difficult to track the number of people using the program’s resources because they were less inclined to complete application forms and respond to surveys that tracked program retention. In addition, with the creation of the TTT web site, people could access information they needed to find certification programs and teaching positions and do so without applying to the program. Consequently, the number of people who used the program to become teachers is probably understated. DANTES officials told us that they believe their numbers undercount the total number of teachers hired as a result of the TTT program. Similarly, some state TTT officials said that DANTES records may substantially undercount the number of former military personnel they have placed in teaching positions. Six of the 10 state TTT officials that we contacted said this was the case, but only 4 states—Colorado, Mississippi, South Carolina, and Texas—kept records with additional information on military persons whom they placed in teaching positions whether or not they completed a TTT program application. Table 2 shows the difference between DANTES’ records and state records for the number of teachers hired within these states. Available TTT program data also show that over 90 percent of TTT teachers remained in teaching after their first year. The percent of TTT teachers who remain in teaching for at least 3 years is about the same as that for all teachers nationwide, and the percent of TTT teachers that remain for 5 years is markedly better. (See table 3.) 
However, these retention rates should be considered in light of the fact that TTT teachers who received stipends had to teach for 5 years to pay off their financial commitment. In addition, these data are based solely on teachers who received funding (2,135) and do not include those who did not. However, a TTT program survey done in 1999 of school districts that hired TTT teachers—including those who completed applications and follow-up surveys but did not receive funding—showed similar results. According to TTT program records and NCEI survey data, a higher percentage of TTT teachers overall taught math, science, special education, and vocational education and taught in inner city schools and high schools than all teachers nationwide. (See table 4.) For example, 20 percent of TTT teachers compared with 5 percent of teachers nationwide taught general special education. Also, a higher percentage of TTT teachers are male (86 percent) and minority (33 percent) than the national percentages (26 percent and 11 percent, respectively). Many states that joined the TTT program said that they did so because the program would enable them to fill positions in subjects or geographic areas in which they had shortages, especially in math, science, special education, and vocational education and in inner city schools. They also cited the program’s potential for increasing the diversity of their teacher workforces; some specifically mentioned male and minority teachers as a factor in their decisions to join the TTT program. Several factors may have affected—both positively and negatively—the number of military personnel applying to the TTT program and the number hired as teachers. The positive factors were (1) the TTT stipends, (2) the TTT incentive grants, (3) the increased demand for teachers, and (4) accomplishments of state TTT offices. 
The negative factors were (1) increased demands for specialized workers, (2) economic growth, and (3) a reduction in the number of officers leaving the military. The following factors may have increased the number of TTT applicants and/or teachers hired. Stipends. During the first 2 years of the program, stipends lowered the cost of obtaining teacher certification for TTT participants. In a DANTES survey of TTT teachers who had completed their 5-year teaching commitment for receiving the stipend, 59 percent reported that the TTT program was very important in making their decision to become a teacher, and 68 percent reported that the stipend was the most important feature of the TTT program. Incentive grants. During the first 2 years of the program, TTT incentive grants lowered the cost to school districts of hiring TTT teachers relative to other job candidates, thereby increasing the demand for TTT teachers. The increased probability of being hired would have made the program more attractive to applicants. Demand for teachers. Education data show that teacher shortages became more widespread in 1998; thus, the demand for teachers expanded and intensified. The increased likelihood of employment for TTT teachers after certification could have increased the number of applicants to the program. Accomplishments of state TTT offices. State TTT offices have experienced some success in decreasing the time and cost of teacher certification for military personnel and in increasing the demand among school districts for TTT hires. Both of these accomplishments probably made the program more attractive to potential applicants. More alternative teacher certification programs are available to persons pursuing second careers as teachers, including military personnel, sometimes as a direct result of the TTT program. 
For example, the Florida, Wisconsin, and Washington state TTT offices played roles in convincing their state legislatures in 2000 to authorize new alternative teacher certification programs. Some state TTT offices, working with DANTES, created opportunities for military personnel to satisfy some teacher certification requirements while still on active duty. For example, the Texas TTT office, working in conjunction with three Texas universities, implemented a distance learning program in the fall of 2000, offering teacher certification classes at military bases worldwide. Texas also worked with DANTES to make its teacher certification examination available at military bases worldwide. Some states lowered the cost of teacher certification for military personnel in response to the efforts of their state TTT office. For example, California and Washington reduced the fees they charged military personnel to take courses at state universities. Outreach and promotional activities by state TTT offices increased school districts’ demand for TTT hires. For example, the Colorado, Illinois, North Carolina, and Ohio TTT offices increased the number of school districts that posted their teacher vacancies on the TTT database. The following factors may have decreased the number of TTT applicants and/or teachers hired. Demand for specialized workers. A nationwide increase in demand for workers with math/science backgrounds, especially in information technology and the sciences, which generally pay higher salaries than teaching, may have attracted military personnel with these skills away from pursuing a teaching career. Between 1994 and 1999, the number of workers employed in the mathematical and computer sciences increased by almost 56 percent while total employment increased by about 8.5 percent. Economic growth. The general growth in the economy in the 1990s increased the number of alternative job opportunities for those leaving the military. 
An important indicator of economic growth and the demand for labor is the unemployment rate. The greater the economic growth, the greater the demand for labor and the lower the unemployment rate. Between 1994 and 1999, the unemployment rate declined from 6.1 percent to 4.2 percent. Reduction in supply of applicants. The number of retired commissioned officers, warrant officers, and high-graded noncommissioned officers declined from 34,335 to 26,612 between 1994 and 1999. This group comprised 76 percent of all TTT applicants during this period. The TTT program is currently functioning in an environment that differs greatly from when it began 7 years ago. Its first purpose, to place military persons affected by downsizing initiatives in the classroom, has essentially been eliminated, while its second purpose, to address teacher shortages, has become a more critical national issue. Also, the transition to teaching from a different profession has become easier in many states through new or expanded alternative teacher certification programs. With the recent transfer of TTT from DoD to Education, it is too early to determine how TTT will fit into Education’s mission and its broader teacher recruitment and retention initiatives. However, this new environment presents opportunities for Education to explore how best to coordinate the TTT program with other education programs to address the nation’s growing teacher shortage problem. We provided Education and DoD with a draft of this report for review, and both agencies provided comments via e-mail. Education noted that it has other programs to increase the number of qualified teachers, including the Transition to Teaching and Eisenhower Professional Development programs, and that the information in the report will be valuable as the Department continues to explore ways that these programs can collaborate and strengthen services. DoD said that it has reviewed the report and accepted the report’s conclusions. 
We are sending copies of this report to the Honorable Roderick R. Paige, Secretary of the Department of Education, and other interested parties. We will also make copies available to others on request. If you or your staffs have any questions about this report, please contact me on (202) 512-7215 or Karen Whiten at (202) 512-7291. Key contributors to this report were Mary Roy, Ellen Habenicht, Richard Kelley, Barbara Smith, and Patrick DiBattista. | In response to a shortage of math and science teachers and reductions in U.S. military personnel, Congress created the Troops to Teachers (TTT) program in 1992. Until 1995, the program, which was run by the Defense Department, offered stipends to program participants and incentive grants to school districts to hire TTT teachers. Congress transferred the program from DOD to the Department of Education in 1999. This report reviews the program from its beginning in January 1994 until its transfer to Education. GAO found that 13,756 former military personnel applied to the program and were accepted. Of these, 3,821 were hired as teachers from 1994 through 2000; more than 90 percent of those applicants hired as teachers remained in teaching after the first year. However, these participation figures most likely represent the minimum number of former military personnel who used the program's services and became teachers because the figures include only those persons who formally applied to the TTT program and who completed follow-up surveys. Compared with all teachers nationwide, a higher percentage of TTT teachers overall taught math, science, special education, and vocational education and taught in inner city schools and high schools. Factors such as stipends, incentive grants, economic conditions, and state initiatives may have influenced the number of people who applied to the program and became teachers. |
Under the U.S. Housing Act of 1937, as amended, Congress created the public housing program to provide decent and safe rental housing for eligible low-income families, the elderly, and persons with disabilities. HUD provides subsidies for operating and maintaining public housing units through the Operating Fund. HUD also provides funds to modernize and develop public housing units through the Capital Fund. Public housing agencies administer these formula grant programs on HUD’s behalf. In using these funds, the agencies are responsible for ensuring that the housing is affordable to eligible low-income households. In 1992, Congress established the Urban Revitalization Demonstration Program, commonly known as HOPE VI, which provides grants to housing agencies to rehabilitate or rebuild severely distressed public housing. The Single Audit Act, as amended, requires state and local governments and nonprofit organizations that expend $500,000 or more in federal awards in a fiscal year to have either a single audit or a program-specific audit. Under a single audit, the auditor must report its opinion on the presentation of the entity’s financial statements and schedule of federal expenditures, and on compliance with applicable laws, regulations, and provisions of contracts or grant agreements that could have a direct and material effect on the financial statements and, when applicable, on any major program of the audited entity. The auditor must also report the results of its review and testing of internal control related to the financial statements and major programs as well as a schedule of any audit findings and questioned costs. Public housing agencies subject to the act must submit audit reports to HUD and to the Federal Audit Clearinghouse. The single audit is a key assurance process that HUD uses in its oversight of public housing agencies. 
Single auditors are required by the act to identify and test programs for compliance with specific program requirements and report identified findings (or the lack thereof), which can include findings of inappropriate use and mismanagement of funds. HUD’s Real Estate Assessment Center (REAC), which is responsible for reviewing housing agencies’ financial data, began tracking the status of single audit findings in 2007 using the Monitoring and Planning System (MAPS) Audit Tracking Module. Through this tracking system, HUD monitors the resolution of audit findings. HUD maintains an additional system, the Audit Resolution and Corrective Action Tracking System (ARCATS), to monitor the status of management and financial findings and recommendations issued by our office or by OIG. HUD’s Public Housing Assessment System (PHAS) is another key oversight process. While single audits may identify specific instances of inappropriate use or mismanagement of public housing funds, PHAS was developed to evaluate the overall condition of housing agencies and measure performance in major operational areas of the public housing program. These include financial condition, management operations, and physical condition of the housing agencies’ public housing programs as well as resident satisfaction with the programs (see fig. 1). The Financial Assessment Subsystem (FASS) component score is based on six financial data analyses that HUD has determined to be key in evaluating housing agencies’ financial condition. 
These include the following:

- the current ratio, which measures the housing agency's ability to cover its short-term obligations;
- the months expendable funds balance ratio, which measures the housing agency's reserves for unexpected expenses;
- tenant receivables outstanding, which measures how well the housing agency manages rent collections;
- occupancy loss, which measures how well the housing agency maximizes its revenue by renting out vacant units;
- expense management, which measures whether the housing agency has adequate cost controls to manage expenses; and
- net income, which measures whether the housing agency is spending more than it makes.

Housing agencies that score below 60 percent either in the overall PHAS score (fewer than 60 of 100 points) or in any one of the major subcomponents, including FASS (fewer than 18 of 30 points), are designated as substandard performers, also known as troubled. For example, points may be deducted from a housing agency's FASS score if the six financial analyses indicate that the agency is experiencing overall financial difficulties that may threaten its stability. According to HUD officials, although housing agencies' financial data are usually audited, REAC analysts also review the data to ensure the accuracy of the information used to calculate FASS scores. For example, REAC analysts examine specific line items in the financial data, investigate changes or discrepancies in amounts reported, and review auditors' notes. HUD field offices establish mechanisms to identify and correct deficiencies if a housing agency is designated as a troubled performer in either the PHAS overall score or the FASS subcomponent. Such mechanisms may include the development of an improvement plan or a memorandum of agreement.
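The scoring thresholds just described lend themselves to a simple illustration. The sketch below is hypothetical Python, not HUD's actual FASS algorithm: the 60-of-100 and 18-of-30 cutoffs come from this report, but the ratio formulas and field names (such as `current_assets` and `tenant_revenue`) are assumptions made purely for illustration.

```python
# Hypothetical sketch of the troubled-performer screen described above.
# The point cutoffs are from the report; the ratio formulas below are
# illustrative approximations, not HUD's actual scoring methodology.

def fass_ratios(fd):
    """Compute the six financial analyses from a dict of financial-data items."""
    return {
        # ability to cover short-term obligations
        "current_ratio": fd["current_assets"] / fd["current_liabilities"],
        # reserves for unexpected expenses, in months of spending
        "months_expendable_funds": fd["expendable_funds"] / (fd["annual_expenses"] / 12),
        # days of rent revenue tied up in uncollected tenant receivables
        "tenant_receivables_outstanding": fd["tenant_receivables"] / (fd["tenant_revenue"] / 365),
        # share of unit-months lost to vacancy
        "occupancy_loss": fd["vacant_unit_months"] / fd["total_unit_months"],
        # cost controls: operating expense per unit
        "expense_management": fd["operating_expenses"] / fd["units"],
        # whether the agency spends more than it makes
        "net_income": fd["revenue"] - fd["expenses"],
    }

def is_troubled(phas_overall, fass_score):
    """Substandard ("troubled") if overall PHAS < 60/100 or FASS < 18/30."""
    return phas_overall < 60 or fass_score < 18
```

Under this reading, an agency can fail on the FASS subcomponent alone even with an otherwise passing overall score, which is what makes the subcomponent cutoff a distinct trigger for field-office intervention.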
If technical assistance and sanctions fail to result in significant improvement within a year, the housing agency is referred to the HUD Enforcement Center, which may institute proceedings to place the agency in receivership and remove failed agency management. Receiverships generally result from long-standing, severe, and persistent management problems that have led to the deterioration of the public housing stock. Single audits, HUD’s primary tool for overseeing the use of public housing funds, are intended to, among other things, promote sound financial management by serving as a key accountability mechanism in the oversight and monitoring of recipients’ use of federal funds. Single audits provide federal agencies with information on the use of federal funds, internal control deficiencies, and compliance with federal program requirements. The Single Audit Act that mandated the audits does not require the auditors to perform procedures that focus specifically on all federal programs that housing agencies administer. Instead, auditors are to select and closely audit “major” programs administered by the recipient of the funds—typically large or risky programs—for compliance with specific program requirements, including the appropriate use of funds. In addition to single audits, OIG selects various housing agencies to audit as part of its oversight of HUD’s public housing program. HUD’s strategic goals for its public housing program call for the department to resolve issues identified by these audits and improve its management of internal controls to, among other things, eliminate fraud, waste, and abuse. However, both HUD’s quality assurance office and a recent study conducted by PCIE have identified problems with single audits of housing agencies. OIG also identified problems with single audits after conducting detailed reviews of housing agency operations as part of its oversight process. 
Both OIG's audits and quality assurance reviews by HUD have, in some cases, resulted in disciplinary action for single auditors. HUD quality control reviews cover housing agency audits as well as audits of other entities that receive HUD financial assistance. According to data from HUD's Quality Assurance Subsystem, 52 of the 247 quality assurance reviews it conducted between 2000 and 2008 resulted in referrals for firms auditing entities receiving HUD funding. According to HUD, many of these included public housing agency audits. Examples of problem public housing agency audits include the following:

- Single audits for the Miami-Dade Housing Authority (MDHA) did not identify significant instances of inappropriate use and mismanagement of funds. In 2006, a media investigation provided extensive coverage of problems with public housing funds at the agency. HUD stated that MDHA's single audits should have alerted the department to these problems, but they did not. Instead, HUD stated that it learned about the allegations of misuse and mismanagement of funds from the Miami Herald newspaper in 2006. In response to these allegations, HUD ordered a detailed review and a new single audit of the housing agency that found serious and pervasive financial and management problems, including deficiencies in financial management, mismanagement of development funds, and several apparent conflicts of interest. MDHA went into receivership in 2007, with HUD taking possession and control of all of MDHA's activities, including public housing.

- In 2007, OIG found that the Dallas Housing Authority had inaccurate, unreliable, and altered records. Further, OIG noted that the firm conducting the 2006 single audit had failed to meet professional auditing standards. State authorities took disciplinary action against the auditing firm in 2007.
According to HUD, the Dallas Housing Authority engaged this firm after it had removed an earlier auditing firm that had identified problems with the housing agency's financial management. Also in June 2007, PCIE issued its Report on National Single Audit Sampling Project, which concluded that there were problems with audit quality that needed to be addressed and made recommendations. Specifically, PCIE reviewed a nationwide sample of 208 out of more than 38,000 single audits performed on various grant recipients that were submitted for the period between April 1, 2003, and March 31, 2004. According to the HUD OIG, the PCIE sample included single audits of 11 housing agencies, of which 6 were determined to be unacceptable, 1 was found to have limited reliability, and 4 were determined to be acceptable, but with deficiencies. In 2007 testimony at a congressional hearing on the PCIE study, we noted that problems with the quality of single audits were unacceptable and that we were concerned that audits were not being conducted in accordance with professional standards and requirements. We also noted that such audits could mislead users of audit reports, causing them to incorrectly conclude that agencies were in compliance with program requirements or did not have weaknesses in internal controls when in fact such problems might exist but had gone undetected. However, we also noted continued support for single audits as a key oversight mechanism over federal awards. Compounding the problems with the quality of audits is the fact that many housing agencies' public housing programs may not have to undergo the detailed major program compliance testing that could potentially uncover inappropriate use and mismanagement of public housing funds.
Only entities expending $500,000 or more in federal funding annually are required to receive single audits that assess compliance with specific program requirements and associated internal controls that have a direct and material effect on each major program. Furthermore, even when housing agencies receive single audits, not all public housing programs will be designated as major programs by independent public accountants conducting the single audit. For example, although about 76 percent of operating fund dollars were designated by the housing agencies’ independent auditors as major in 2006, less than a third of the housing agencies submitting approved financial data to HUD for that year underwent a single audit and had their operating funds audited as major programs. Without compensating monitoring tools, gaps in coverage at many housing agencies in any one year could allow emerging problems to go undetected and unreported by single audits in a timely manner. HUD has attempted to improve the quality of these audits but has faced significant challenges. As noted above, HUD performs quality assurance reviews of the independent public auditors who perform the audits. In some cases, these quality assurance reviews have led to investigations and actions against substandard auditors. HUD officials said that the agency could pursue debarment or suspension of poorly performing auditors but noted that such remedies were costly and time-consuming. Further, efforts to improve audit quality will not address gaps in coverage that leave some housing agencies and programs unaudited for more than a year. In light of these limitations, fully leveraging other useful information becomes key to ensuring more consistent monitoring of public housing funds. Although the quality and coverage of single audits can be problematic, the single audit continues to be an important oversight mechanism. 
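The coverage gap described above can be illustrated with a small screen. This is a hedged sketch: the $500,000 single-audit threshold is as described in this report, but the record layout (`federal_expenditures`, `operating_fund_major`) is a hypothetical simplification, not HUD's actual data schema.

```python
# Flag agencies whose public housing operating fund would escape
# major-program compliance testing in a given year, either because the
# agency falls below the Single Audit Act expenditure threshold or
# because the auditor did not designate the operating fund as major.
SINGLE_AUDIT_THRESHOLD = 500_000  # annual federal expenditures, per the act

def coverage_gaps(agencies):
    """Return names of agencies with no major-program operating fund audit."""
    gaps = []
    for a in agencies:
        audited = a["federal_expenditures"] >= SINGLE_AUDIT_THRESHOLD
        if not audited or not a.get("operating_fund_major", False):
            gaps.append(a["name"])
    return gaps
```

Agencies appearing in the resulting list would be candidates for the compensating monitoring tools the report discusses, since their operating funds could go unexamined for a year or more.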
Yet HUD has not fully leveraged the results of audits to understand emerging or persistent problems at housing agencies. Understanding commonly occurring audit findings could be useful for identifying housing agencies that are at greater risk of inappropriate use or mismanagement of public housing funds and for assessing vulnerabilities in HUD's oversight processes. In 2002, we reported that summarizing information about the results of single audits and identifying commonly occurring issues could be valuable in helping management evaluate agency oversight, monitor activities, and identify problem areas. Further, the Domestic Working Group's Guide to Opportunities for Improving Grant Accountability states that agencies can summarize the results of internal and external audits for program managers to help identify problems with grantees' financial management and program operations. For example, officials from one federal grant program within the Department of Transportation told us that they had taken such an approach, summarizing the results of audits to identify recurring issues and direct their oversight policies. These officials stated that their analysis had identified procurement-related problems as a recurring finding, which led staff to conduct grantee workshops and provide technical assistance on federal procurement requirements. HUD does track the resolution of individual audit findings for each housing agency. Specifically, HUD uses ARCATS to track the resolution of the HUD Inspector General's audit findings. In addition, HUD recently developed MAPS to track the resolution of single audit findings.
However, on the basis of our discussions with HUD officials, HUD has not used either ARCATS or MAPS to summarize and systematically evaluate the results of audits to understand problems that may commonly occur at multiple housing agencies, identify programwide problems of inappropriate use and mismanagement of public housing funds, detect emerging issues, or address possible vulnerabilities in its oversight processes. HUD's systems contain data that categorize the findings of single audits in a manner that would allow the agency to identify commonly occurring problems across housing agencies. Yet, on the basis of our discussions with HUD, the department has not used these data for this purpose. We conducted an analysis of a sample of 81 OIG and 56 single audit reports of housing agencies from 2002 through 2007 and categorized the 526 audit findings we identified into major categories of inappropriate use and mismanagement of public housing funds that we developed. Examples of these categories and the number of findings in our sample included:

- accounting issues, including internal control and documentation findings (150);
- inappropriate transfer of operating funds to other HUD programs and non-HUD entities, including affiliated organizations, that were not used for public housing purposes (98); and
- other management issues (41).

We also noted that for 14 housing agencies where audits identified commonly occurring problems with internal control, documentation, and other management issues, 27 of the findings reported involved management of cash resources. These findings included inadequate separation of duties, fraudulent check writing, and theft of cash. Such a systematic evaluation of audit results could be useful for program managers, helping them to understand commonly occurring problems, identify and monitor emerging issues, and address limitations in HUD's overall monitoring and oversight processes.
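The cross-agency summarization that the report says HUD has not performed amounts to tallying categorized findings. A minimal sketch, assuming findings have already been tagged with categories like those we developed; the agency names and data below are illustrative only, not drawn from any actual tracking system.

```python
from collections import Counter

# (agency, finding category) pairs, as they might be exported from a
# tracking system such as MAPS or ARCATS; values here are illustrative.
findings = [
    ("Agency A", "accounting/internal control"),
    ("Agency A", "inappropriate transfer of operating funds"),
    ("Agency B", "accounting/internal control"),
    ("Agency B", "other management issues"),
    ("Agency C", "inappropriate transfer of operating funds"),
]

# Tally findings by category across all agencies to surface the most
# commonly occurring problems.
by_category = Counter(category for _agency, category in findings)
for category, count in by_category.most_common():
    print(f"{category}: {count}")
```

A summary of this kind, updated as audits are resolved, is the sort of output that could be disseminated to field offices and auditors to flag persistent or emerging problem areas.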
Such information could also be useful for HUD field offices, housing agency auditors, and the housing agencies themselves by alerting them to persistent or emerging problems of potential inappropriate use and mismanagement of public housing funds. HUD public housing managers in headquarters and field offices stated that summarized results of audits would aid in helping them carry out their oversight activities. Not effectively using readily available information on the results of audits will continue to hamper these managers’ efforts to monitor housing agencies for misuse and mismanagement of funds. PHAS and HUD’s internal analyses of data from housing agencies’ financial data could be better focused to identify housing agencies that are at greater risk of potential inappropriate uses or mismanagement of public housing funds. As a result, these oversight processes may not alert HUD to potential problems and allow for timely monitoring and additional oversight activities that may be warranted. PHAS primarily assesses data on management operations and the financial health and physical condition of the housing agencies’ public housing programs and alerts HUD to potentially troubled agencies. For example, as noted earlier, PHAS was developed to alert HUD to liquidity problems at housing agencies. However, PHAS was never intended to identify individual instances of inappropriate use of public housing funds, and some potential indicators of liquidity problems may not be detected by PHAS. As a result, housing agencies have continued to receive passing PHAS scores even when their financial data may indicate that the housing agency is at greater risk of inappropriately using funds or experiencing serious financial difficulties. HUD has stated that it would primarily rely on single audits to identify such problems. 
HUD's policies state that the agency conducts analyses of financial data to help improve public housing agencies' financial health and provide guidance in identifying possible fraud, waste, and abuse. However, HUD told us that it primarily used its analysis of housing agencies' financial statements to ensure the mathematical reasonableness and completeness of the financial data used to calculate housing agencies' PHAS scores. Yet when we analyzed housing agencies' financial data, we found that many of these agencies showed signs that they may be at greater risk of inappropriately using or mismanaging public housing funds, even though the agencies in question received passing PHAS scores. Our analysis of data from housing agencies' financial statements indicates that many housing agencies receive passing PHAS scores even though their financial data indicate that these agencies are at greater risk of inappropriately advancing their public housing program's operating funds to other programs or affiliated entities that may not use the funds for public housing purposes. Both we and OIG found that financial data could be used to identify housing agencies that were potentially at greater risk of inappropriately advancing funds in this manner. Specifically, using financial data, we identified 837 housing agencies that reported balances in their public housing program of over $100,000 as due from other programs (that is, operating funds that were advanced or loaned to other programs or entities and were potentially not used for public housing purposes) between 2002 and 2006. Prior HUD OIG work noted that housing agencies showing balances in excess of $100,000 as due from other programs in their public housing program often inappropriately advanced or loaned public housing program funds to other programs or affiliated entities, such as nonprofit organizations, without these funds being repaid to the public housing program.
For example, OIG noted instances where public housing funds were advanced inappropriately and used for private housing development. In response to these OIG audits, HUD reported that it had taken steps to resolve audit recommendations and had made referrals for administrative action to be taken against those housing agencies. HUD further recognized that housing agencies showing large amounts in their due from other funds accounts may warrant greater scrutiny and analyses of transactions. Our analysis of housing agencies’ due from other program accounts shows that from fiscal years 2002 through 2006, about 15 to 17 percent of housing agencies exhibited this indicator of possible inappropriate advances ($100,000 or more as due from other programs). (See fig. 2.) While these housing agencies’ financial data were showing that these agencies were at greater risk of inappropriately advancing operating funds, their PHAS scores may not have provided HUD with any indication of this potential risk. PHAS assesses the overall management operations and the financial and physical condition of the housing agencies’ public housing programs, but it was not developed to identify potential inappropriate use of funds, such as inappropriate advances of operating funds. Our analysis found that about 80 percent of the housing agencies that reported balances in excess of $100,000 in the operating fund’s due from other program from fiscal years 2002 through 2006 received passing scores in PHAS (see fig. 3). Thus, the PHAS score by itself would not identify or trigger any further oversight of housing agencies that may be at risk of potential improper advancement of funds. 
HUD officials stated that they would continue to rely on single audits to identify improper advances of this nature, despite concerns about the quality of single audits and gaps in the number of housing agencies and programs covered. The officials added that HUD does not conduct specific financial analyses to identify housing agencies at risk of inappropriately advancing their operating funds. However, HUD does perform analyses of financial data to identify housing agencies at risk of improper advances for another HUD rental housing program, the Housing Choice Voucher program. Specifically, HUD conducts an assessment of housing agencies' financial data to identify housing agencies reporting transfers of voucher program dollars that warrant additional oversight. Analyzing operating fund financial data as HUD has done for the Housing Choice Voucher program illustrates how the department could leverage opportunities to identify and monitor housing agencies at greater risk of inappropriate advances of public housing funds. Our analysis also indicated that some housing agencies showed signs of being at greater risk of mismanaging public housing funds. For example, we found that some housing agencies reported check overdrafts in their financial data. Our analysis of housing agencies' financial data for fiscal years 2002 through 2006 showed 200 housing agencies reporting average bank overdrafts of $25,000 or more, and 10 housing agencies reporting average bank overdrafts of more than $1 million. The vast majority of housing agencies do not show this indicator of potential funds mismanagement; in fact, over 92 percent of all housing agencies reported no check overdrafts during this period. According to HUD's OIG, writing checks in excess of funds available in a housing agency's bank account is of concern and could indicate serious cash and financial management problems or suggest that the housing agency is prone to potential fraud.
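The two indicators discussed above, $100,000 or more due from other programs and average bank overdrafts of $25,000 or more, can be combined into a simple screen. The thresholds come from this report and the OIG's prior work; the record layout and function names are assumptions made for illustration, not HUD's actual financial data schedule.

```python
# Screen a housing agency's financial-data record for the two risk
# indicators described in the report. Field names are hypothetical.
DUE_FROM_THRESHOLD = 100_000   # due from other programs (possible advances)
OVERDRAFT_THRESHOLD = 25_000   # average bank overdraft (cash mismanagement)

def risk_flags(record):
    """Return a list of risk-indicator descriptions for one agency-year."""
    flags = []
    if record.get("due_from_other_programs", 0) >= DUE_FROM_THRESHOLD:
        flags.append("possible inappropriate advance of operating funds")
    if record.get("avg_bank_overdraft", 0) >= OVERDRAFT_THRESHOLD:
        flags.append("possible cash mismanagement (bank overdrafts)")
    return flags
```

Agencies with nonempty flag lists could then be cross-checked against their PHAS and FASS scores, which, as the analysis above shows, often remain passing even when these indicators are present.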
MDHA provides an example of how check overdrafts could have been used as an indicator of serious liquidity problems. Specifically, HUD audits noted that MDHA was consistently holding checks in a safe after it had written them and reported the millions of dollars in these checks as bank overdrafts in the housing agency’s financial data. HUD officials overseeing the receivership of MDHA stated that the housing agency was taking this action because it did not have the funds available to meet its financial obligations. MDHA’s single audits for this period were later determined to be substandard, in part because they did not identify an accounting misclassification of cash that may have masked the agency’s liquidity problems. HUD officials stated that housing agencies’ PHAS and FASS scores were used to alert the agency to potential issues with finances and liquidity. However, we found that housing agencies’ PHAS and FASS scores did not reflect the large bank overdrafts that we identified. Of the 10 housing agencies reporting bank overdrafts in excess of $1 million, 7 (including MDHA) received passing PHAS scores. Only 1 had a PHAS score indicating that the agency was troubled. Further, MDHA received a passing FASS score for 3 of 5 years included in our analysis. The remaining 2 agencies were exempted from reporting PHAS scores because of their participation under the Moving to Work program. As shown in figure 4, when we looked at the 200 housing agencies reporting an average bank overdraft of $25,000 or more from 2002 through 2006, we found that nearly 75 percent of these housing agencies received passing PHAS and FASS scores. HUD officials stated that they were concerned about housing agencies with substantial bank overdrafts but added that overdrafts were not flagged for further monitoring because PHAS and single audits were expected to detect such poor financial management. 
However, as we have seen, single audits, PHAS, and the department’s current approach to analyzing financial data do not always detect or alert HUD to housing agencies at greater risk of potential inappropriate use or mismanagement of public housing funds. This shortcoming underscores the importance of leveraging the financial data HUD collects to focus on housing agencies at risk of these potential problems. For example, HUD could use financial indicators to identify housing agencies that are at greater risk of such problems. HUD could then use the information to alert its field offices, auditors, and the housing agencies themselves to potential problems. Developing such indicators of potential inappropriate use and mismanagement could also be a particularly useful compensating mechanism for monitoring housing agencies that are not subject to single audits and a valuable monitoring tool to mitigate the limitations of PHAS. Not fully leveraging the information it already has limits HUD’s ability to identify potential waste and abuse of its resources. HUD relies on single audits to identify potential misuse and mismanagement of public housing funds, and many single audits as well as those of OIG have identified such problems. However, concerns about the quality of single audits and gaps in the coverage of these audits have limited HUD’s confidence that single audits will consistently identify inappropriate use and mismanagement of public housing funds at every housing agency. Although HUD has faced difficulty in making improvements to the quality of audits and is aware that many housing agencies’ public housing programs may not receive annual single audit coverage, it has not adapted the way it uses information it collects to develop mechanisms to mitigate these limitations. Specifically, HUD does not systematically summarize and analyze the types and causes of misuse and mismanagement that single and OIG audits do successfully identify. 
Understanding the attributes of these commonly occurring problems that single and OIG audits have found could be useful to public housing program managers in identifying emerging issues and evaluating HUD’s overall monitoring and oversight processes. Promising practices identified by the Domestic Working Group’s Guide to Opportunities for Improving Grant Accountability call for such efforts. In fact, our review of over 130 audits conducted between 2002 and 2007 showed almost 100 findings related to inappropriate advancement of public housing operating funds to non-HUD programs or entities. Summarized results could also be useful to HUD’s field offices, auditors, and the housing agencies themselves, helping them understand emerging and persistent issues in HUD’s national portfolio of housing agencies and carry out their responsibilities in monitoring public housing programs. HUD currently uses PHAS to monitor housing agencies, but this system has not always identified housing agencies at risk of problems such as cash management issues and was not intended to identify inappropriate use of public housing funds. Although HUD has developed automated checks of housing agencies’ financial information to help ensure its completeness and accuracy for PHAS, the department has not used these checks as a mechanism to identify housing agencies at greater risk for potential misuse and mismanagement of public housing funds. HUD itself uses such checks to identify potential inappropriate use of its Housing Choice Voucher Program funds. Fully utilizing data HUD collects—for example, by developing financial indicators to help identify housing agencies at greater risk of inappropriately using or mismanaging public housing funds—would help in developing tools to compensate for the limitations of key oversight processes. 
In order to strengthen its oversight of housing agencies administering the public housing program and better leverage information that it already collects, we recommend that the Secretary of the Department of Housing and Urban Development take the following actions:

- Regularly summarize and systematically evaluate the results of OIG and single audits of public housing agencies to allow program managers to identify and understand problems of potential inappropriate use and mismanagement of public housing funds, identify emerging issues, and evaluate overall monitoring and oversight processes. Summarized results of audits should be disseminated to field offices, housing agencies, and their auditors to make them aware of emerging or persistent problems and to assist them in monitoring and administering HUD's public housing programs.

- Develop mechanisms, such as financial indicators, to identify housing agencies that are at greater risk of inappropriately using or mismanaging public housing funds. Such mechanisms may be based on the department's evaluation of commonly occurring and emerging issues identified in OIG and single audits of housing agencies and developed by leveraging financial information that the department currently collects. Once such indicators are developed, the department should use them as part of its ongoing monitoring and review of housing agencies' use of public housing funds.

In written comments from the General Deputy Assistant Secretary for Public and Indian Housing (see app. II), HUD stated that the draft report contains useful information but that the agency did not believe that our recommendations would achieve the objectives of strengthening oversight of housing agencies that administer the public housing program and of better leveraging information that it already collects.
Although it provided no specific explanation for why it thought our recommendations would not achieve these objectives, HUD stated that more analysis of its existing oversight mechanisms and the information collected from housing agencies should be performed in order to evaluate and develop possible alternatives, and it indicated that it would involve Public and Indian Housing parties in developing a more creative and comprehensive approach to addressing the issues raised in our report. We welcome efforts by HUD to reconsider the mechanisms and data it uses to oversee housing agencies and to identify opportunities for improving its oversight. However, we disagree with HUD's statement that our recommendations do not help achieve the objectives of strengthening its oversight and better leveraging the audit and financial information it already has. We believe the recommended actions present a reasonable first step in leveraging HUD's existing information and would permit the department to better focus its limited resources. In fact, as noted in the report, the mechanisms that we are recommending for identifying housing agencies at risk of inappropriate use of funds are already being used by another key HUD rental assistance program and by the HUD OIG. Moreover, HUD has in place systems that capture the information needed for such analysis. As HUD reevaluates its oversight mechanisms and seeks the input of its new Assistant Secretary, we believe that thinking about ways of leveraging the information it receives from single audits and financial data will be helpful in identifying emerging issues related to inappropriate use and mismanagement of funds and in providing the oversight necessary to address these issues. We will send copies of this report to the Secretary of the Department of Housing and Urban Development and other interested parties. In addition, the report will be available at no charge on our Web site at http://www.gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. If you or your staff has any questions about this report, please contact me at (202) 512-8678 or sciremj@gao.gov. GAO contact information and staff acknowledgments are listed in appendix III. This report evaluates the Department of Housing and Urban Development’s (HUD) oversight of public housing agencies’ use of federal funds to operate, modernize, and develop public housing units through the Operating Fund, Capital Fund, and HOPE VI programs—the three main sources of funding for the public housing program. In 2008, the Operating Fund, Capital Fund, and HOPE VI programs provided approximately $6.7 billion to housing agencies for capital asset management. The objective of this report was to examine the oversight processes HUD uses to understand and detect instances of inappropriate use and mismanagement of public housing funds. We reviewed applicable federal laws and regulations to describe permissible uses for the public housing funds within the scope of this report—specifically, funds available under the Operating Fund, Capital Fund, and HOPE VI programs. To describe HUD processes to ensure that housing agencies use public housing funds for statutorily allowable uses, we reviewed pertinent federal laws, agency notices, and program guides, and other job aids or documents. We interviewed program officials in the Office of Public and Indian Housing, including administrators from the Offices of the Deputy Secretary for Field Operations and of the Deputy Secretary for Public Housing Investments in Washington, D.C. We obtained information on the Public Housing Assessment System (PHAS) and financial management assessment procedures from officials responsible for this system at HUD’s Real Estate Assessment Center (REAC). 
In addition, we reviewed key documents that describe the annual single audit process as applicable to housing agencies that receive federal funds, such as the Office of Management and Budget's (OMB) Circular A-133: Audits of States, Local Governments, and Non-Profit Organizations and HUD program documents. We also interviewed officials from four HUD field offices and obtained additional documentary and testimonial information to summarize the oversight processes. We selected the HUD field office in Miami-Dade because it oversaw the Miami-Dade Housing Authority, which had been the most recent housing agency to go into receivership pursuant to a settlement agreement resolving allegations of financial mismanagement. We also met with officials from three other HUD field offices: Baltimore, San Francisco, and Washington, D.C. We selected these offices because their portfolios include housing agencies of a range of sizes with active Operating Fund, Capital Fund, and HOPE VI programs and because they are in proximity to GAO offices. Although the results of our discussions with these field offices may not be generalizable across all field offices, the discussions provided important context on HUD's implementation of its oversight processes and corroborated information we collected. We also obtained information on HUD's oversight procedures from staff at selected public housing agencies in each of the HUD field office locations: the Housing Authority of the City of Alameda, Housing Authority of the County of Monterey, Richmond Housing Authority, and San Francisco Housing Authority in California; the Miami-Dade Housing Authority and Housing Authority of the City of Fort Lauderdale in Florida; the District of Columbia Housing Authority; and the Housing Authority of Baltimore City and Prince George's County Housing Authority in Maryland.
To understand commonly occurring reported findings of inappropriate use or mismanagement of public housing funds by housing agencies, we conducted a content analysis of findings in HUD’s Office of Inspector General (OIG) audit and single audit reports of housing agencies. Through this analysis we developed a number of categories and subcategories of common findings reported in these audits, including instances of noncompliance and internal control deficiencies. We took steps to ensure that the categories and subcategories we developed were consistently applied across both OIG and single audit reports, which included independent verification that the established categories and subcategories in the OIG reports were applicable to the findings we analyzed in the single audit reports. To select OIG audits for our analysis, we obtained a list of all audit reports with findings related to the three public housing funds (Operating Fund, Capital Fund, and HOPE VI) from 2002 through 2007. This list contained 144 audit reports. We also determined whether these audits met the following additional criteria for inclusion in our content analysis sample: (1) the audit findings were related to inappropriate use of funds and mismanagement issues, and (2) the audits were initiated by OIG or entities other than HUD program officials. We identified 81 OIG audits that met these criteria and constituted our final sample. Single audit reports for our analysis were selected from audits conducted from audit years 2002 through 2005. This period differs from the one used for our analysis of OIG audit reports because single audit reports are not due to HUD or to the Federal Audit Clearinghouse—the source from which we drew our sample—until 9 months after a housing agency’s fiscal year end. As a result, some housing agencies had not yet submitted their 2006 or 2007 single audit reports by the time we performed our analysis.
Using HUD financial data on housing agencies, we created a list of 129 single audit reports with findings related to the three public housing funds from audit years 2002 through 2005. We ensured that these single audit and OIG reports related to financial management issues within the scope of our review. Our final sample of 56 reports included 8 reports in 2002, 10 in 2003, 18 in 2004, and 20 in 2005. To identify potential cases of housing agencies at greater risk of inappropriately using or mismanaging funds across the HUD housing agency portfolio, we reviewed financial information from REAC’s financial data schedule (FDS) database for approximately 3,300 housing agencies. We reviewed HUD’s Financial Data Schedule Line Definition and Crosswalk Guide to understand the data fields available in this system and analyzed certain data line items. Specifically, we identified those FDS line items that could be used to assess potential areas of inappropriate use and financial mismanagement. In particular, we looked at FDS information on advances of public housing funds as a potential indicator of improper use and on cash management (bank overdrafts) as a potential indicator of financial mismanagement. At the time that we obtained these data from HUD, fiscal year 2006 was the last year for which a full year of financial data schedules was available. We assessed the reliability of the data by (1) reviewing existing information about the systems and the data, (2) interviewing agency officials knowledgeable about the data, and (3) examining data elements used in our work by conducting electronic edit checks and comparing actual with anticipated values. For the FDS data, we analyzed the most current available data from the system for the years within the scope of our review. We obtained explanations from agency officials for inconsistencies we found in the data. We determined that the data were sufficiently reliable for the purposes of this report.
Analysis of potential inappropriate use of funds: As a potential indicator of inappropriate use, such as improper advances of funds, we analyzed balances of funds due from other programs (FDS line item 144) for all housing agencies’ operating funds from fiscal years 2002 through 2006. We selected this line item because HUD’s OIG has identified housing agencies that have inappropriately advanced public housing operating funds by analyzing FDS for housing agencies that reported balances in excess of $100,000 on line item 144 for the operating fund. OIG found that housing agencies often used the public housing general fund bank account as the payment account for various other activities of the housing agency. When these other activities did not repay the operating fund account, a balance was reported on line 144 of the FDS. When housing agencies did this for activities requiring substantial amounts of cash, the balance in line 144 tended to grow, indicating an inappropriate use of funds. Similarly, we identified housing agencies reporting due from other programs balances in excess of $100,000 for their operating funds as an indicator of potential inappropriate advances of funds. Analysis of potential mismanagement of funds: To identify housing agencies that may have fund mismanagement problems, such as poor cash management, we analyzed amounts reported as bank overdrafts in FDS (line item 311). This line item on FDS represents checks written in excess of funds available in bank accounts. In identifying this line item as an indicator of potential fund mismanagement, we interviewed HUD officials to determine if there were sound or legitimate operational reasons for a housing agency writing checks in excess of funds available in a housing agency’s bank account. HUD officials could not provide such reasons. Further, OIG agreed that using bank overdrafts as an indicator of potential funds mismanagement was reasonable.
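Both screens described above amount to simple threshold tests on FDS line items. A minimal sketch follows, assuming a hypothetical record layout (the field names, agency identifiers, and sample amounts are invented for illustration; HUD's actual FDS extract uses its own schema):

```python
# Illustrative screen for the two FDS-based indicators described above.
# The record layout, agency identifiers, and dollar amounts are
# hypothetical; HUD's actual FDS extract uses its own schema.

DUE_FROM_OTHER_PROGRAMS = "line_144"  # operating fund: due from other programs
BANK_OVERDRAFTS = "line_311"          # checks written in excess of bank funds
ADVANCE_THRESHOLD = 100_000           # OIG's $100,000 screen for line item 144

def flag_agencies(fds_records):
    """Return agency IDs flagged for potential inappropriate use
    (line 144 balance over $100,000) and for potential mismanagement
    (any reported bank overdraft on line 311)."""
    inappropriate_use, mismanagement = [], []
    for rec in fds_records:
        if rec.get(DUE_FROM_OTHER_PROGRAMS, 0) > ADVANCE_THRESHOLD:
            inappropriate_use.append(rec["agency_id"])
        if rec.get(BANK_OVERDRAFTS, 0) > 0:
            mismanagement.append(rec["agency_id"])
    return inappropriate_use, mismanagement

sample = [
    {"agency_id": "PHA-001", "line_144": 250_000, "line_311": 0},
    {"agency_id": "PHA-002", "line_144": 40_000, "line_311": 12_500},
    {"agency_id": "PHA-003", "line_144": 0, "line_311": 0},
]
use_flags, mgmt_flags = flag_agencies(sample)
print(use_flags)   # agencies exceeding the $100,000 advance screen
print(mgmt_flags)  # agencies reporting bank overdrafts
```

A screen of this kind flags agencies for further review; it does not by itself establish that funds were misused.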
We conducted this performance audit from January 2007 through March 2009 in Alameda, Monterey, Richmond, and San Francisco, California; Miami and Fort Lauderdale, Florida; Baltimore and Prince George’s County, Maryland; and Washington, D.C., in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on the audit objectives.

In addition to the contact named above, Daniel Garcia-Diaz (Assistant Director), Emily Chalmers, May Lee, John Lord, Marc Molino, Jasminee Persaud, Carl Ramirez, Linda Rego, Suneeti Shah, and Julie Trinder made key contributions to this report.

The Department of Housing and Urban Development (HUD) provided over $6.7 billion in fiscal year 2008 to housing agencies to operate, modernize, and develop about 1.2 million public housing units. It is important that HUD exercise sufficient oversight of housing agencies to help ensure that public housing funds are being used as intended and properly managed. In this report, GAO examines HUD's oversight processes for detecting housing agencies at risk of inappropriate use and mismanagement of public housing funds. GAO analyzed HUD financial data on about 3,300 housing agencies, compared HUD's oversight policies with program and agency objectives, and interviewed agency officials. Key HUD oversight processes could be more focused on identifying potential inappropriate use or mismanagement of public housing funds.
HUD primarily relies on single audits to identify such problems, although HUD, its Office of Inspector General (OIG), and the President's Council on Integrity and Efficiency (now known as the Council of the Inspectors General on Integrity and Efficiency) have identified weaknesses with some audits. Further, even when these audits do identify issues, HUD does not systematically summarize audit findings to identify and understand emerging and persistent issues to better monitor housing agencies for inappropriate use and mismanagement of public housing funds. Understanding these problems could be useful for identifying housing agencies that are at greater risk of inappropriately using or mismanaging public housing funds. HUD uses the Public Housing Assessment System (PHAS) to monitor and rate the overall condition and financial health of public housing agencies. However, PHAS is not intended to identify inappropriate uses of public housing funds and is limited in its ability to detect potential mismanagement. HUD also analyzes the financial data of public housing agencies, but its review focuses on the accuracy and completeness of the information used to calculate PHAS scores. GAO analyzed financial data from the housing agencies and found that many housing agencies showed indicators of being at risk of potential inappropriate use and mismanagement of public housing funds, even while most received passing PHAS scores. For example, GAO found that from 2002 to 2006, 200 housing agencies had written checks for more than the funds available in their bank accounts (bank overdrafts) averaging $25,000 or more. However, 75 percent of these agencies received passing PHAS scores. Such overdrafts raise questions about these agencies' cash management. But HUD does not use these and similar measures to identify housing agencies at greater risk of inappropriately using or mismanaging public housing funds.
Without fully leveraging the audit and financial information it collects, the department limits its ability to identify housing agencies that are at greater risk of inappropriately using or mismanaging program funds.
The Federal Aviation Administration’s (FAA) primary mission is to ensure safe, orderly, and efficient air travel in the national airspace. FAA’s ability to fulfill this mission depends on the adequacy and reliability of the nation’s air traffic control (ATC) system, a vast network of computer hardware, software, and communications equipment. Sustained growth in air traffic and aging equipment have strained the current ATC system, limiting the efficiency of ATC operations. This pattern is likely to continue as the number of passengers traveling on U.S. airlines is expected to grow from about 580 million in 1995 to nearly 800 million by 2003, an increase of 38 percent. To address these trends, in 1981 FAA embarked on an ambitious ATC modernization program. FAA estimates that it will spend about $20 billion to replace and modernize ATC systems between 1982 and 2003. Our work over the years has chronicled many FAA failures in meeting ATC projects’ cost, schedule, and performance goals. As a result, we designated FAA’s ATC modernization as a high-risk information technology initiative in our 1995 report series on high-risk programs. The ATC system of the late 1970s was a blend of several generations of automated and manual equipment, much of it labor-intensive and obsolete. In addition, FAA forecasted increased future demand for air travel brought on by airline deregulation of the late 1970s. At that time, FAA recognized that it could increase ATC operating efficiency by increasing automation. It also anticipated that meeting the demand safely and efficiently would require improved and expanded services, additional facilities and equipment, improved work force productivity, and the orderly replacement of aging equipment. Accordingly, in December 1981, FAA initiated its plan to modernize, automate, and consolidate the existing ATC system by the year 2000. 
This ambitious modernization program includes the acquisition of new radars and automated data processing, navigation, and communication equipment in addition to new facilities and support equipment. The modernization, including new systems, facility upgrades, and support equipment, is now estimated to cost over $34 billion through the year 2003. The Congress will have provided FAA with approximately $23 billion of the $34 billion through fiscal year 1997. The ATC systems portion alone, excluding facility upgrades and support equipment, totals over $20 billion of the planned $34 billion investment. The $20 billion will provide, in total, about 170 new systems, but additional systems are being planned through the year 2015. The modernization is still far from complete as nearly $6 billion of the $20 billion still remains to be spent after 1997 on portions of 73 systems. Automated information processing and display, communication, navigation, surveillance, and weather resources permit air traffic controllers to view key information, such as aircraft location, aircraft flight plans, and prevailing weather conditions, and to communicate with pilots. These resources reside at, or are associated with, several ATC facilities—the Air Traffic Control System Command Center (ATCSCC), flight service stations, air traffic control towers, terminal radar approach control (TRACON) facilities, and air route traffic control centers (en route centers). These facilities’ ATC functions are described below. The ATCSCC in Herndon, Virginia, coordinates operations between the en route centers by combining traffic flow information from each. This information allows the ATCSCC to provide a snapshot of the traffic flows across the United States that is in turn used to ensure that airports do not exceed capacities. About 90 flight service stations provide pre-flight and in-flight services, such as flight plan filing and weather report updates, primarily for general aviation aircraft.
Airport towers control aircraft on the ground, before landing, and after take-off when they are within about 5 nautical miles of the airport, and up to 3,000 feet above the airport. Air traffic controllers rely on a combination of technology and visual surveillance to direct aircraft departures and approaches, maintain safe distances between aircraft, and communicate weather-related information, clearances, and other instructions to pilots and other personnel. Approximately 180 TRACONs sequence and separate aircraft as they approach and leave busy airports, beginning about 5 nautical miles and ending about 50 nautical miles from the airport, and generally up to 10,000 feet above the ground, where en route centers’ control begins. Twenty en route centers control planes over the continental United States in transit and during approaches to some airports. Each en route center handles a different region of airspace, passing control from one to another as respective borders are reached until the aircraft reaches TRACON airspace. Most of the en route centers’ controlled airspace extends above 18,000 feet for commercial aircraft. En route centers also handle lower altitudes when dealing directly with a tower, or when agreed upon with a TRACON. Two en route centers—Oakland and New York—also control aircraft over the ocean. Controlling aircraft over oceans is radically different from controlling aircraft over land because radar surveillance only extends 175 to 225 miles offshore. Beyond the radars’ sight, controllers must rely on periodic radio communications through a third party—Aeronautical Radio Incorporated (ARINC), a private organization funded by the airlines and FAA to operate radio stations—to determine aircraft locations. See figure 1.1 for a visual summary of the ATC facilities that control aircraft. The ability of FAA’s systems to interoperate, both within and across facilities, as one integrated system of systems is essential to ATC operations. 
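The nominal handoff boundaries described above can be summarized as a simple classification by distance and altitude. The sketch below uses only the approximate cutoffs given in the text; actual delegation of airspace varies by facility agreement:

```python
# Simplified sketch of the nominal ATC handoff boundaries described in
# the text. Actual airspace delegation varies by facility agreement;
# the cutoffs below are illustrative approximations only.

def controlling_facility(distance_nm: float, altitude_ft: float) -> str:
    """Nominal facility controlling an aircraft, given its distance from
    the airport (nautical miles) and height above the ground (feet)."""
    if distance_nm <= 5 and altitude_ft <= 3_000:
        return "tower"          # within ~5 nm and up to ~3,000 ft
    if distance_nm <= 50 and altitude_ft <= 10_000:
        return "TRACON"         # ~5-50 nm, generally up to ~10,000 ft
    return "en route center"    # beyond TRACON airspace

print(controlling_facility(3, 1_500))     # tower
print(controlling_facility(30, 8_000))    # TRACON
print(controlling_facility(120, 33_000))  # en route center
```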
Each of the five facilities highlighted above contains numerous interrelated systems. For example, the en route centers alone rely on over 50 systems to perform mission-critical information processing and display, navigation, surveillance, communications, and weather functions. Examples include the systems that display aircraft situation data for air traffic controllers, the system that collects and displays data from various weather sources, radars for aircraft surveillance, radars for wind and precipitation detection, ground-to-ground and ground-to-air communications systems, and systems that back up primary systems. In addition, systems from different facilities also interact with each other so that together they can successfully execute the total ATC process. For example, controllers’ displays currently integrate data on aircraft position from surveillance radars with data on flight destination from flight planning data systems. The ability of these systems to interoperate and continually exchange data in real-time is safety critical. Figure 1.2 depicts the five key air traffic control facilities (left section), the interaction between systems both within and between facilities (middle section), and the complexity of the systems associated with just one type of facility—the en route centers (right section—these systems are described in appendix I). Over the past 15 years, FAA’s modernization program has experienced substantial cost overruns, lengthy schedule delays, and significant performance shortfalls. To illustrate, the long-time centerpiece of this modernization program—the Advanced Automation System (AAS)—was restructured in 1994 after estimated costs tripled from $2.5 billion to $7.6 billion and delays in putting significantly less-than-promised system capabilities into operation were expected to run 8 years or more.
Similarly, increases in costs for three other ATC projects have ranged from 51 to 511 percent, and schedule delays have averaged almost 4 years. For example, the per-unit cost estimate for the Voice Switching and Control System increased 511 percent, and the first site implementation was delayed 6 years from the original estimate. Shortfalls in performance have affected AAS, as well as other projects. For example, the critical Initial Sector Suite System component of AAS, which was intended to replace controllers’ workstations at en route centers, faced so many technical problems that it was severely scaled back. In addition, difficulties in developing the Air Route Surveillance Radar-4 software and integrating it with other ATC systems delayed its implementation for years. GAO’s work over the years has highlighted weaknesses in FAA’s management of the modernization that have caused cost, schedule, and performance problems. First, FAA did not historically manage its acquisition of major systems in accordance with Office of Management and Budget Circular A-109 and its own acquisition policies. For example, FAA did not analyze its mission needs, did not adequately specify ATC systems requirements, and performed flawed or limited analyses of alternatives for achieving those needs. This is contrary to our finding that successful public and private organizations tie decisions on information technology investments to explicit and quantifiable mission improvements. Second, some systems did not meet agency specifications. Finally, FAA has provided inadequate oversight of contractor performance. Additionally, GAO recently reported that FAA’s organizational culture has been an underlying cause of the agency’s acquisition problems, encouraging employee behavior that did not reflect a strong commitment to mission focus, accountability, coordination, and adaptability. 
Because of the past problems with FAA modernization efforts, the Congress enacted legislation in October 1995 that directed FAA to design and implement a new acquisition management system addressing the unique needs of the agency. At a minimum, the system was to provide for more timely and cost-effective acquisitions. To help achieve this goal, the act exempted FAA from most federal procurement and personnel laws and regulations. On April 1, 1996, in response to the act, the FAA Administrator began implementation of FAA’s new system. The new acquisition management system is intended to improve coordination and mission focus by strengthening the “front-end” of the acquisition process. Specifically, the developers and operators are expected to work together to analyze mission needs and alternatives before senior management makes capital investment decisions and assigns projects to development teams. Two major FAA organizations play key roles in the development and evolution of ATC systems—the Office of the Associate Administrator for Research and Acquisitions (ARA) and the Office of the Associate Administrator for Air Traffic Services (ATS). Briefly, ARA is responsible for developing and fielding ATC systems, while ATS is responsible for operating, maintaining, and enhancing ATC systems. Cross-functional integrated product teams (IPT) residing in ARA are responsible for ATC systems development. ARA manages the research, development, and acquisition of modernization projects. According to the Associate Administrator for ARA, only one-half of the total systems development budget is spent by ARA, while the other one-half is spent by ATS implementing system enhancements. Within ARA, two groups are responsible for acquiring systems, while the others handle cross-cutting management functions (e.g., budget formulation and program evaluation).
These two groups are the Office of Systems Development (AUA) and the Office of Communication, Navigation, and Surveillance Systems (AND). Five IPTs reside in AUA and are organized by ATC business areas (i.e., en route, terminal, weather and flight service, air traffic management, oceanic). Five IPTs reside in AND and are organized by ATC functional areas (i.e., infrastructure, communications, surveillance, GPS/navigation, aircraft/avionics). IPTs are responsible for research, development, and acquisition as well as for ensuring that new equipment is delivered, installed, and working properly. For example, the en route IPT comprises product teams for the Display Channel Complex Rehost, the Display System Replacement, the Voice Switching and Control System, and several other en route systems. Each IPT includes systems and specialty engineers, logistics personnel, testing personnel, contract personnel, and lawyers as well as representatives from the organizations responsible for operating and maintaining the ATC equipment. The second major organization involved with ATC systems is ATS. ATS is responsible for directing, coordinating, controlling, and ensuring the safe and efficient utilization of the national airspace system. Organizations within ATS are responsible for planning, operating, maintaining, and enhancing ATC systems. Responsibility for managing projects is transferred from ARA to ATS once a system has been installed and is operational. The FAA Technical Center is the ATC system test and evaluation facility and supports ATC systems’ research, engineering, and development. See figure 1.3 for a visual summary of the ATC modernization management structure. 
The objectives of our review were to determine (1) whether FAA has a target architecture(s), and associated subarchitectures, to guide the development and evolution of its ATC systems, and (2) what, if any, architectural incompatibilities exist among systems and what is the effect of these architectural incompatibilities. To determine whether FAA has a target architecture(s), and associated subarchitectures, to guide the development and evolution of its ATC systems, we researched current literature and interviewed systems architecture experts to identify the key components of a complete systems architecture; analyzed FAA’s National Airspace System Architecture (versions 1.5 and 2.0) and interviewed officials responsible for developing this architecture to determine whether the proposed systems architecture is complete and comprehensive; reviewed additional FAA efforts to develop systems architectures, including the Corporate Systems Architecture; interviewed the 10 IPTs responsible for ATC systems development to determine how architectural considerations are incorporated in development efforts; reviewed the NAS System Requirements Specification (NAS-SR-1000), the NAS Level 1 Design Document (NAS-DD-1000), and the NAS System Specification (NAS-SS-1000) to determine whether existing guidance constitutes the components of a systems architecture; interviewed ARA organizations responsible for developing software, communications, data management, and security guidance about existing guidance and efforts to improve this guidance; interviewed FAA’s Chief Information Officer (CIO) to determine what role the CIO plays in the development of FAA’s systems architecture and whether this role is consistent with recently passed legislation; and analyzed FAA’s current structure and processes associated with architectural development and enforcement. 
To determine what, if any, architectural incompatibilities exist among systems and what is the effect of these architectural incompatibilities, we acquired and analyzed information on the hardware, operating systems, application languages, database management, communications, and security characteristics of seven existing and under-development ATC systems to identify architectural incompatibilities; reviewed key technical documents associated with some of these systems, including interface control documents and technical briefings; analyzed the cost, schedule, and performance impacts of the architectural incompatibilities that exist among ATC systems; interviewed the Director of Operational Support to discuss ATC maintenance concerns and obtain his opinion about system incompatibilities; and identified the application languages used in 54 operational ATC systems. We performed our work at the Federal Aviation Administration in Washington, D.C., and the FAA Technical Center in Atlantic City, New Jersey, from March 1996 through January 1997. Our work was performed in accordance with generally accepted government auditing standards. Department of Transportation (DOT) and FAA officials, including the FAA Deputy Director for Architecture and System Engineering, the FAA Chief Scientist for Software Engineering, and the FAA Chief Engineer for Air Traffic Systems Development, provided oral comments on a draft of this report. Their comments have been addressed in the Agency Comments and Our Evaluation sections at the end of chapters 3 and 4 and as appropriate in the body of the report.

Over the last decade, as computer-based systems have become larger and more complex, the importance of and reliance on systems architectures has grown steadily.
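The last step above, identifying the application languages used across 54 operational systems, is essentially a tally. As one illustration of the kind of inventory involved (the system names and languages below are invented for the example, not FAA's actual inventory):

```python
# Hypothetical tally of implementation languages across a handful of
# systems, illustrating the kind of inventory GAO compiled for 54
# operational ATC systems. The system names and languages are invented.
from collections import Counter

inventory = {
    "display system": ["Ada", "C"],
    "flight data processor": ["Jovial"],
    "weather processor": ["Fortran", "C"],
    "voice switch": ["C"],
}

language_counts = Counter(
    lang for langs in inventory.values() for lang in langs
)
print(language_counts.most_common())  # most widely used languages first
```

A tally like this makes language heterogeneity, and hence the maintenance burden of supporting many compilers and skill sets, immediately visible.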
These comprehensive “construction plans” or “blueprints” systematically detail the full breadth and depth of an organization’s mission-based modus operandi, first in logical terms, such as defining business functions, providing high-level descriptions of information systems and their interrelationships, and specifying information flows; and second in technical terms, such as specifying hardware, software, data, communications, security, and performance characteristics. Without a systems architecture to guide and constrain a modernization program, there is no systematic way to preclude inconsistent system design and development decisions, and the resulting suboptimal performance and added cost associated with these incompatible systems. This is why leading public and private sector organizations strongly endorse defining and enforcing systems architectures as an integral and vital aspect of modernizing their information systems. We found that leading organizations in the private sector and in government use systems architectures to guide mission-critical systems development and to ensure the appropriate integration of information systems through common standards. In addition, experts in academia have also championed the systems architecture approach. For example, the Software Engineering Institute (SEI) at Carnegie Mellon University includes the development and evolution of a systems architecture as a key process area in its Systems Engineering Capability Maturity Model (SE-CMM). The SE-CMM states that the systems architecture should detail both logical and technical system elements, their relationships, interfaces, and system requirements, and should guide the system design and implementation. Congress has also recognized the importance of systems architectures as a means to improve the efficiency and effectiveness of federal information systems by enacting the 1996 Clinger-Cohen Act. 
The act, among other provisions, requires that department-level CIOs develop, maintain, and facilitate integrated systems architectures. Reflecting the general consensus in the industry that large, complex systems development efforts should be guided by explicit architectures, in 1992, GAO issued a report defining a comprehensive framework for designing and developing systems architectures. This framework divides systems architectures into two principal components—a logical component and a technical component. The logical component is essential to ensure that an agency’s information systems support accomplishing a specific mission(s), while the technical component provides the detailed guidance needed to develop and evolve these systems. At the logical level, the architecture includes a high-level description of the organization’s mission, functional requirements, information requirements, systems, information flows among systems, and interfaces between systems. The logical architecture is derived from a strategic information systems planning process that clearly defines the organization’s current and future missions and concepts of operations. It then defines the business functions required to carry out the mission and the information needed to perform the functions. Finally, it describes the systems that produce the information. An essential element of the logical architecture is the definition of the component interdependencies (i.e., information flows, system interfaces). Once the logical architecture is defined, an organization knows its portfolio of desired systems and has a clear understanding of how these systems will collectively carry out the organization’s objectives. The purpose of the logical architecture is to ensure that the systems meet the business needs of the organization. 
The technical level details specific information technology and communications standards and approaches that will be used to build systems, including those that address critical hardware, software, communications, data management, security, and performance characteristics. The purpose of the technical architecture is to ensure that systems are interoperable, function together efficiently, and are cost-effective over their life cycles (i.e., including maintenance costs). Figure 2.1 displays the key logical and technical components of a systems architecture. [Figure 2.1 lists example characteristics for each technical component: hardware (e.g., expandability, reliability, maintainability, fault tolerance); software (e.g., reliability, testability, flexibility, maintainability, portability, reusability, adherence to open systems standards, standards for the languages to be used, institutionalized process standards or methodologies for designing, coding, testing, and documenting software projects); communications (e.g., reliability, availability, standards for communications protocols); data management (e.g., standards for data formats and naming conventions, a data dictionary); security (e.g., hardware and software solutions to address security requirements that are based on a security policy and security concept of operations); and performance (e.g., ability to meet operational requirements, response-time requirements, availability, reliability).]

FAA lacks a complete systems architecture to guide the development and evolution of its ATC systems modernization. While FAA has made good progress over the last 2 years in defining a logical ATC systems architecture, FAA has not adequately addressed its need for a technical ATC systems architecture. The lack of an ATC systemwide technical architecture has caused, and will continue to cause, incompatibilities among the ATC systems, such as differences in communications protocols and application languages, that require additional development, integration, and maintenance resources to overcome.
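The technical-architecture elements just described could, for illustration, be recorded as an explicit standards profile so that gaps are easy to spot. The sketch below is hypothetical and not drawn from any FAA document; the field values are invented:

```python
# Hypothetical sketch: recording the six technical-architecture elements
# named in the text (hardware, software, communications, data management,
# security, performance) as an explicit standards profile. The values
# are illustrative, not FAA's actual standards.
from dataclasses import dataclass, field

@dataclass
class TechnicalArchitecture:
    hardware: dict = field(default_factory=dict)
    software: dict = field(default_factory=dict)
    communications: dict = field(default_factory=dict)
    data_management: dict = field(default_factory=dict)
    security: dict = field(default_factory=dict)
    performance: dict = field(default_factory=dict)

profile = TechnicalArchitecture(
    software={"languages": ["Ada"], "process": "documented methodology"},
    communications={"protocols": "open standards"},
    data_management={"data_dictionary": True},
)

# Elements for which this profile actually specifies standards; empty
# entries flag areas where no standard has been set.
print(sorted(k for k, v in vars(profile).items() if v))
```

Making the profile explicit is what allows an organization to enforce it: a proposed system can be checked against each populated element before acquisition decisions are made.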
The incompatibilities also make it difficult to share application software among systems and to migrate to vendor-independent operating environments, thereby effectively foreclosing two opportunities to reduce system development and maintenance costs. FAA is currently defining a logical ATC systems architecture that describes FAA’s concept of operations, business functions, high-level descriptions of information systems and their interrelationships, and information flows among systems. This high-level systems blueprint provides a roadmap that is to guide ATC systems over the next 20 years. FAA is defining a comprehensive and evolutionary logical ATC systems architecture in its National Airspace System (NAS) architecture. Among other things, the architecture provides a description of the future aviation, air traffic management, and air navigation system in terms of services, functions, and ATC systems. Specifically, it describes FAA’s concepts of operations, requirements in terms of the business functions to be performed, associated systems to be used, the relationships between these functions and systems, the information needed to perform these functions, and the flow of information among the functions and systems. In addition, it provides a roadmap for evolving the ATC systems through the year 2015. The goals of the logical architecture are to (1) provide the aviation community with a cohesive and collaborative means to influence the NAS evolution, (2) provide a foundation for FAA acquisition decisions, and (3) provide the aviation community with insight into the timing of major changes to NAS. According to FAA, the NAS architecture is intended to eliminate “stovepiped” development by defining an evolution towards target architectures that represent coordinated and integrated operational concepts and a comprehensive system of systems view. 
The NAS architecture is not intended to provide the details needed to actually design and build these systems (i.e., details that would be provided by the technical architecture). FAA issued version 2.0 of the NAS architecture in October 1996 and subsequently released it for industry and government comment. The first complete version of the architecture is scheduled to be completed in December 1997. The NAS architecture is divided into five key parts—concepts of operations, service and functional requirements, systems and programs, roadmap, and issues. Each is briefly discussed below. Concepts of Operations: This section describes an evolving series of concepts of operations through the year 2015 and emphasizes the migration to a free-flight environment. The current concept of operations relies on analog voice communications between controllers and pilots, and ground-based radar surveillance to control aircraft. A free-flight environment is one in which the pilots are free to select their own routes and speed in real time. In this environment, air traffic restrictions would be imposed only to ensure minimum aircraft separation, preclude exceeding airport capacity, prevent unauthorized flight through special-use airspace, and ensure safety. A free-flight environment relies less on voice communications and ground-based radar systems and more on aircraft position displays in the cockpit and satellite surveillance and navigation technologies, such as the Global Positioning System (GPS). The transition from the current to the free-flight concept of operations will be evolutionary. Mid-term concepts of operations will define FAA’s evolution methodically and gradually to a free-flight environment. Service and Functional Requirements: This section describes services and associated functional requirements that are required to carry out the concepts of operations. 
The service requirements include air traffic (i.e., flight planning, flight support, aircraft navigation and guidance, traffic management, separation, data information management, and communication management), airport, security, safety, certification, infrastructure, and administrative and acquisition support services. These service requirements are further broken down into functional requirements (e.g., provide forecasted weather information, provide air traffic flow information), which identify the existing systems and describe the future NAS systems that satisfy them (e.g., the Host Computer System and the Display System Replacement, respectively). Systems and Programs: This section describes the systems associated with each functional area (i.e., communications, weather, automation, surveillance, maintenance and support, navigation) and business area (i.e., en route, terminal, tower, oceanic, air traffic management) of the proposed architecture. For each of the functional areas, the following information is provided: (1) a listing of systems through the year 2015, (2) current programs and schedules from the Capital Investment Plan (CIP) and the Research, Engineering, and Development (RE&D) plan, and (3) transition strategy and diagrams. In addition, this section provides systems drawings (i.e., high-level wiring diagrams) of the various business areas for the 1995 and 2005 time frames. Appendix I provides a simplified block diagram for the near- and mid-term en route business area’s systems environment, which is very complex (i.e., includes many systems that interact in many ways). Roadmap: This section provides a transition plan for replacing systems, replacing existing infrastructure, and introducing new capabilities. The NAS architecture roadmap presents a proposed architecture in several functional areas (e.g., navigation, surveillance) and describes a transition strategy to migrate from the current systems environment to the proposed architecture. 
For example, for the navigation functional area, it describes a gradual transition strategy to a satellite-based navigation system. Specifically, it describes the deployment schedule for the primary system, the Wide Area Augmentation System (WAAS), the schedule for decommissioning existing systems, and the additional system deployment schedules to support the far-term concept of operations. The roadmap describes the changes in ATC systems through time as they evolve to support free-flight. Issues: This section provides a collection of papers on outstanding issues whose resolutions should have the greatest potential impacts on the future of ATC systems. These issues are the “forks in the road” where a decision is needed to define a particular roadmap to the future. For example, one paper presents a series of unanswered questions on performance and backup requirements for future ATC surveillance. Another recommends the need for a free-flight action plan that is to guide FAA’s transition to a free-flight environment. The interrelationships among the NAS architecture’s services, functional areas, and associated systems are quite complex, as any one system may support multiple functional and service areas. For example, the Host Computer System (HCS) is an automation system located in the en route facilities that supports several business areas (i.e., the en route business area, by processing data from many different radar systems, and the air traffic management (ATM) business area, by providing flight track data to select ATM systems). Figure 3.1 shows the relationships between the NAS architecture air traffic services, business areas, functional areas, and related systems. FAA’s efforts to develop and evolve its “system of ATC systems” are not guided and constrained by an ATC-wide technical architecture, and FAA does not have an effective strategy for developing one. 
In 1995, FAA recognized the importance of such an architecture by including the development of an FAA corporate architecture in its 1996 Capital Investment Plan. However, FAA decided to drop this effort from its 1997 plan in favor of other investment priorities. As a result, the IPTs have been left to proceed individually in setting architectural standards and developing and evolving systems. This has resulted in three IPTs cooperatively developing similar but not identical architectures for their respective areas, while others are proceeding without one. At the same time, still other FAA organizations are independently attempting to develop pieces (e.g., software guidance, security guidance) of a technical architecture, but these efforts are not coordinated and neither individually nor collectively constitute a complete ATC-wide technical architecture. Without an ATC-wide technical architecture, FAA’s ATC systems have suffered and will continue to suffer from costly and inefficient incompatibilities. The concept of a technical systems architecture is not new to FAA. In FAA’s January 1996 Capital Investment Plan, FAA planned to develop a technical architecture, called the corporate systems architecture (CSA). According to FAA plans, the CSA was to be a blueprint for achieving an open systems environment and was to be used to “guide, coordinate, and integrate the acquisition, development and implementation of automated data processing equipment, telecommunications, automated information systems and data bases, and associated support services” across FAA. However, the CSA effort was abandoned in favor of other funding priorities. FAA’s CIO, who was tasked to develop the CSA, told us that the CSA was not funded in 1996 because its sponsors and developers could not convince FAA top management of its importance in providing benefits such as lower development, integration, and maintenance costs and better systems performance. 
In the absence of an overall ATC technical systems architecture, the IPTs are left to their own devices in formulating guidance to build systems. As a result, three IPTs have cooperatively developed similar but not identical technical architectures. The other seven IPTs are developing ATC systems, which include such major systems as the Standard Terminal Automation Replacement System (STARS) and the Wide Area Augmentation System (WAAS), without a technical architecture. (See figure 3.2 for a summary of architectural guidance used by the 10 IPTs.) With respect to the latter seven, officials for one IPT could not cite any technical architectural guidance being used, while officials for another IPT cited the NAS architecture, and officials for the other five cited the NAS “1,000-series” documents. However, neither the NAS architecture nor the NAS “1,000 series” constitutes a technical architecture. The NAS architecture is a logical architecture that provides no technical details, and the NAS “1,000 series” documents are neither a logical nor a technical architecture. In fact, the Deputy Director for the Office of System Architecture and Investment Analysis stated that the NAS “1,000 series” documents are “shelfware” and not useful in guiding future systems development. In commenting on a draft of this report, Systems Architecture and Investment Analysis officials stated that they plan to issue a revision to the NAS “1,000 series” documents in October 1997. Each of the three IPTs’ cooperatively developed technical architectures is described below. ATM IPT: This IPT was the first to develop a technical architecture, which is called the ATM Domain Environment Definition Document. It provides guidelines and standards for, among other things, operating systems, communication protocols, data management, security, coding, and testing. 
ATM officials stated that they created this document to facilitate system integration and ATM software application migration among the systems they are developing, which include the Traffic Management System (TMS) and the Center TRACON Automation System (CTAS). En route IPT: This IPT’s architecture governs development of such systems as the Display System Replacement (DSR) and the Host Interface Device/Local Area Network (HID/LAN). The architecture contains a systems development model and a standards profile, including data interchange, communications, security, and programming language standards. Infrastructure IPT: This IPT’s architecture is for its NAS Infrastructure Management System (NIMS), which is this IPT’s primary system. The NIMS architecture includes both logical and technical components. It includes a standards profile that contains the same general categories of standards as the ATM and en route technical architectures. While the three IPTs tried to achieve architectural compatibility, they have not been fully successful. For example, all three architectures specify C and C++ as acceptable programming languages, but the en route architecture also specifies Ada as an acceptable language. Also, although the ATM, en route, and infrastructure architectures all specify compliance to the Structured Query Language (SQL)-92 to access data, the en route architecture acknowledges that the SQL-92 standard will have to be modified at times to meet FAA’s real-time, mission-critical requirements. Currently, FAA has no plan for doing this consistently across all three systems environments. Further, the ATM technical architecture specifies the ethernet protocol and the en route architecture specifies the Fiber Distributed Data Interface (FDDI) protocol. These two protocols are not compatible. 
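Mismatches of the kind described above (C/C++ versus Ada, ethernet versus FDDI) become mechanical to detect once each team's standards profile is written down side by side. The sketch below paraphrases the report's examples; the profile contents are illustrative, not drawn from FAA's actual documents:

```python
# Sketch: surface conflicting standards across technical architecture
# profiles. Profile contents paraphrase the report's examples.

profiles = {
    "ATM":      {"languages": {"C", "C++"},        "lan_protocol": "ethernet"},
    "En route": {"languages": {"C", "C++", "Ada"}, "lan_protocol": "FDDI"},
}

def conflicts(profiles):
    """Return the standards categories on which any two profiles disagree."""
    out = {}
    for category in {c for p in profiles.values() for c in p}:
        values = {name: p.get(category) for name, p in profiles.items()}
        distinct = {frozenset(v) if isinstance(v, set) else v
                    for v in values.values()}
        if len(distinct) > 1:
            out[category] = values
    return out

print(sorted(conflicts(profiles)))
# ['lan_protocol', 'languages']
```

An ATC-wide technical architecture would amount to maintaining one authoritative profile and requiring every team's profile to diff clean against it, rather than reconciling pairwise differences after systems are built.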
FAA officials told us that they are aware of inconsistencies and that they plan to resolve them, but have not defined the plan, scheduled its implementation, or allocated resources for the effort. In addition to these IPT-specific technical architectures, three other ARA offices (i.e., Office of Systems Architecture and Investment Analysis, Office of Information Technology, Acquisition Policy Branch) have initiated efforts that relate to, but neither individually nor collectively constitute, a complete technical architecture. These efforts have begun to address data management, security, and software process and product standards; however, they are limited in scope, are incomplete, and will not be mandated for use across all ATC systems. Each is discussed below. The Office of Systems Architecture and Investment Analysis is adding a draft section on data management to the logical architecture that describes the current state of data exchange between ATC systems. However, this section does not define specific standards (e.g., standards for data elements and naming conventions), and FAA officials have not established milestones for doing so. This office is also planning to develop guidance addressing how security controls (e.g., hardware and software solutions) will be implemented to satisfy security requirements. However, this effort has not been approved by FAA management, and therefore remains unfunded. Also, this office has created a menu of architectural standards (e.g., data management, data interchange, communication protocol, application development, and security standards) to increase IPTs’ awareness of what standards exist for the IPTs to use at their own discretion. The Office of Information Technology is initiating efforts to improve software acquisition processes, has trained the IPTs on software process improvement, and has established a Software Engineering Process Group to champion process improvement activities. 
However, these initiatives do not specify software product standards, such as standard programming languages and development tools, or standards for software structure, both of which are critical to modernizing ATC systems cost effectively. Moreover, FAA cannot yet demonstrate specific and measurable process improvements. The Acquisition Policy Branch has begun an initiative to develop systems engineering guidance for IPTs’ optional use. Because this guidance is early in its development and a complete draft does not yet exist, FAA would not provide us a copy for review. The lack of a complete systems architecture has produced architectural differences and incompatibilities among ATC systems, such as different communication protocols and proprietary operating environments, and will continue to do so for future systems. (Examples of these differences for key systems in the current and near-term en route environment are provided in appendix II.) Further, the significance of these incompatibilities will increase as FAA moves to a more networked systems environment. Overcoming these incompatibilities means “higher than need be” system development, integration, and maintenance costs, and reduced overall systems performance. Additionally, because many existing systems are largely proprietary, opportunities for application software reuse among systems are effectively precluded and options for migrating applications to new hardware and software platforms are restricted. A system interface is hardware and software that acts as an interpreter to interconnect different systems and allow for the exchange of data. The more similar the communications and data features of the systems that are to communicate, the less complicated this interface. Conversely, the more disparate the systems, the more complicated the interface. 
Communications and data management subarchitectures are essential to standardize communication protocols and data formats, respectively, so that system interfaces are less costly and easier to implement. As described in chapter 1, system interoperability in the ATC system of systems is essential for FAA to successfully perform its mission. However, fundamental differences in how the systems communicate have made exchanging data between systems more difficult and expensive because it requires the development and maintenance of costly interfaces to interconnect systems. This can be seen in the en route business area, where a system known as the Peripheral Adapter Module Replacement Item (PAMRI) operates as a collection of systems interfaces. Specifically, PAMRI’s primary function is to convert differing protocols from feeder systems, like aircraft surveillance radars and weather detection systems, so that data from these systems can be used by the Host Computer System (HCS), the centerpiece information processing system in the en route centers. To perform this function, FAA spent over $38 million to develop PAMRI and it spends millions annually to maintain it. In addition to protocol conversion, PAMRI also performs data conversion of its disparate feeder systems. This conversion is necessary to remedy the data inconsistencies among ATC systems that feed HCS. These data inconsistencies extend beyond just those systems that interface with HCS. For example, FAA has hired a contractor to write an interface so that the Center TRACON Automation System (CTAS) can talk to the Automated Radar Terminal System (ARTS) IIIE. The cost of this interface is estimated at $1 million. In effect, this interface is a “mini-PAMRI.” Although some of the systems incompatibilities arise from the fact that FAA’s current ATC systems span several generations of computer systems, other incompatibilities are the result of FAA’s failure to adopt and enforce a systems architecture. 
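PAMRI's protocol- and data-conversion role can be sketched as a hub of per-feeder adapters: each adapter converts one feeder's native message into the single format the host accepts, so adding a feeder costs one adapter rather than one interface per consumer. Every message format and field name below is invented for illustration:

```python
# Sketch of a PAMRI-style hub: per-feeder adapters normalize disparate
# feeder messages into one common host format. All formats are invented.

def radar_adapter(msg):
    # hypothetical feeder A reports position as "lat,lon" text
    lat, lon = msg.split(",")
    return {"lat": float(lat), "lon": float(lon)}

def weather_adapter(msg):
    # hypothetical feeder B reports position as a (lon, lat) pair
    lon, lat = msg
    return {"lat": lat, "lon": lon}

ADAPTERS = {"radar": radar_adapter, "weather": weather_adapter}

def to_host_format(source, msg):
    """Convert a feeder message into the host's common format."""
    return ADAPTERS[source](msg)

print(to_host_format("radar", "40.5,-73.9"))
print(to_host_format("weather", (-73.9, 40.5)))
# both yield {'lat': 40.5, 'lon': -73.9}
```

Absent a common format, each new producer-consumer pairing needs its own converter, which is how one-off interfaces such as the CTAS-to-ARTS IIIE "mini-PAMRI" accumulate.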
According to a July 1996 FAA report baselining the ATC data management environment, ATC data inconsistencies have resulted from a lack of data standards and policies across the ATC systems. Systems written in many application programming languages are more difficult and expensive to modify and maintain than systems written in fewer languages. For example, for each language, programming staff must be trained and provided support software (compilers, debuggers, program libraries, etc.), and both the training and the suite of support software must be updated and maintained. A software subarchitecture is essential to standardize the languages to be used and to institutionalize process standards or methodologies for designing, coding, testing, and documenting software projects. Software applications associated with 54 operational ATC systems have been written in 53 programming languages (these 53 include 19 assembly languages). Since most of the ATC languages are obsolete, there is no readily available cadre of newly trained programmers, and current and future maintenance becomes even more difficult and costly. For example, the Automated Radar Terminal Systems (ARTS) are written in Ultra, an obsolete assembly language. Furthermore, no restrictions are currently being placed on application language choices for new systems development. For example, a new system that is currently being developed, the Display System Replacement (DSR), is to be written in three programming languages—Ada, C, and assembly. Ada is not used in any other existing ATC system. AUA officials told us that the five AUA IPTs are primarily using C, C++, and Ada to develop new ATC systems. However, we found three additional languages and several versions of assembly language also being used to develop new ATC systems. Software maintenance is a significant FAA expense. 
To illustrate, the software for the Host Computer System (HCS), its backup—the Enhanced Direct Access Radar Channel (EDARC)—and PAMRI cost $63.6 million annually to maintain. Until a software subarchitecture is developed that is based on a systematic analysis of the needs of current and planned operating environments and defines the languages to be used in developing ATC systems, FAA will continue to experience language proliferation and be faced with difficult and costly software maintenance. FAA plans to migrate its highly proprietary ATC systems to open operating environments. An open environment is one that is based on vendor-independent, publicly available standards. If properly planned and implemented, an open system environment supports portable and interoperable applications through standard services, interfaces, data formats, and protocols. Although the plan to evolve to an open environment is a wise one, important choices have to be made consistently across ATC systems to derive the expected benefits (e.g., portable applications, system interoperability). In particular, the open system standards for the collective system of systems must be carefully and thoroughly analyzed in light of systemwide requirements, and the most appropriate standards must be selected. The rigor associated with developing a systems architecture can ensure such analysis. Currently, this systemwide analysis is not occurring. Instead, most of the IPTs that are implementing open systems standards are doing so independently. Such a nonstandard migration approach may result in different open system options being selected, perpetuating architectural incompatibilities that require additional costs to overcome. For example, future FAA systems are to provide information to controllers through networked workstations. Two open systems protocol standards that IPTs could independently choose for passing information—ethernet and token-ring—are incompatible. 
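The maintenance burden of the language proliferation described above scales with the number of distinct languages in the inventory, since each one implies its own compilers, debuggers, libraries, and trained staff. In the sketch below, the ARTS and DSR entries come from the report; the TMS entry is an assumption based on the ATM profile:

```python
# Sketch: each distinct language across the system inventory implies a
# separate toolchain and training pipeline to keep staffed and current.
# ARTS (Ultra) and DSR (Ada, C, assembly) follow the report; the TMS
# entry is assumed for illustration.

inventory = {
    "ARTS": ["Ultra"],             # obsolete assembly language
    "DSR":  ["Ada", "C", "assembly"],
    "TMS":  ["C", "C++"],
}

def toolchains_required(inventory):
    """Distinct languages, i.e., distinct toolchains to support."""
    return sorted({lang for langs in inventory.values() for lang in langs})

print(toolchains_required(inventory))
# ['Ada', 'C', 'C++', 'Ultra', 'assembly']
```

Scaled up to the 53 languages the report counts across 54 operational systems, this tally is what a software subarchitecture that mandates a small set of languages would shrink.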
Evolution to an open systems environment would also allow FAA to share software among systems with common functionality. For instance, FAA officials told us that 40 percent of the en route flight data processing (FDP) functionality is identical to the oceanic FDP functionality. This 40 percent equates roughly to about 60,000 lines of code. To their credit, FAA officials told us that the oceanic and en route IPTs have agreed to look at opportunities to share software between the replacement systems that perform FDP functions. However, without a guiding systems architecture that specifies specific open systems standards, FAA will likely not develop the oceanic and en route replacement systems that are to perform the FDP functions to common standards, thus precluding the opportunity to share software components. Because it has no complete and comprehensive systems architecture to guide and constrain the ATC systems modernization program, FAA continues to spend nearly $2 billion annually on “stovepipe” systems in an environment where system interoperability is an absolute necessity. To achieve interoperability, FAA is forced to develop and maintain costly system interfaces and incurs higher than need be system development and maintenance costs and reduced systems performance. We recommend that the Secretary of Transportation direct the FAA Administrator to ensure that a complete ATC systems architecture is developed and enforced expeditiously and before deciding on the architectural characteristics for replacing the Host Computer System. In commenting on a draft of this report, DOT and FAA officials generally agreed with our recommendation, which requires FAA to define and enforce a complete ATC-wide systems architecture. 
At the same time, however, the officials stated that (1) FAA’s informal mechanisms for attaining system compatibility (e.g., informal communication among system development teams and circulation of individual system specifications among these teams for review and comment) are sufficient and are working well; and (2) the architectural definition efforts underway within individual development teams and these teams’ parent organizations, once completed, will effectively augment these informal processes. The many examples provided in the report in which FAA incurs added costs to compensate for system incompatibilities arising from the lack of an ATC architecture provide clear evidence that FAA’s informal mechanisms are neither sufficient nor working well; and there is no logical rationale to support or explain FAA officials’ view that the efforts of the individual teams will somehow coalesce into an effective approach to ATC-wide architectural definition and enforcement. It is clear that effectively modernizing a system of systems as technologically complex, expensive, interdependent, and safety-critical as the ATC system requires more than stovepipe architectures linked and enforced by informal communications. Accordingly, we strongly recommend that FAA formally define and enforce an ATC-wide systems architecture. The officials also stated that most of FAA’s legacy systems pre-date the advent of architectural standards, and that it is thus system age rather than FAA’s lack of a systems architecture that is primarily to blame for existing system incompatibilities. As stated explicitly in the report, some incompatibilities exist because some systems pre-date currently available technology and standards. However, other system incompatibilities are the result of FAA’s failure to adopt and effectively enforce a technical architecture. Furthermore, until FAA completes and enforces its systems architecture, similar incompatibilities will recur in new ATC systems. 
The officials also commented that formally prescribed and enforced architectural standards could inhibit product team flexibility and creativity in acquiring ATC systems. They added that while they support the use of standards and are trying to move in that direction, they prefer a less formal approach to standards implementation and enforcement. This position has no merit. A well-planned architecture that is enforced in a thoughtful and disciplined manner ensures compatibility and interoperability among different systems without unduly constraining internal system characteristics. The lack of such an architecture fosters not innovation but incompatibility and waste. FAA’s current approach to ATC architectural development, maintenance, and enforcement is not effective. The office that is responsible for developing and maintaining the NAS architecture (i.e., the logical systems architecture) has no budgetary or organizational authority to enforce it, and no FAA organizational entity is responsible for developing and enforcing an ATC-wide technical architecture. As a result, ATC projects can be funded that do not comply with the ATC logical architecture (i.e., deviations are not supported by a documented waiver justifying the noncompliance) and there is no complete ATC technical architecture. Until FAA assigns a single organizational entity the responsibility and authority needed to develop, maintain, and enforce an ATC logical and technical systems architecture, FAA will not effectively address ATC system incompatibilities. If a complete systems architecture is to be effectively developed, maintained, and enforced, some organizational entity must (1) be assigned the responsibility and be held accountable for doing so, (2) be given sufficient resources to accomplish the task, (3) have expertise in information technology, and (4) have organizational and/or budgetary authority over all systems development and maintenance activities. 
One model for implementing this is embodied in the Clinger-Cohen Act, which requires that major federal departments and agencies establish CIOs that report to the department/agency head and are responsible for developing, maintaining, and facilitating the implementation of systems architectures. FAA does not have an effective management structure for developing, maintaining, and enforcing a logical ATC systems architecture. The Office of Systems Architecture and Investment Analysis, which is under the Associate Administrator for Research and Acquisitions, is responsible for developing and maintaining the logical ATC architecture (i.e., the NAS architecture), and has made good progress over the last 2 years in developing and maintaining one (see chapter 3). However, this office is not responsible for enforcing the logical architecture and cannot enforce it because it has neither organizational nor budgetary authority over the IPTs that develop ATC systems or the units that maintain them. (See figure 4.1 for the Office of Systems Architecture and Investment Analysis’ organizational position in relation to the Administrator, CIO, IPTs, and maintenance activities.) FAA officials say that they use the capital investment planning process to enforce the logical architecture. Under this process, various FAA organizations, including the CIO, evaluate and compare competing NAS projects and choose projects to be funded. Four criteria are considered in scoring competing investment options and deciding among them: (1) sponsor (i.e., user) support; (2) mission importance; (3) technology maturity/NAS architecture conformance; and (4) cost effectiveness. Each criterion carries a standard weighting factor that is to be consistently applied to all proposed projects in producing a project score: sponsor support and technology maturity/NAS architecture conformance each carry a weight of 20 percent, while mission importance and cost effectiveness each carry a weight of 30 percent. 
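The weighted scoring just described can be worked through directly. The weights below are those cited above; the sample criterion scores are hypothetical, chosen to show how a project can score well overall despite poor architectural conformance, which is the enforcement gap at issue:

```python
# Sketch of the capital investment scoring: a weighted sum over four
# criteria. Weights are from the report; the sample scores are invented.

WEIGHTS = {                            # weights in percent
    "sponsor_support": 20,
    "maturity_and_conformance": 20,    # technology maturity/NAS conformance
    "mission_importance": 30,
    "cost_effectiveness": 30,
}

def project_score(scores):
    """Weighted total of criterion scores (each on a 0-100 scale)."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS) / 100

sample = {
    "sponsor_support": 90,
    "maturity_and_conformance": 10,    # nonconforming project
    "mission_importance": 95,
    "cost_effectiveness": 85,
}

print(project_score(sample))
# 74.0
```

Because architectural conformance carries only a 20-percent weight and is blended with technology maturity, a nonconforming project can still post a competitive total, which is why conformance checked only through this scoring does not amount to enforcement.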
According to FAA, projects that do not conform to the NAS architecture can be approved under this process. While deviations from the architecture may sometimes be warranted, the decision to waive the requirement for architectural conformance should be made only after careful, thorough, and documented analysis. FAA’s investment process does not require such analysis. FAA has drafted new acquisition management guidance that modifies the capital investment planning process described above. FAA officials stated that the new process will require that ATC projects conform to the logical architecture and that waivers to this requirement will be granted only with convincing and documented justification. This is not the case. The draft guidance permits each team to choose its investment criteria and does not even require that architectural conformance be among them. As a result, this draft guidance does not constitute an effective approach to architectural enforcement. FAA also lacks an effective management structure for developing, maintaining, and enforcing a technical ATC systems architecture. No organization in FAA is responsible for the technical ATC architecture. Instead, FAA has permitted a “hodge podge” of independent efforts scattered across its ATC modernization organization to emerge with no central guidance and coordination. For example, the Office of Systems Architecture and Investment Analysis is developing systems security guidance and a menu of architectural standards, while other offices have initiated efforts to develop additional technical architecture guidance (see chapter 3). As a result, there is no ATC-wide technical architecture, and it is unlikely that FAA will produce one in the near future. 
Until the authority, responsibility, and resources to develop, maintain, and enforce a complete ATC systems architecture are clearly assigned to a single FAA organizational entity, FAA will continue to build incompatible and unnecessarily expensive and complex ATC systems. We recommend that the Secretary of Transportation direct the FAA Administrator to establish an effective management structure for developing, maintaining, and enforcing the complete ATC systems architecture. Specifically, the Administrator should (1) assign the responsibility and accountability needed to develop, maintain, and enforce a complete ATC systems architecture to a single FAA organizational entity, (2) provide this single entity with the resources, expertise, and budgetary and/or organizational authority needed to fulfill its architectural responsibilities, and (3) direct this single entity to ensure that every ATC project conforms to the architecture unless careful, thorough, and documented analysis supports an exception. Given the importance and the magnitude of the information technology initiative at FAA, we recommend that a management structure similar to the department-level CIO structure prescribed in the Clinger-Cohen Act be established for FAA. In commenting on a draft of this report, DOT and FAA officials generally agreed with our conclusions and recommendations. However, the FAA Deputy Director for Architecture and System Engineering stated that FAA is drafting a revision to its investment management policy that, once approved, will change the capital investment planning process and associated investment decision criteria described in our report. Our review of this draft guidance disclosed that it does not require that every ATC project conform to the logical architecture. Instead, the draft guidance permits each team to choose its investment criteria and does not require that architectural conformance be among them. 
Collects surface observation data from AWOS and ASOS and distributes these data to weather processing and display systems. Provides capability for real-time and nonreal-time monitoring of en route center systems, remote control of equipment and facilities, communications/coordination, and system security. Provides backup air-to-ground radio voice communications service in the event of a failure of the primary or secondary air-to-ground radio system. Provides flight data input/output print capability. Provides display capability that will be replaced by DSR. Provides display capability that will be replaced by DSR. Provides display capability that will be replaced by DCCR, which will in turn be replaced by DSR. Provides display capability that will replace DCC. Provides character and image display capability that will be replaced by DSR. Provides an inter-facility multiplexed data transmission network. Down-Scoped Radio Control Equipment Controls local and remote air-to-ground radios. Makes legal recordings of all voice communications between air traffic controllers and pilots. Enhanced Direct Access Radar Channel Provides a backup to HCS for radar processing, and radar track and display processing. Provides flight data input/output capability by transferring flight data inter-/intrafacility. Provides the processing capability to support AFSS workstations and automated pilot briefings, and maintains a national flight service database. Processes radar surveillance data, associates flight plans with tracks, processes flight plans, performs conflict alerts, and processes weather data. Provides capability for data entry and display and provides a standard serial data interface to connect to an RMS. 
Provides capability for real-time monitoring and alarm notification, certification parameter data logging, automatic record keeping and information retrieval, trend analysis, failure anticipation, remote control of equipment and facilities, diagnostic and fault isolation, remote adjustments, and system security. Provides weather data processing and display. Makes legal recordings of all voice communications between air traffic controllers and pilots. National Radio Communications System Provides minimum essential command, control, and communications capabilities to direct the management, operation, and reconstitution of the National Airspace System during a national or local emergency. Provides interfacing capability to HCS. Provides communication network for transmitting data via addressed packets. Provides the capability to request and display NEXRAD weather data. Provides aircraft situation display capability for the controller that is to be replaced by DSR. Controls local and remote air-to-ground radios. Provides FDIO remote print capability. Provides National Radio Communications System emergency communications essential during and after earthquakes, hurricanes, and tornadoes. Provides air-to-ground voice communication services and ground-to-ground voice communication services between controllers, other ATC personnel, and others at the same and different en route centers and other ATC facilities. Provides track generation and traffic display as part of the Oceanic Traffic Planning System. Oceanic system that displays aircraft position based on extrapolations from flight plans. Provides a display showing the location of aircraft across the country that is used for strategic planning purposes. Provides national level management and monitoring of the airspace system, including air traffic flow, aircraft operations, and en route sector and airport utilization and loading. Collects surface observation data from AWOS and automated surface observing system (ASOS) and distributes these data to weather processing and display systems. Provides the capability to transfer data in digital form between the aircraft and the ground or between aircraft by means other than voice communications. Maximizes use of airport capacity by providing decision aids to en route and terminal controllers. This is the area in the en route center where the meteorologists perform their functions using the various systems that provide them with weather information. Supports networked aeronautical telecommunications services within United States domestic and oceanic airspace. Calculates departure sequence, from push-back to time over fix, and includes runway configuration, gate position, aircraft performance, and flow restrictions, for a group of airports. Displays departure sequence lists in towers and at the Traffic Management Unit (TMU). Provides modern ATC workstations to support programs like Weather and Radar Processor (WARP), Automated En Route Air Traffic Control (AERA), CTAS, and Data Link. Provides new controller data entry and display devices. Provides an interface capability with the Host computer system and the Enhanced Direct Access Radar Channel (EDARC). Provides the pilot with convenient access to pre-flight aeronautical and weather information to plan the flight. Also allows pilots to input instrument flight rules (IFR), International Civil Aviation Organization (ICAO), or Visual Flight Rules (VFR) flight plans into the system. Enhanced Direct Access Radar Channel Provides a backup to the host computer system (HCS) for radar processing, and radar track and display processing. Provides national monitoring, prediction, planning, re-routing, “ground hold”, and flow management. Provides satellite-based air-to-ground and ground-to-ground communications capability. 
Provides global navigation signals for use in determining 4-D (dimensional) time/position data. Communications system interface between the En Route Center and external systems. Will process radar surveillance data, associate flight plans with tracks, process flight plans, perform conflict alerts, and process weather data. Provides voice communication services between controllers and aircraft (air-to-ground), and between controllers and other personnel within or among different ATC facilities, such as towers, TRACONs, and Flight Service Stations (ground-to-ground). Provides integration of terminal area weather products and displays. Provides addressable-beacon interrogation and reply. Provides the capability for digital communications between aircraft, various air traffic control functions, and weather databases through a digital interface with the ATC automation system. Provides capabilities for real-time monitoring and alarm notification, certification parameter data logging, automatic record keeping and information retrieval and trend analysis, failure anticipation, remote control of equipment and facilities, diagnostic and fault isolation, remote adjustments, and system security. National Airspace Data Interchange Network (Packet Switch Network) Provides a packet-switched wide-area data communications network which interconnects major ATC facilities. Provides precipitation, wind velocity, and turbulence data sensing and processing. Replaces several systems (the Flight Service Automation System, Aviation Weather Processor, and the Flight Service Data Processing System). Provides line-of-sight ultra high frequency (UHF) bearing and range data to aircraft. Provides a communications system for ATC traffic flow management personnel responsible for management and monitoring of current air traffic flow, aircraft operations, en route sector and airport utilization and loading, and future system utilization. 
ATC facilities that sequence and separate aircraft as they approach and leave busy airports, beginning about 5 nautical miles and ending about 50 nautical miles from the airport, and generally up to 10,000 feet above the ground, where en route centers’ control begins. Host computer system interface device and en route center local area network that establishes a common interface to the host computer and an updated telecommunications infrastructure. Provides digital communications capability using the VHF radio band. The VOR supports determination of aircraft position and airway definition by transmitting azimuth signals. The DME provides slant range between the aircraft and the DME locations. Provides a voice communications system which performs the intercom, interphone, and air/ground voice connectivity and control functions needed for ATC operations in an en route center. Transmits wide area differential corrections for GPS signals. Provides the capability to use GPS for precision runway approach guidance. Collects, processes and disseminates NEXRAD and other weather information to controllers, traffic management specialists, pilots, and meteorologists. It will provide a mosaic product of multiple NEXRAD information to DSR for display with aircraft targets. FAA unique (Custom Raytheon product) FAA unique (Custom Raytheon operating system) FAA unique (Custom Raytheon language) FAA unique (Custom Raytheon data management software) FAA unique (running prototype of HCS operating system) FAA unique (Custom data management software) FAA unique (Custom made Raytheon product based on Motorola 68000 processors) FAA unique (Custom Raytheon operating system) FAA unique (Custom Raytheon language) FAA unique (Custom Raytheon data management software) Sun Solaris (at least version 2.5) PAMRI is a hardware/firmware data/protocol converter. Randolph C. Hite, Senior Assistant Director Keith A. Rhodes, Technical Assistant Director Madhav S. 
Panwar, Senior Technical Advisor David A. Powner, Senior Information Systems Analyst Robert C. Reining, Senior Information Systems Analyst

GAO reviewed the Federal Aviation Administration's (FAA) air traffic control (ATC) modernization effort, focusing on: (1) whether FAA has a target architecture and associated subarchitectures, to guide the development and evolution of its ATC systems; and (2) what, if any, architectural incompatibilities exist among ATC systems and the effect of these incompatibilities. 
GAO found that: (1) FAA lacks a complete systems architecture, or overall blueprint, to guide and constrain the development and maintenance of the many interrelated systems comprising its ATC infrastructure; (2) FAA is developing one of the two principal components of a complete systems architecture, the "logical" description of FAA's current and future concept of ATC operations as well as descriptions of the ATC business functions to be performed, the associated systems to be used, and the information flows among systems; (3) however, FAA is not developing, nor does it have plans to develop, the second essential component, the ATC-wide "technical" description which defines all required information technology and telecommunications standards and critical ATC systems' technical characteristics; (4) the lack of a complete and enforced systems architecture has permitted incompatibilities among existing ATC systems and will continue to do so for future systems; (5) overcoming these incompatibilities means "higher than need be" system development, integration, and maintenance costs, and reduced overall systems performance; (6) because there are no standards for programming languages or open systems, ATC systems' software has been written in many different application programming languages, often exhibiting proprietary system characteristics; (7) this not only increases software maintenance costs but also effectively precludes sharing software components among systems; (8) without a technical architecture specifying the information technology standards and rules, the opportunity to share software will likely be lost; (9) in some cases, system incompatibilities exist because the technology and standards now available to permit system integration and interoperability did not exist or were only emerging when the systems were designed and developed; (10) other system incompatibilities are the result of FAA's failure to adopt and effectively enforce a technical architecture; 
(11) by failing to formulate a complete systems architecture, FAA permits and perpetuates inconsistency and incompatibility; (12) as a result, future ATC system development and maintenance will continue to be more difficult and costly than it need be and system performance will continue to be suboptimal; (13) FAA's management structure for developing, maintaining, and enforcing an ATC systems architecture is not effective; and (14) instead, processes now in place permit the acquisition of architecturally non-compliant systems without special waiver of architectural standards.
The Departments of the Army, the Navy, and the Air Force each have their own educational institutions (academies) to produce a portion of each branch’s officer corps: U.S. Military Academy (West Point, N.Y.), established in 1802; U.S. Naval Academy (Annapolis, Md.), established in 1845; and U.S. Air Force Academy (Colorado Springs, Colo.), established in 1954. The academies are structured to provide a curriculum critical to the development of successful future officers in academic, military, and physical areas of achievement. Additionally, the academies emphasize the moral and ethical development of students through their respective honor codes and concepts. There are approximately 4,000 students enrolled at each of the three service academies at any given time, with each student body comprising four classes. In December 2002, Congress authorized an annual increase of up to 100 students until the total number reaches 4,400 for each academy. In 2002 the Military Academy graduated 968 students; the Naval Academy 977 students; and the Air Force Academy 894 students. Faculty at the U.S. Military Academy and the U.S. Air Force Academy consist predominantly of military officers (79 and 75 percent, respectively), while at the U.S. Naval Academy 59 percent of the faculty are civilians. Table 1 shows the composition of the faculty at the service academies. DOD reports that the total cost to operate all three academies in fiscal year 2002 was $990.7 million. Table 2 shows the reported operating costs and cost per graduate for each academy from fiscal year 1999 through fiscal year 2002. We did not independently verify these costs. Prospective students must meet basic eligibility requirements for appointment to an academy. They must (1) be unmarried, (2) be a U.S. citizen, (3) be at least 17 years of age and not have passed their twenty-third birthday on July 1 of the year they enter an academy, (4) have no dependents, and (5) be of good moral character. 
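As a rough sketch, the five basic eligibility requirements above can be expressed as a single check. The age helper, the reading of "must not have passed their twenty-third birthday on July 1" as ages 17 through 22 on that date, and the modeling of "good moral character" as a boolean are my assumptions for illustration.

```python
from datetime import date

def age_on(birth_date, as_of):
    """Whole years of age on a given date."""
    years = as_of.year - birth_date.year
    # Subtract one year if the birthday has not yet occurred by as_of.
    if (as_of.month, as_of.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years

def meets_basic_eligibility(birth_date, entry_year, unmarried, us_citizen,
                            no_dependents, good_moral_character):
    """Check the five basic requirements for appointment to an academy."""
    july_1 = date(entry_year, 7, 1)
    # At least 17, and the 23rd birthday must not have passed on July 1
    # of the entry year -- read here as being age 17-22 on that date.
    age_ok = 17 <= age_on(birth_date, july_1) <= 22
    return all([unmarried, us_citizen, no_dependents,
                good_moral_character, age_ok])
```

For example, an unmarried 19-year-old citizen with no dependents passes the check, while an applicant whose twenty-third birthday fell before July 1 of the entry year does not.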
After determining eligibility, a candidate submits an application to a preferred academy or academies. Each submitted application is required to include information such as, but not limited to, the candidate’s (1) SAT scores (or American College Testing—ACT—examination scores); (2) high school grade point average (and class rank, if possible); (3) physical aptitude scores; (4) medical examination results; and (5) extracurricular activities. The academies admit those candidates who have secured a nomination and who represent, in the opinion of academy officials, the best mixture of attributes (academic, physical, and leadership) necessary to ensure success at the academies and as military officers. The military academies use a “whole person” method to assess potential candidates in three major areas: (1) academics, (2) physical aptitude, and (3) leadership potential. Each academy uses the same basic approach. Admissions assessments are weighted toward academic scores that include objective tests and high school performance. Leadership potential is measured by assessing athletic and non-athletic extracurricular activities. Subjective assessments of potential candidates in these major areas also contribute to final admissions “scores.” Such assessments include interviews with prospective candidates, teacher/coach evaluations, and analyses of writing samples. Though medical qualification criteria differ among the services, the medical examinations themselves are conducted according to the same standards under a joint DOD Medical Examination Review Board, which manages the medical examination process and records for applicants to all academies. Each academy is authorized to permit up to 60 foreign students to attend at any given time on a reimbursable basis by their country of origin. This number does not count against the authorized student strength of the academies. The admission of foreign students is covered by separate policies and procedures. 
Our review was limited to the policies and procedures for admitting U.S. citizens to the academies. Figure 1 shows the basic steps in the admissions process for all U.S. applicants. Students who are disenrolled from an academy after the start of their third year may be required to complete a period of active duty enlisted service of up to 4 years or may be required to reimburse the federal government for the cost of their education. Those who are disenrolled in their first 2 years do not incur an active service or reimbursement obligation. The United States Military Academy’s admissions evaluation considers academics, leadership, and physical aptitude. Academic considerations include above-average high school or college academic records as well as strong performance on SAT/ACT. Additionally, the Military Academy considers recommendations from English, mathematics, and science teachers. The leadership potential assessment, a more subjective evaluation of character, considers demonstrations of leadership and initiative in sports, school, community, or church activities, along with strong recommendations from faculty and community leaders. Physical aptitude is based on a scored standardized test. This test is made up of pull-ups for men or the flexed-arm hang for women, push-ups, standing long jump, basketball throw, and shuttle run. Figure 2 shows the areas considered and the weights assigned to each area in the U.S. Military Academy’s whole person admissions process. The United States Naval Academy’s admissions evaluation considers academics, leadership, physical aptitude, and technical interest. Academic considerations include above-average high school or college academic records as well as strong performance on SAT/ACT. Additionally, the Naval Academy considers recommendations from English and mathematics teachers. 
Assessment of leadership potential represents a subjective evaluation of character in which the academy considers demonstrations of leadership in terms of extracurricular activities in sports, school, community, or church and strong recommendations from faculty and community leaders. Physical aptitude is based on a scored, standardized test consisting of pull-ups for men or the flexed-arm hang for women, push-ups, standing long jump, basketball throw, and shuttle run. Additionally, the Naval Academy considers the technical interest of a prospective student, which is measured through a questionnaire in the application packet and used to gauge interest in pursuing a technical degree. The intent of this requirement is to admit students who are interested in pursuing technical degrees, specifically nuclear and maritime engineering. The admissions board can also apply further points to an applicant’s overall whole person score based on further consideration of an applicant’s record, including such things as the results of the evaluation form filled out by the Naval Academy representative who interviewed the applicant. Figure 3 shows the areas considered and the weights assigned to each area in the U.S. Naval Academy’s whole person admissions process. The United States Air Force Academy’s admissions evaluation considers academics, leadership, and an assessment by the selections panel. Academic considerations include above-average high school or college academic records as well as strong performance on SAT/ACT. Additionally, the Air Force Academy considers recommendations from English and mathematics teachers. Under leadership potential, the academy considers extracurricular activities in sports, school, community, or church and strong recommendations from faculty and community leaders. Finally, the Air Force Academy Selections Panel makes an assessment of all potential students. 
This assessment is composed of a pass/fail score from the physical aptitude examination and an evaluation by the academy’s liaison officer, made after interviewing the applicant. The physical aptitude examination is made up of pull-ups for men or the flexed-arm hang for women, push-ups, standing long jump, basketball throw, and shuttle run. The leadership potential area and the admissions board provide the more subjective assessments of a potential student. Figure 4 shows the areas considered and the weights assigned to each area in the U.S. Air Force Academy’s whole person admissions process. Candidates are appointed to the academies only by the President of the United States. Before receiving an appointment, all candidates must secure one or more nominations in the following categories: congressional (including nominations by a U.S. senator, representative, or delegate); service-connected (including, among others, children of disabled veterans, enlisted personnel in the active or reserve components, and students from ROTC programs or other designated honor school graduates); and other (including the academy superintendents’ nominees and other nominees to bring the incoming class to full strength). Figure 5 shows the approximate distribution of categories of academy nominations, based on the types and numbers of nominees per category allowed by law. Oversight of the academies is the responsibility of three principal organizations: OUSD/P&R, the service headquarters, and the board of visitors of each academy. According to Department of Defense Directive 1322.22 (Service Academies), OUSD/P&R serves as the DOD focal point for matters affecting the academies and has responsibility to assess academy operations and establish policy and guidance for uniform oversight and management of the military academies. The military departments perform the primary DOD oversight function for their respective academies. 
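Mechanically, the whole person method used by all three academies amounts to a weighted sum of area scores, optionally adjusted by discretionary board points. The weights below are placeholders for illustration only; the actual weights appear in figures 2 through 4 and differ by academy.

```python
def whole_person_score(area_scores, weights, board_points=0):
    """Weighted 'whole person' aggregate; weights are fractions summing to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    base = sum(weights[area] * area_scores[area] for area in weights)
    # e.g., the Naval Academy's admissions board may add further points
    # based on interview evaluations and other record review.
    return base + board_points

# Hypothetical weights -- NOT the academies' actual figures.
example_weights = {"academics": 0.60, "leadership": 0.30, "physical_aptitude": 0.10}
score = whole_person_score(
    {"academics": 80, "leadership": 70, "physical_aptitude": 90},
    example_weights, board_points=5)
```

The weighting toward academics reflects the text's observation that admissions assessments are weighted toward academic scores, while the board-points parameter captures the subjective adjustments each academy layers on top.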
The superintendent of each academy reports directly to the uniformed head of his respective service (the Chiefs of Staff for the Army and the Air Force and the Chief of Naval Operations for the Navy), in accordance with the chain of command for each service. Each academy also has a board of visitors, mandated by law, that is composed of congressional members and presidential appointees. These boards focus attention and action on a wide range of operational and quality of life issues at the academies. As educational institutions, the service academies are also overseen by several nongovernmental organizations that are outside DOD purview. Each academy undergoes periodic review by a higher-education accreditation body associated with its region of the country, usually involving a full review every 10 years with an interim review every 5 years. The accreditation bodies review such areas as core curriculum, strategic planning, self-assessments, diversity of faculty and students, and faculty credentials. The athletic programs of the academies are also subject to periodic certification by the National Collegiate Athletic Association. This body reviews academy athletics in terms of such issues as finances and impact on the education mission of the academies. We limited our review of oversight of the academies to DOD organizations and the boards of visitors. The OUSD/P&R, the services, and the academies’ boards of visitors conduct many oversight activities, but they lack a complete oversight framework. A complete oversight framework includes not only clear roles and responsibilities, but also performance goals and measures against which to objectively assess performance. Such elements embody the principles of effective management in which achievements are tracked in comparison with plans, goals, and objectives and the differences between actual performance and planned results are analyzed. 
Without formal goals and measures, oversight bodies do not have sufficient focus for their efforts and cannot systematically assess an organization’s strengths and weaknesses nor identify appropriate remedies that would permit DOD to achieve the best value for the investment in the academies. In a prior report, GAO concluded that better external oversight of the academies was needed to provide useful guidance and suggestions for improvement. The report recommended that DOD improve oversight of the academies through such measures as establishing a focal point for monitoring academy issues in the Office of the Secretary of Defense and establishing guidance on uniform cost reporting. OUSD/P&R and the services have established clear roles and responsibilities for oversight of the academies, with the former serving as the focal point for issues affecting all academies and the latter having direct oversight authority over their respective academies. DOD established guidance in 1994 for the oversight of the academies and for uniform reporting of costs and resources. OUSD/P&R is directly involved in those policy issues that affect all academies and require DOD-level attention and legislative matters. For example, the office was recently the DOD focal point on the issue of increasing authorized enrollment at the academies from 4,000 to 4,400. With respect to the academies, the office is chiefly concerned with monitoring the degree to which the services are meeting their goals for the accession of new officers. The office also coordinates major studies that affect the academies, such as a November 1999 report on the career progression of minority and women officers. The services are responsible for direct oversight of their respective academies; and the academies are treated similarly to major military commands. 
The superintendents of the academies are general/flag officers who report directly to the uniformed heads of their services (the Chiefs of Staff for the Army and the Air Force and the Chief of Naval Operations for the Navy). In addition to overseeing the academies’ budget through the same approval process as a major command activity, the services oversee the academies’ operations and performance primarily through the academies’ goal of meeting service officer accession targets. The superintendents are responsible for meeting those targets and, in so doing, are given wide discretion in such areas as modifying their specific admissions objectives and the process for matching graduates with service assignments. The service headquarters use a number of mechanisms to oversee academy performance. For example, each service headquarters provides officer accession targets to the academies so that the assignment of graduates and the makeup of incoming student classes can be modified as necessary. In addition to general numbers of officers, each service also has a number of specialty officer fields that need to be filled, and the services also monitor the extent to which the academies will be able to meet those accession goals. The services also directly oversee the academies by requiring the superintendents to report on and discuss their operations. For example, the Air Force uses an annual forum of the most senior Air Force officers to focus on the Air Force Academy with respect to how it is meeting the needs of the operational Air Force. The Navy uses similar senior officer conferences and frequent interaction between the superintendent and Navy headquarters to conduct oversight. The Army uses the U.S. Military Academy Forum, composed of senior Army officers, to address academy operations issues. The superintendents of the three academies also hold annual meetings to discuss issues common to all academies. 
These mechanisms have resulted in such academy actions as curriculum changes to increase the number of technical degree majors, increasing language requirements, and increasing the number of students attending the academies. While OUSD/P&R and the services conduct a wide variety of oversight activity, there are few stated performance goals against which to measure academy operations and performance. Each of the academies has a strategic plan that is focused on providing quality military and professional training and education in order to commission highly capable junior officers. These plans are approved by the service headquarters but are not generally used by the services as benchmarks against which to measure academy performance, and they do not contain specific goals against which to measure student performance. OUSD/P&R is required to assess and monitor academy operations based on the information provided in annual reports it requires from the service secretaries. These reports provide data on various aspects of performance, such as student demographics and trends, student quality, admissions and attrition trends, compensation for students and faculty, leadership and honor systems, and incidents of indiscipline. The reports provide OUSD/P&R and the services with information on current and past performance for academy operations, but apart from officer accession goals, neither OUSD/P&R nor the services have specific stated performance goals against which to compare the information provided in the assessment reports; thus, they do not have an explicit basis for judging the adequacy of academy performance. For example, the data collected by the academies show that graduation rates have increased in the last 10 years; however, there is no stated goal for a graduation rate against which to judge whether this rate of increase is adequate. 
Other data collected by the academies indicate that the percentage of females and minorities has fluctuated over the last 3 years, but apart from admissions targets used by the U.S. Military Academy, there are no stated goals against which to assess these trends. Additionally, academy officials regularly analyze data on student body performance to determine the extent to which admissions standards can be changed to affect student body performance. However, there are no stated goals for student body performance, apart from minimum graduation standards, that might help the academies and other oversight bodies assess overall student performance. The oversight efforts of each academy’s board of visitors are similarly limited by the absence of sufficient performance goals and measures. Each of the academies has a board of visitors, mandated by law and comprised of Members of Congress and presidential appointees, that is outside the DOD chain of command. The boards have a broad legal mandate to inquire into all aspects of academy operations. The boards meet several times a year to be briefed on and discuss academy operations and must conduct an annual visit to their respective academies. During these visits, the boards are briefed by academy staff on such issues as admissions, curriculum, recruiting, athletics, morale and welfare, and construction programs; they also interview students to obtain their perceptions of life at the academies. The boards also address inquiries to academy staff, which are usually followed up at subsequent meetings, and they make suggestions to improve operations or quality of life at the academies. For example, boards of visitors have recommended increased recruiting of qualified minority applicants from various congressional districts and increased surveying of students on quality of life issues. 
The boards submit annual reports to the President on the status of and issues at the academies but do not evaluate academy operations and performance against established performance goals. The boards of visitors do not have dedicated staffs to conduct their work, and though board members may inquire into any aspect of academy operations, the agenda is set largely by the briefings presented to the boards by academy officials. Academy officials with whom we spoke were generally satisfied with the oversight provided by the boards of visitors, though there were concerns at the Air Force Academy about poor attendance by board members during annual visits to the academy. The academies do not grant waivers from academic criteria, nor do they have absolute minimum scores for admission. Under the whole person approach, the academies can admit some applicants whose academic scores are lower than might normally be competitive for admission, but who in their totality (academics, physical aptitude, and leadership potential) are deemed an acceptable risk and qualified to attend an academy. This admissions approach is consistent with the intent of the academies to admit students who also demonstrate leadership and initiative characteristics, which cannot be quantified by purely objective scoring methods. When conducting their admissions processes, the academies do not set absolute minimum scores for academic ability. Rather, they establish a range of scores that would be considered competitive, based on past incoming class performance and academy research on the overall quality of the applicant pool. Prior to 2002, the Air Force Academy set absolute minimum academic scores, and a waiver was required to further consider an applicant who fell below that minimum, no matter how high his or her scores in the leadership area. However, the Air Force Academy no longer has absolute minimums and uses the same competitive range approach as the other academies. 
Under this approach, if an applicant’s academic score is lower than the competitive range guidelines, academy officials have some flexibility to further consider the applicant. Academy officials will re-examine the applicant’s record for information that might provide further insight about his or her academic achievement. For example, officials may contact high school teachers to inquire about the types and difficulty of the classes the applicant has been taking and his or her performance in those classes. Academy officials will also weigh the extent to which the leadership component of the applicant’s whole person score offsets the low academic component. The applicant is considered a risk and is evaluated through a deliberative process by academy officials on the basis of their judgment of whether the applicant is fully qualified and capable of succeeding at that academy. The subjective nature of this approach is consistent with the intent of the whole person concept, by which the academies want to admit students who also demonstrate leadership characteristics that cannot be quantified by purely objective scoring methods. Academy officials do not consider these judgments to constitute a waiver of academic standards, but rather a judicious assessment of the whole person. The process for assessing those applicants whose academic scores are lower than might normally be competitive is nonetheless similar to the former Air Force Academy process for granting waivers. With over 10,000 applicants for each academy each year and about 1,200 students admitted, the academic standards are high. Academy data show that the academic quality of the applicants has remained high over the past 4 years, and the competitive ranges for academic scores used by the academies have remained the same or have increased during this time. However, it is possible for students to be admitted whose academic scores were not as competitive as those of some other applicants who may not have been admitted. 
Senators, representatives, and delegates may each submit up to 10 nominees for each student vacancy available to them per academy. They may choose to designate one as a principal nominee. If an applicant receives a principal nomination and is in all other respects qualified, the academies must admit that applicant, even over an applicant on the same senator’s, delegate’s, or representative’s nomination list with higher academic and/or whole person scores. The other nominated names become alternates for possible admission later in the admissions process. Though some academies award credit for the extent to which an applicant surpasses the standards of the physical aptitude examination, there are minimum standards for the physical test that must be met. None of the academies uses a system of “waivers,” except for medical conditions. An applicant can be waived for a medical condition, based on the deliberation and judgment of DOD medical personnel and the academy superintendent. For example, an applicant who is disqualified due to a vision condition may apply for and receive a waiver, based on subsequent surgical vision correction or determination by the academy superintendent that the applicant would be able to serve on active duty without the vision condition being a problem. In our review of the academy classes that started in 1998 (class of 2002), we found differences among various groups of students in their admissions scores and similar differences in their performance while at the academies, but the differences were not significant in magnitude. In terms of performance after admission to the academies, differences between these student groups and the class as a whole were also not sizable. We reviewed data for the following distinct groups: females, minorities, academy preparatory school graduates, recruited athletes, prior enlisted personnel, and students in the lower 30 percent of the class by academic admissions scores. 
For the class data we reviewed, minorities, academy preparatory school graduates, recruited athletes, and prior enlisted students all had lower average admissions scores than the average for the class as a whole, though these differences varied. The differences between groups and the class as a whole were not sizable, generally falling within 5 percent. Those differences that were statistically significant and outside the 5 percent range were still generally less than 10 percent of the class as a whole. Tables 3, 4, and 5 show the average admissions scores for the selected groups in the class that started in 1998 at the Military, Naval, and Air Force Academies, respectively. Although each academy uses the same fundamental whole person approach, they use different scales to calculate scores. Therefore, the academic and whole person scores cannot be compared across academies. Of those students in the lower 30 percent of the class in terms of academic admissions scores, about 44 percent were recruited athletes, between 25 and 31 percent were minorities, and between 20 and 34 percent were preparatory school graduates. Table 6 shows the percentage of the selected groups making up the lower 30 percent of the classes in terms of their academic admissions scores, by academy. We also found differences in performance after admission to the academies between selected groups and the class as a whole. For example, females at the Naval Academy had a lower graduation rate than the class as a whole, but they had a higher average academic grade point average (cumulative GPA) than the class as a whole and higher average class rank (order of merit). The differences in performance between the selected groups and the class as a whole were not sizable, generally falling within 5 percent. Those differences that were statistically significant and outside the 5 percent range were still generally less than 10 percent of the class as a whole. 
Tables 7, 8, and 9 show how the selected groups performed at the Military, Naval, and Air Force Academies, respectively. See appendix II for further information on comparisons of performance by defined student groups. Some groups—such as minorities, preparatory school graduates, recruited athletes, and students in the lower 30 percent of their class in terms of academic admissions scores—performed at lower levels on average in all categories than the class as a whole, but these differences varied between academies and by category and were not sizable. For example, one of the lowest average academic grade point averages for the groups we reviewed was 2.61 and the average for the class as a whole at that academy was 2.93. To graduate, students must maintain a 2.0 average in both academic and military grades. Similarly, the lowest graduation rate for the class we reviewed was 65 percent for the students in the lower 30 percent of their class in terms of academic admissions scores at one academy. The average graduation rate for the class as a whole was 74 percent. Our analysis of data for the students who entered the academies in 1998 (class of 2002) indicates that admissions scores are generally good predictors of performance at the academies. Of the admissions scores, the academic component of the whole person scores was often the best predictor of academic performance at the academies, and the whole person scores in their entirety were often the best predictors of military performance at the academies. Both academic and whole person admissions scores were good predictors of class rank. In general, whole person admissions scores were better predictors of graduation rate than the academic admissions scores alone. 
Although the service academies receive oversight from a number of organizations and have established guidance for that oversight that includes the reporting of a wide range of data on academy operations, without clear and agreed-upon performance goals, there is no objective yardstick against which to fully measure academy performance and operations, apart from the officer accessions goals currently used. Establishment of such performance goals is consistent with the principles of effective management and would enhance the quality of oversight already performed by OUSD/P&R, the services, and the academy boards of visitors, permitting them to more clearly note those areas in which the academies excel, highlight areas where improvement is warranted, and achieve the best value for the nation’s investment in the academies. To improve DOD oversight of the operations and performance of the service academies, we recommend that the Secretary of Defense direct the OUSD/P&R, in concert with the services, to further enhance performance goals and measures whereby the information required in annual assessment reports can be better evaluated. These performance goals should be developed for each academy and, where appropriate, in common for all academies. The specific goals should coincide with performance elements agreed upon by the services and OUSD/P&R and might include such things as graduation rates, demographic composition of student classes, assessments of officer performance after graduation, and other performance information already collected by the academies, including performance characteristics of various groups of students. In comments on a draft of this report, DOD agreed with our recommendation to further enhance performance goals and measures for the service academies whereby the information required in annual assessment reports can be better evaluated. 
DOD further stated that the Office of the Under Secretary of Defense for Personnel and Readiness (OUSD/P&R) will (1) monitor development of improved goals and measures by the service academies, to include facilitating the development of common performance goals where appropriate and (2) update DOD Directive 1322.22, Service Academies, as required. DOD’s written comments are included in their entirety in appendix III. We are sending copies of this report to the appropriate congressional committees; the Secretaries of Defense, the Army, the Navy, and the Air Force; and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-5559 if you or your staff have any questions concerning this report. Key contributors are listed in appendix V. To assess the extent to which DOD oversees the service academies’ operations and performance, we interviewed officials at the Office of the Under Secretary of Defense for Personnel and Readiness; the Army, Navy, and Air Force headquarters; and the U.S. Military, U.S. Naval, and U.S. Air Force Academies. We reviewed documents on service and DOD oversight criteria and structures, reporting mechanisms, academy strategic plans, academy annual reports on operations and performance, boards of visitors’ minutes and reports, and superintendents’ conference reports. We also attended a U.S. Naval Academy Board of Visitors meeting at the Naval Academy in December 2002 and a U.S. Military Academy Board of Visitors meeting in Washington, D.C., in March 2003. Additionally, we reviewed criteria on the principles of effective management, such as those found in Standards for Internal Control in the Federal Government. 
To assess the extent to which academy applicants are granted waivers from academic admissions criteria, we interviewed officials from the Military, Naval, and Air Force Academies and reviewed documents on admissions policies, standards, and practices. We discussed with academy officials their execution of the whole person approach, including how they assess applicants’ records, the weights applied to the various components of the whole person score (academic, leadership, and physical aptitude), and the justification for points given to various aspects of an applicant’s scores. We also reviewed data from each academy on trends in academic admissions scores. During site visits to each academy, we observed the evaluation of applicant packages for the incoming class of 2007 by academy officials, including how the whole person approach was applied for admissions scores. We also observed meetings of senior officials at each academy where applicants’ records were evaluated and final admissions decisions were made. To assess the extent to which admissions and academy performance scores differ between various groups of students, we analyzed admissions scores and academy performance scores for all students who started at the three academies in 1998 and should have graduated in 2002. This represented the most recent group of students for which complete data were available. We requested and received from each academy a database that included data on both admission scores and information about students’ performance while attending the academy. We did not independently assess data reliability, but we obtained assurances about data completeness, accuracy, and reliability from academy officials responsible for maintaining data at each academy. We analyzed these data separately for each academy since each academy calculated admission scores or performance scores somewhat differently. 
We identified six major groups of students common to all academies: females, minorities, academy preparatory school graduates, recruited athletes, prior enlisted personnel, and students whose academic admission scores fell in the lower 30 percent of the entering class (we chose the latter group in order to capture information on students whose academic admissions scores may have been lower than might normally be competitive). Information specifying a student’s membership in each of these groups was provided in the databases from the academies. To assess differences, we first compared the mean performance scores for each group to the overall mean for each performance measure for the entire class. See appendix II for details on the results of our analysis of the relationships between admissions and performance scores. In addition, we assessed the relationship between admissions scores and performance at the academies by using the whole person admission score and the academic component of the admissions score. We estimated the effects of those scores on four measures of performance for students at the academies: (1) cumulative grade point average (GPA), (2) cumulative military performance average (MPA), (3) order of merit (class standing), and (4) graduation rate. We used cumulative GPA upon graduation as an indicator of academic performance at the academies and military performance averages upon graduation as an indicator of military performance at the academies. Order of merit is a measure of class standing at each academy that combines academic and military grade performance and is a final rank for each graduating student. At both the Air Force Academy and the Naval Academy, order of merit is an actual class rank number. At the Military Academy, however, order of merit could range between 0 and 4.0 and was given on the same scale as grade point averages. 
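The group-versus-class mean comparison described above, combined with the report's later convention of treating as meaningful only those statistically significant differences that exceed 5 percent of the class-wide mean, can be sketched as follows. All scores below are invented for illustration; they are not the actual academy data, and the significance test itself is omitted.

```python
# Sketch of the group-vs-class mean comparison with the 5 percent rule
# described in appendix II. A group mean is flagged only when it differs
# from the class mean by more than 5 percent of that mean.

def exceeds_threshold(group_mean, class_mean, threshold=0.05):
    """True when the group mean differs from the class mean by more than
    `threshold` (expressed as a fraction of the class mean)."""
    return abs(class_mean - group_mean) > threshold * class_mean

# Hypothetical group means; 618 echoes the Naval Academy example in the
# report, where 5 percent of 618 is about 31 points.
class_mean = 618
group_means = {"group A": 600, "group B": 580}

for name, mean in group_means.items():
    status = "flagged" if exceeds_threshold(mean, class_mean) else "within 5 percent"
    print(f"{name}: {status}")
```

With these invented numbers, group A (18 points below the class mean) falls inside the 31-point band while group B (38 points below) falls outside it, mirroring how the tables in appendix II mark only the larger differences.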
For each academy, we analyzed the association of both the academic component scores and whole person admission scores with each of the performance scores using regression models. Relationships between the admissions scores and cumulative GPA, cumulative MPA, and order of merit were estimated using linear regression models. The relationships between these two admissions scores and the likelihood of graduating were estimated using logistic regression models. See appendix II for more details on the results of those analyses. Issues related to alleged sexual assaults at the academies fell outside the scope of our objectives. We conducted our work from October 2002 through May 2003 in accordance with generally accepted government auditing standards. This appendix provides the results of our analyses of both admissions and performance scores for the class of 2002 at the U.S. Military Academy, the U.S. Naval Academy, and the U.S. Air Force Academy. We obtained data from all three service academies that included information on admissions scores (academic and whole person), performance scores while at the academy (cumulative academic grade point average, military performance average, and order of merit), attrition information where applicable, and various demographic characteristics for all students entering each academy in 1998. Table 10 shows the minimum, maximum and average admissions and performance scores for students at each academy. Table 11 shows graduation rates at each academy. Next, we compared the average admissions scores, performance scores, and graduation rates of the six student groups to these overall scores and rates. Tables 12, 13, and 14 show the average admission scores and the four measures of student performance for the overall sample, and for the six student groups, for each of the academies. 
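The linear-regression step described above can be sketched with a minimal one-predictor ordinary least squares fit on invented data. The report's actual models included both the academic and whole person scores as predictors, plus logistic models for graduation; this simplified sketch shows only the core estimation idea.

```python
# One-predictor ordinary least squares fit, analogous in spirit to the
# report's "GPA ~ academic admission score" models. Data are hypothetical.

def ols(xs, ys):
    """Return (slope, intercept) minimizing the sum of squared errors."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# Admission scores rescaled so one unit = 100 points, matching the report's
# "per 100-point increase" reading of the coefficients.
academic = [5.0, 5.5, 6.0, 6.2, 6.8, 7.0, 7.5]
gpa = [2.4, 2.6, 2.8, 2.9, 3.1, 3.2, 3.5]

slope, intercept = ols(academic, gpa)
print(f"predicted GPA change per 100-point score increase: {slope:.2f}")
```

In practice a statistics package would also supply significance tests and handle multiple predictors, but the coefficient interpretation is the same: the predicted change in the performance measure per unit change in the admission score.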
Because we have data for the population of students in this class and there is no sampling error, the standard errors of these estimates are small, and differences that could be considered small in magnitude may in fact be statistically significant. In the tables below, differences that are statistically significant (p<.05) and exceed 5 percent are considered meaningful and noted, though such differences may not be practically significant when compared with class performance requirements overall. For example, at the Naval Academy the overall average academic admissions score is 618; 5 percent of 618 is about 31. Only those group average academic admissions scores that are statistically significant and more than 31 points below 618 are noted with an “a.” Differences that are greater than 10 percent are marked with a “b.” Regression models were used to assess the relationship between admission scores and performance at the three academies. We used linear regression models to examine relationships between admission scores and GPA, MPA, and order of merit. To examine the relationship between admission scores and the likelihood of graduating we used a logistic regression model. Both the academic admission score and the whole person score were included as independent variables in each model. We estimated separate regression models for each academy. The results of these regressions are shown in tables 15 and 16. The tables show both regression coefficients and standardized coefficients. In general, regression coefficients are interpreted as the predicted change in the dependent variable for every unit change in the independent variables. Here, we have scaled the admissions scores so that the regression coefficients in the table can be interpreted as the predicted change in the relevant measure of success for every 100-point increase in the academic or “whole person” admission score. For example, overall at the U.S. 
Air Force Academy, for every 100-point increase in the academic admission score we expect to see a 0.06 increase in GPA. For every 100-point increase in the “whole person” score, we expect to see a 0.18 increase in GPA. Both relationships are statistically significant, meaning that both the academic score and the “whole person” score are significant predictors of cumulative GPA at the academy. We cannot compare the size of these coefficients across the three academies, though, because the academic and “whole person” scores are on different scales. Because the size of the unstandardized regression coefficients is affected by the scale of the independent variables (the admissions scores), we use standardized regression coefficients to compare them. These appear in parentheses in the tables. To estimate these coefficients, all of the coefficients are standardized by dividing the regression coefficient by the ratio of the standard deviation of the success measure to the standard deviation of the admission score. The standardized regression coefficients, therefore, represent the change in the measure of success for each change of one standard deviation in admission scores. Using standardized coefficients, one can conclude that the coefficient that is larger in magnitude has a greater effect on the measures of success. Using the same U.S. Air Force Academy example, we see that while the relationships between both academic and “whole person” scores and GPA are significant, the relationship between academic scores and GPA is actually a stronger one than the relationship between the “whole person” score and GPA. Overall, while the academic scores are often a better predictor of academic performance at the academies (GPA), the “whole person” scores are often better predictors of military performance (MPA). 
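The standardization described above, along with the variance-explained statistic used later in the appendix, can be sketched directly. Dividing the raw coefficient by the ratio SD(success measure)/SD(admission score) is the same as multiplying it by SD(x)/SD(y), which is the form used below. All values are invented.

```python
from statistics import pstdev

def standardized_coef(coef, xs, ys):
    """Standardized coefficient: predicted change in y, in standard
    deviations, per one-standard-deviation change in x.
    Equivalent to coef / (pstdev(ys) / pstdev(xs))."""
    return coef * pstdev(xs) / pstdev(ys)

def r_squared(ys, preds):
    """Share of the variation in ys explained by the model predictions."""
    mean_y = sum(ys) / len(ys)
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    return 1 - ss_res / ss_tot

# Toy check: a raw coefficient of 2.0 where SD(x) = 1 and SD(y) = 2
# standardizes to 1.0.
print(standardized_coef(2.0, [0, 2], [0, 4]))
```

Because standardized coefficients are unitless, they allow the within-academy comparison of academic versus whole person effects that the report makes, even though the two admission scores sit on different scales.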
The academic admissions scores have no effect on MPA at the Military and Air Force Academies, and the whole person scores, not the academic admissions scores, predict the likelihood of graduating at all three academies. We also used the R² statistic to estimate how much of the variation in each performance score can be explained by both academic and whole person admission scores. The admission scores explained about 30 percent of the variation in GPAs at both the Naval and Air Force Academies and about 40 percent of the variation in GPAs at the Military Academy. The admission scores explained between a quarter and a third of the variation in order of merit across the three academies. However, admission scores did not explain as much of the variation in either military performance scores or graduation rates. Therefore, while both types of admission scores are significant predictors of performance at the academy, they explain only between 7 and 40 percent of the variation in performance at the academies, and only a very small percentage of the variability in the likelihood of graduating. Other factors not studied here, such as the military training and academic environment students experience at the academies, may contribute to performance more than just students’ admissions scores do. In addition to the individual named above, Gabrielle M. Anderson, Herbert I. Dunn, Brian G. Hackett, Joseph W. Kirschbaum, Wendy M. Turenne, and Susan K. Woodward also made key contributions to this report. Military Education: DOD Needs to Align Academy Preparatory Schools’ Mission Statements with Overall Guidance and Establish Performance Goals. GAO-03-1017. Washington, D.C.: September 2003. Military Education: Student and Faculty Perceptions of Student Life at the Military Academies. GAO-03-1001. Washington, D.C.: September 2003. DOD Service Academies: Problems Limit Feasibility of Graduates Directly Entering the Reserves. GAO/NSIAD-97-89. Washington, D.C.: March 24, 1997. 
DOD Service Academies: Comparison of Honor and Conduct Adjudicatory Processes. GAO/NSIAD-95-49. Washington, D.C.: April 25, 1995. DOD Service Academies: Academic Review Processes. GAO/NSIAD-95-57. Washington, D.C.: April 5, 1995. DOD Service Academies: Update on Extent of Sexual Harassment. GAO/NSIAD-95-58. Washington, D.C.: March 31, 1995. Coast Guard: Cost for the Naval Academy Preparatory School and Profile of Minority Enrollment. GAO/RCED-94-131. Washington, D.C.: April 12, 1994. Military Academy: Gender and Racial Disparities. GAO/NSIAD-94-95. Washington, D.C.: March 17, 1994. DOD Service Academies: Further Efforts Needed to Eradicate Sexual Harassment. GAO/T-NSIAD-94-111. Washington, D.C.: February 3, 1994. DOD Service Academies: More Actions Needed to Eliminate Sexual Harassment. GAO/NSIAD-94-6. Washington, D.C.: January 31, 1994. Academy Preparatory Schools. GAO/NSIAD-94-56R. Washington, D.C.: October 5, 1993. Air Force Academy: Gender and Racial Disparities. GAO/NSIAD-93-244. Washington, D.C.: September 24, 1993. Military Education: Information on Service Academies and Schools. GAO/NSIAD-93-264BR. Washington, D.C.: September 22, 1993. Naval Academy: Gender and Racial Disparities. GAO/NSIAD-93-54. Washington, D.C.: April 30, 1993. DOD Service Academies: More Changes Needed to Eliminate Hazing. GAO/NSIAD-93-36. Washington, D.C.: November 16, 1992. DOD Service Academies: Status Report on Reviews of Student Treatment. GAO/T-NSIAD-92-41. Washington, D.C.: June 2, 1992. Service Academies: Historical Proportion of New Officers During Benchmark Periods. GAO/NSIAD-92-90. Washington, D.C.: March 19, 1992. DOD Service Academies: Academy Preparatory Schools Need a Clearer Mission and Better Oversight. GAO/NSIAD-92-57. Washington, D.C.: March 13, 1992. Naval Academy: Low Grades in Electrical Engineering Courses Surface Broader Issues. GAO/NSIAD-91-187. Washington, D.C.: July 22, 1991. DOD Service Academies: Improved Cost and Performance Monitoring Needed. 
GAO/NSIAD-91-79. Washington, D.C.: July 16, 1991. Review of the Cost and Operations of DOD’s Service Academies. GAO/T-NSIAD-90-28. Washington, D.C.: April 4, 1990.

Graduates of the service academies operated by the Army, Navy, and Air Force currently make up approximately 18 percent of the officer corps for the nation's armed services. The academies represent the military's most expensive source of new officers. The Department of Defense (DOD) pays the full cost of a student's 4-year education at the academies, and the related cost has increased over the past 4 years. Admission to the academies is highly competitive. The academies use a "whole person" method to make admission decisions. Recent studies by the Air Force raised questions about possible adverse effects of whole person admissions policies on student quality. GAO was asked to review all three service academies and specifically address the extent to which (1) DOD oversees the service academies, (2) applicants are granted waivers of academic standards, and (3) various groups of students differ in admissions scores and academy performance. The Office of the Under Secretary of Defense for Personnel and Readiness (OUSD/P&R), the services, and the academies' boards of visitors conduct considerable oversight of the academies' operations and performance, but they lack a complete oversight framework. A complete oversight framework includes performance goals and measures against which the academies' performance could be better assessed. OUSD/P&R and the services use the number and type of commissioned officers as the primary measure of academy performance. OUSD/P&R requires and receives reports on academy performance from the services. While data submitted in these reports provide perspective on current performance compared with past performance, without stated performance goals and measures, these reports do not offer OUSD/P&R or the services as good an insight into the academies' performance as they could. 
Additionally, though the academy boards of visitors serve as an external oversight mechanism to focus attention on a wide range of issues, they also do not assess the academies' performance against established performance goals and measures. The academies do not grant waivers from academic criteria or have absolute minimum scores for admission. However, under the whole person approach, the academies can admit some applicants whose academic scores are lower than might normally be competitive for admission, but who in their totality (academics, physical aptitude, and leadership) are evaluated by academy officials as being capable of succeeding at the academy. In our review of the academy classes that started in 1998 (class of 2002), we found that despite differences among various groups of students in their admissions scores and similar differences in their performance while at the academies, the differences in performance were not sizable. Some groups, such as females, performed better in some categories than the class as a whole and worse in others. Some groups (minorities, preparatory school graduates, recruited athletes, and students in the lower 30 percent of their class in terms of academic admissions scores) performed at lower levels on average in all categories than the class as a whole. 
Mr. Chairman and Members of the Subcommittee: I am pleased to be here today to assist the Subcommittee in its review of the Commodity Futures Trading Commission’s (CFTC) fiscal year 2000 annual performance plan. Hearings like this one and the one you held last year on CFTC’s 1997-2002 strategic plan continue to be an important means of ensuring that the intent of the Government Performance and Results Act of 1993 (Results Act) is met. As you know, annual performance plans can be an invaluable tool for making policy decisions, improving program management, enhancing accountability, and communicating to both internal and external audiences on how the long-term direction outlined in strategic plans is translated into the day-to-day activities of managers and staff. Successful implementation of a performance-based management system, as envisioned by the Results Act, represents a significant challenge requiring sustained agency attention. My testimony today focuses on five areas in which CFTC could improve its performance plan to make it a more useful tool for congressional and executive branch decisionmakers. Although opportunities exist to improve CFTC’s fiscal year 2000 performance plan, CFTC actions to date clearly show a good faith effort to comply with the Results Act and the Office of Management and Budget (OMB) guidance in developing its plan. In our discussions with CFTC staff, we found CFTC fully committed to meeting both the requirements of the Act and congressional expectations that the plan inform Congress and the public about CFTC performance goals, including how the agency will accomplish these goals and measure results. In addition, the areas in which CFTC could improve its plan are some of the same areas in which we found that many other federal agencies, including federal financial regulators, could improve their plans. 
Specifically, CFTC could improve its plan in the following five areas:
- Performance goals, measures, and targets could provide a clearer picture of intended performance.
- Mission, goals, and activities could be better connected to more fully demonstrate how CFTC will chart annual progress toward achieving its long-term strategic goals.
- Crosscutting efforts could be addressed more fully if CFTC worked with the affected federal agencies to develop performance goals and measures that reflect the nature and extent of their common efforts.
- Strategies and resources used to achieve goals could be discussed in greater detail to better enable congressional and other decisionmakers to judge their reasonableness.
- The means for verifying and validating that performance information is sufficiently complete, accurate, and consistent, as well as the extent to which such information and the means for collecting, maintaining, and analyzing it are reliable, should be discussed.

My comments today apply to the fiscal year 2000 annual performance plan that CFTC prepared for OMB in September 1998. Our assessment of CFTC’s plan was based on knowledge of the agency’s operations and programs; past reviews of CFTC, including a review of its 1997-2002 strategic plan; results of work on other agencies’ performance plans and the Results Act; discussions with CFTC staff; and other information available at the time of our assessment. The criteria we used to determine whether CFTC’s plan complied with the requirements of the Results Act were the Results Act itself; OMB guidance on preparing strategic and performance plans (OMB Circular A-11, Part 2); and GAO guidance on assessing agency performance plans. CFTC, an independent agency created by Congress in 1974, administers the Commodity Exchange Act (CEA), as amended. The principal purposes of the CEA are to protect the public interest in the proper functioning of the market’s price discovery and risk-shifting functions.
In administering the CEA, CFTC is responsible for fostering the economic utility of the futures market by encouraging its efficiency, monitoring its integrity, and protecting market participants from abusive trade practices and fraud. The Results Act is intended to shift the focus of government decisionmaking and accountability away from a preoccupation with the activities completed—such as the reports prepared—to a focus on the results of those activities, such as protecting the economic functioning of the commodity futures and option markets. Under the Results Act, strategic plans are the starting point for setting goals and measuring progress toward them. The Results Act requires virtually every executive agency to develop a strategic plan, covering a period of at least 5 years forward from the fiscal year in which the plan is submitted. In September 1997, CFTC formally submitted its fiscal years 1997-2002 strategic plan to Congress and OMB. This plan established three strategic goals: (1) protect the economic functions of the commodity futures and option markets; (2) protect market users and the public; and (3) foster open, competitive, and financially sound markets. The Results Act also requires a federal agency to prepare an annual performance plan covering the program activities set out in its budget. In establishing the requirement for a performance plan, the Results Act establishes the first statutory link between an agency’s budget request and its performance planning efforts. The performance plan is to reinforce the connections between the long-term strategic goals outlined in the agency’s strategic plan and the daily activities of program managers and staff. Finally, the Results Act requires executive agencies to prepare annual reports on program performance for the previous fiscal year. The performance reports are to be issued by March 31 each year, with the first (for fiscal year 1999) to be issued by March 31, 2000. In each report, the agency is to compare its performance against its goals, summarize findings of program evaluations completed during the year, and describe the actions needed to address any unmet goals.
CFTC’s plan would provide a clearer picture of intended performance if: (1) more performance goals and measures were results-oriented, and goals were provided for all measures and internal management challenges; (2) some performance goals were made self-measuring and others were made more objective; (3) certain performance measures were replaced, restated, or deleted; and (4) baselines were established against which annual targets could be compared. Results-oriented, or outcome, goals and measures provide the clearest picture of intended and actual performance. However, most of CFTC’s performance goals and measures focus on program outputs—such as the number of meetings attended and number of research projects or reports completed. In our testimony before this Subcommittee last year, we highlighted a similar problem with CFTC’s strategic plan. Although we recognize that establishing outcome measures is particularly challenging for regulatory agencies as they move from a focus on the activities they undertake to the results they are trying to achieve, a key shortcoming of CFTC’s performance plan is that it relies on output measures that describe completed activities, not program results. Also, these measures are weighted toward measuring the quantity of completed activities, rather than the quality, cost, or timeliness of performance outcomes. As mentioned earlier, the Results Act is intended to shift the focus of government decisionmaking and accountability away from a preoccupation with completed activities to a focus on the results of such activities. The focus of CFTC’s performance plan on output measures appears to flow from its strategy of deriving performance goals from program activities. For example, one performance goal is to aggressively identify, investigate, and take action against individuals engaged in fraudulent Internet and media activities. This goal is associated with the program activity of monitoring the Internet and other media for fraudulent activities and other possible violations of the CEA.
The measure for the goal is the number of referrals to enforcement authorities generated from Internet and media monitoring—an output of the activity, not an outcome of a program. The accomplishment sections of CFTC’s plan suggest opportunities for developing more results-oriented performance goals. For example, the Enforcement Program accomplishment section discusses the effectiveness of its quick-strike ability—the ability to file injunctive actions quickly after detecting fraud—to, among other things, obtain timely injunctive relief and enhance the possibility that customer funds will be recovered. This accomplishment section describes cases, filed within days or weeks of CFTC’s discovering an illegal activity, that stopped fraud at an early stage and preserved customer funds. CFTC’s performance measures could be made more results-oriented by replacing measures, such as reports on activities related to bringing injunctive actions and sanctioning violators, with more outcome-related measures, such as the percentage of quick-strike cases filed within a certain number of days of starting an investigation that resulted in sanctions and the percentage of funds recovered. CFTC could also learn from the plans of other federal financial regulators that are attempting the transition to results-oriented goals. For example, the National Credit Union Administration is developing new outcome performance goals. One outcome goal of the National Credit Union Administration is to ensure that federally insured credit unions are adequately capitalized. A performance goal is to reduce the number of federally insured credit unions that are undercapitalized by 10 percent, from 372 to 335. CFTC’s plan could also be improved if performance goals were provided for all activities and performance targets as well as for internal management challenges. Currently, the plan has 16 activities for which no performance goals exist. For example, no performance goal exists for the activity of reviewing and overseeing self-regulatory organization audit and financial practices.
Without a performance goal, it is not clear what performance is expected. Also, CFTC’s strategic plan identifies several internal management challenges that the performance plan does not address. These challenges include diminishing resources, recruiting and retaining qualified professionals, remaining abreast of current technology, and remaining educated and informed as innovation changes the industry. To better respond to the intent of the Results Act, CFTC could add agencywide performance goals to the plan to address these challenges or incorporate these challenges in existing performance goals and measures. Although not required by the Results Act, CFTC could redefine some performance goals so that they are self-measuring, thereby reducing the complexity of the plan. Currently, all but 2 of CFTC’s 31 performance goals are stated as abstract goals—that is, as goals requiring that specific performance measures be defined to assess progress toward their achievement. Performance goals that can be redefined so that they are self-measuring generally have one measure or two or more measures that can be combined. For example, the performance goal for the activity of reviewing the adequacy of self-regulatory organization disciplinary actions might be restated in the following way: On an annual basis, review a certain percentage of self-regulatory organization disciplinary actions to ensure compliance with CFTC standards. This approach, which has been taken by other federal financial regulators, such as the National Credit Union Administration and Federal Deposit Insurance Corporation, clearly defines performance expectations. For example, the Federal Deposit Insurance Corporation has the following performance goal: Market 80 percent of a failing financial institution’s assets based on book value at the time of resolution or within 90 days. CFTC’s performance plan could also be improved by reducing the extent to which performance goals require interpretation. 
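Self-measuring goals of the kind illustrated by the NCUA and FDIC examples above reduce progress tracking to simple arithmetic. A minimal sketch of that arithmetic, with the figures taken from the examples quoted in the plans (the function names are hypothetical, for illustration only):

```python
# Illustrative only: evaluating "self-measuring" performance goals.
# Figures come from the NCUA and FDIC examples quoted above; the
# function names are hypothetical.

def reduction_target(baseline: int, reduction: float) -> int:
    """Target count after a fractional reduction, rounded to a whole institution."""
    return round(baseline * (1 - reduction))

def goal_met(handled_in_window: int, total: int, threshold: float) -> bool:
    """True if the share handled within the time window meets the threshold."""
    return handled_in_window / total >= threshold

# NCUA: reduce undercapitalized credit unions by 10 percent, from 372 to 335.
print(reduction_target(372, 0.10))  # 335

# FDIC: market 80 percent of a failing institution's assets within 90 days;
# e.g., 84 of 100 asset pools marketed in time would meet the goal.
print(goal_met(84, 100, 0.80))  # True
```

Because the measure is built into the goal statement, no separate performance measure needs to be defined: the target either is or is not met.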
To the extent possible, goals should not require subjective considerations to dominate measurement. For example, one performance goal is defined as follows: Bring important cases (including matters involving ongoing conduct and complex transactions) aggressively. The performance goal does not define what an important case is or what it means to aggressively bring a case. CFTC’s performance plan has 31 performance goals with 228 performance measures to address them. CFTC could replace, restate, or delete performance measures for certain performance goals to strike a better balance between too few and too many measures and to enhance its ability to assess the progress made in achieving performance goals. First, CFTC could replace certain overlapping performance measures. For example, because each of several measures captures the amount of time it takes to process applications or changes, they could be replaced with one measure that captures the percentage of applications and changes processed in 45 days or less. Second, CFTC could restate or delete certain performance measures, because they do not appear to be clearly related to their performance goals and/or appear to be unduly affected by external factors. For example, the number of active futures and option markets is used to measure, in part, two goals: (1) identify traders who can influence futures prices and (2) determine whether traders are influencing futures prices. The number of active futures markets is determined by futures exchanges and other external factors and has little direct bearing on the two goals. As a result, the measure could be deleted. Third, CFTC could restate or delete certain performance measures, because they may have limitations that preclude them from accurately capturing intended performance and may promote unintended consequences. For example, one of the plan’s performance goals is to conduct important investigations and refer potential violations to other authorities as appropriate.
The performance measures for this goal include the number of documents obtained through subpoena or inspection, number of witnesses from whom testimony was taken, and number of witnesses interviewed. These output-oriented measures could provide an incentive for staff to conduct more interviews, take more testimonies, and obtain more documents than necessary, which could add cost and time to investigations without necessarily contributing commensurately to their success. Although CFTC’s performance plan includes annual targets for performance goals, it could make such information more useful by providing baselines, or a context, for assessing the reasonableness and appropriateness of such targets. As we recently reported, agencies that go beyond the requirements of the Results Act and include baseline or trend data for their goals provide a more informative basis for assessing expected performance. For example, CFTC’s plan sets annual targets for each fiscal year covered by the plan for the number of cease and desist orders, registration sanctions, and trading prohibitions. However, the basis for setting the specific targets and the contributions of these targets to the outcome objective are not readily apparent. Without this contextual information, the reader does not know if CFTC’s output-oriented performance targets are reasonable. Consistent with the Results Act, CFTC’s performance plan attempts to show the relationship between the agency’s annual performance goals and its fiscal years 1997 through 2002 mission and strategic goals. To do so, the plan uses tables that connect each strategic goal to its accompanying set of performance goals, measures, and annual targets. The plan also ranks the agency’s outcome objectives by dollars budgeted, which is a starting point for providing useful information about its priorities. However, the plan could better connect mission, goals, and activities by more fully demonstrating how CFTC will chart annual progress toward achieving its long-term strategic goals.
As we found with other agency performance plans, CFTC’s plan associates one performance goal with multiple program activities and strategic goals. Such associations make it difficult to determine whether all activities are substantially covered or to understand how specific program activities are intended to contribute to CFTC’s strategic goals. For example, the performance goal—assess sanctions that are remedial and deter violators—is associated with three different program activities and three different strategic goals. Moreover, the measures and targets for this performance goal differ with each program activity and strategic goal. Similarly, the performance plan’s presentation of many separate program areas, program activities, and performance goals, measures, and targets makes it difficult to link CFTC’s mission and strategic goals to performance goals across the entire agency. The plan’s 31 performance goals and 228 performance measures support 3 strategic goals and 9 strategic objectives, covering 5 program areas. Although the plan’s ranking of outcome objectives offers a useful perspective on CFTC’s priorities, given the high level of complexity, it is difficult to (1) identify the agency’s key priorities among the many goals and measures, (2) differentiate efforts to meet these priorities, and (3) understand what will be achieved if all the performance goals are met. The Results Act seeks to ensure that crosscutting goals of federal programs are consistent; strategies are mutually reinforcing; and, as appropriate, progress is assessed through the use of common performance measures. OMB guidance tasks performance plans with identifying those performance goals that are being mutually undertaken with other federal agencies in support of programs or activities of a crosscutting nature. CFTC’s plan recognizes the need to address crosscutting efforts.
However, it could more fully address such efforts if the agency worked with the cognizant federal agencies to develop performance goals and measures that better reflect the nature and extent of their common efforts. CFTC’s performance plan could be expanded to include performance goals and measures to more adequately address crosscutting efforts, such as those identified in its budget justification that accompanied its performance plan to OMB. These include CFTC’s participating in the President’s Working Group on Financial Markets; sharing information with other financial regulators; working with the U.S. Department of Agriculture on a risk management education program; contributing to a Department of the Treasury initiative that encourages global financial stability; as well as cooperative enforcement efforts with the Department of Justice, the Federal Bureau of Investigation, the Federal Reserve Board, the Federal Trade Commission, the Securities and Exchange Commission, and the U.S. Postal Inspection Service. CFTC’s performance plan briefly discusses CFTC’s need to work with other U.S. financial regulators through, among other means, the President’s Working Group on Financial Markets. However, the related performance goals and measures do not directly address the type of crosscutting performance that the Working Group was created to address. For example, CFTC’s performance goal covering the Working Group is for CFTC to contribute to the performance of the group. The measure for this goal is the number of meetings attended, and the fiscal year 2000 target is two meetings. As supported by OMB guidance, CFTC could strengthen its performance plan by participating with the members of the Working Group and other federal regulators involved in crosscutting programs to develop common performance goals and measures. 
For example, the continued growth and development of the over-the-counter (off-exchange) derivatives market has raised a number of potential regulatory concerns that affect CFTC and other members of the Working Group. In addition, the potential need to develop a financial markets contingency plan to address the “Year 2000” computer dating problem could involve coordination among CFTC and other federal financial regulators. CFTC’s plan provides important information on how strategies and resources will be used to achieve goals. However, expanding this discussion could better enable congressional and other decisionmakers to judge the reasonableness of CFTC’s strategies and anticipated resource deployment. Consistent with the Results Act and OMB guidance, CFTC’s performance plan attempts to address the strategies that CFTC will use to achieve its performance goals. Although this discussion should cover operational processes, skills, and technologies, CFTC’s discussion focuses on the agency’s operational processes and, to a much lesser extent, on skills and technologies. Aligning its discussion with CFTC’s strategic goals, the plan briefly describes the major activities of each program in relation to its performance goals and measures. In a few cases, the plan also discusses skills or technologies that programs will use in relation to performance goals and measures. The plan could be made more useful by providing additional information on the skills and, if appropriate, technologies used in connection with operational processes to achieve program goals. Also, consistent with the Results Act, CFTC’s performance plan discusses the resources that will be applied to achieve the agency’s performance goals. Using tables and graphics, the plan shows the amount of budget funding and the number of full-time equivalent employees that will be needed to achieve each strategic goal and the individual outcome objectives covering each strategic goal. 
CFTC’s plan could be further improved by describing the resources required to achieve each performance goal. Although required by the Results Act and OMB guidance, CFTC’s fiscal year 2000 performance plan does not describe the procedures that the agency will use to verify and validate that performance information is sufficiently complete, accurate, and consistent. Nor does the plan discuss the extent to which the performance information and the means for collecting, maintaining, and analyzing it are reliable. CFTC’s performance plan should be expanded to address these requirements. In addition, to assess progress toward achieving its goals, CFTC will need to collect information on the over 200 performance measures in its plan. For some of these measures, the amount of information to be collected is voluminous and covers activities across CFTC headquarters and regional offices. Errors can occur in collecting, maintaining, processing, and reporting such information—potentially introducing bias and resulting in inaccurate estimates of program performance. As a result, CFTC should have procedures for ensuring that its performance information is free of significant levels of error and that bias is not introduced. Such procedures can include internal controls over data collection, maintenance, and entry, as well as audits, evaluations, and peer reviews. In summary, Mr. Chairman, it is important to recognize that although CFTC’s performance planning can be further improved, the Results Act anticipated that developing effective plans and planning processes could take several planning cycles. We look forward to continuing to work with Congress and CFTC to ensure that the requirements of the Results Act are met. Mr. Chairman, this concludes my prepared statement. My colleagues and I would be pleased to answer any questions that you or Members of the Subcommittee may have.
GAO discussed the Commodity Futures Trading Commission’s (CFTC) fiscal year (FY) 2000 annual performance plan, focusing on five areas in which CFTC could improve its performance plan.
GAO noted that: (1) the annual performance plans can be an invaluable tool for making policy decisions, improving program management, enhancing accountability, and communicating to both internal and external audiences on how the long-term strategic directions outlined in strategic plans are translated into the day-to-day activities of managers and staff; (2) successful implementation of a performance-based management system, as envisioned by the Government Performance and Results Act, represents a significant challenge requiring sustained agency attention; (3) while opportunities exist to improve CFTC's fiscal year 2000 performance plan, actions to date clearly show a good faith effort by CFTC to comply with the Results Act and the Office of Management and Budget (OMB) guidance in developing its plan; (4) in GAO's discussions with CFTC staff, it found CFTC fully committed to meeting both the requirements of the Act and congressional expectations that the plan inform Congress and the public about CFTC performance goals, including how the agency will accomplish these goals and measure results; (5) in addition, the areas in which CFTC could improve its plan are the same areas in which GAO found that many other federal agencies, including federal financial regulators, could improve their plans; and (6) specifically, CFTC could improve its plan in the following five areas: (a) performance goals, measures, and targets could provide a clearer picture of intended performance; (b) mission, goals, and activities could be better connected to more fully demonstrate how CFTC will chart annual progress toward achieving its long-term strategic goals; (c) crosscutting efforts could be addressed more fully if CFTC worked with the affected federal agencies to develop performance goals and measures that reflect the nature and extent of their common efforts; (d) strategies and resources used to achieve goals could be discussed in greater detail to better enable congressional and other
decisionmakers to judge their reasonableness; and (e) the means for verifying and validating that performance information is sufficiently complete, accurate, and consistent, as well as the extent to which such information and the means for collecting, maintaining, and analyzing it are reliable, should be discussed.
Los Angeles has one of the largest federal court operations in the nation, processing more than 16,000 cases per year and serving an area with more than 11 million people. In downtown Los Angeles, the District Court operations are split between two buildings—the Spring Street Courthouse and the Roybal Federal Building—that are approximately one-quarter mile apart. The Spring Street building, considered by the court to be the main courthouse in Los Angeles, is more than 65 years old and, according to judiciary and GSA officials, requires major renovations and does not currently meet the security or space needs of the judiciary. By contrast, the Roybal building was constructed in the early 1990s and, according to GSA officials, complied with design and security specifications that were in place at the time it was built. However, inefficiencies occur because the court’s operations are split between these two buildings. Federal courthouse construction projects are prioritized based on urgency scores assigned by the judiciary—the higher the score, the more urgent the project is considered (see table 1). The Los Angeles court has the highest urgency score of any project in the 5-year plan due to the space, security, and operational inefficiencies presented by the Spring Street Courthouse. To address these concerns, GSA and the judiciary prepared a series of feasibility studies looking at different options for accommodating the court’s long-term needs. One option involved constructing a stand-alone building that would consolidate all of the court operations into a single building. GSA and the judiciary also considered constructing a companion building physically connected to the Roybal building. A third alternative that was studied involved the partial or complete demolition of an existing federal building to provide a site for a new courthouse.
According to judiciary and GSA officials, after years of study and debate, these options were not selected because of cost or space limitations. For example, AOUSC noted that a consolidated courthouse would cost approximately $480 million. Currently, GSA is proposing the construction of a new 41-courtroom building, as shown in figure 1, to house district court judges and related operations at a location approximately 6/10 of a mile from the Roybal building. Under this proposal, the judiciary would expand its use of the Roybal building for magistrate and bankruptcy judges and related operations. GSA’s plan also involves consolidating the U.S. Attorneys Office in the Spring Street building, along with other federal agencies and grand jury suites. The briefing slides in appendix I also contain a map showing the locations of these sites. GSA estimates that constructing the new courthouse will cost approximately $400 million. Funding for this project is contingent on multiple appropriations. In fiscal year 2000, the Senate Committee on Environment and Public Works and the House Committee on Transportation and Infrastructure authorized site acquisition and design of the proposed courthouse, and in the following fiscal year Congress appropriated $35 million for this purpose. In fiscal year 2004, the House Committee on Transportation and Infrastructure authorized additional design and construction of the proposed courthouse in Los Angeles. In that same fiscal year, Congress appropriated $50 million for the project and appropriated $314 million in fiscal year 2005. On November 17, 2004, the Senate Committee on Environment and Public Works also authorized the construction of the new courthouse in Los Angeles. The current project proposal would address the judiciary’s need for more space and alleviate some security concerns, but the operational and security concerns related to a split court that contributed to the L.A. Courthouse’s high urgency score would remain. 
More specifically, while Los Angeles’s Spring Street Courthouse received a total score of 85 out of a possible 100 points, making it the most urgent project in the judiciary’s 5-year plan, 50 of these points were related to the trial court being split into two buildings, a situation that the new project would not resolve. The L.A. Courthouse on Spring Street received high scores in all four criteria that the judiciary considers in assigning an urgency score (see fig. 2). Because the L.A. Courthouse ran out of space in 1995, the judiciary assigned the courthouse a score of 19.5 points using its urgency scoring methodology. In addition, court officials projected that seven judges would not have their own courtrooms within 10 years, resulting in 10.5 points for number of judges without courtrooms. The Spring Street building also received the maximum possible scores for security concerns and operational inefficiencies (30 and 25 points, respectively) because the trial court is split between two separate buildings and, according to the judiciary, the Spring Street building lacks a sufficient number of holding cells for prisoners. According to judiciary officials, it is also difficult to keep prisoners separate from judges and the public in the hallways. To address this last problem, the courthouse has colored, numbered lines designed to guide the U.S. Marshals as they lead prisoners from the detention cells to the courtrooms (see fig. 3). However, court officials said that this system is too confusing and difficult to follow through the narrow halls. Furthermore, many of the building’s courtrooms are less than half the size required under the U.S. Courts Design Guide or have major visual obstructions. The current proposal—constructing a new courthouse and expanding the judiciary’s use of the Roybal building—addresses some of the conditions that led to the high urgency score.
For example, it addresses the judiciary’s space constraints by providing additional courtrooms—sized to meet the Design Guide standards—to accommodate the 47 current district and magistrate judges and the 14 additional judges expected by 2011. According to GSA officials, there is also room to build an additional district judge courtroom in the new building and additional magistrate judge courtrooms in the Roybal building to address the judiciary’s projected 30-year needs. In addition, the proposal addresses some of the more serious security and operational inefficiencies associated with the Spring Street building, such as providing additional prisoner holding cells, secure prisoner elevators, and separate, secured hallways for prisoners, judges, and the public. Marshals Service officials also told us that a split court would be acceptable from a security standpoint, provided the Marshals Service security standards are followed. In addition, the court would receive the operational benefits of a new building, and under the current proposal, avoid the major structural deficiencies of using the 66-year-old Spring Street building as a courthouse. For example, according to the judiciary and GSA, the Spring Street building has outdated electrical and plumbing systems and requires a seismic retrofit to meet GSA’s standards. In contrast, the Roybal Federal Building, which was constructed in the early 1990s, was designed to meet modern operational and security requirements. For example, it is connected to the Metropolitan Detention Center, which houses federal prisoners prior to arraignment and trial, via a secure underground passageway, so that prisoners do not have to be led through public areas on their way to and from the Roybal building cell block. 
The current proposal’s major limitation is that it would still result in a split court, even though consolidating the district court into a single building was one of the main priorities in the judiciary’s most recent long-range plan for Los Angeles, published in 1996. Operational and security concerns stemming from a split court led to 50 of the 85 points in the Spring Street Courthouse’s urgency score. For example, the building received the maximum possible security score (30 points) because the trial court was split between two buildings—the Roybal building and Spring Street Courthouse. With the court still split between buildings under the current proposal, related operational inefficiencies and security concerns would remain. According to AOUSC and Marshals Service officials, operational inefficiencies would include the need to continue to transport judges, prisoners, and evidence between buildings; confusion among jurors and attorneys over which facility they should report to; and possible delays, misrouting, and loss of time-sensitive documents (such as restraining orders) as they flow between buildings. A split court would also require duplication of several offices and activities. For example, Marshals Service officials said that a split court would require them to replicate much of their security equipment and contract guards to operate the equipment and protect each building. We noted during our review that the judiciary refined its urgency scoring methodology in March 2002 and gave less weight to split court factors. In the judiciary’s current 5-year plan, 26 projects are scored under the original methodology and 31 are scored under the refined methodology. The L.A. Courthouse was scored under the original methodology and has not officially been rescored. As a result, we use the original methodology to discuss the L.A. Courthouse’s urgency score in this report. 
In September 2004, the Judicial Conference adopted a 2-year moratorium on 42 courthouse construction projects currently listed on the judiciary's 5-year plan. During this moratorium period, AOUSC officials said that they plan to re-evaluate the urgency scoring methodology as part of a larger review of the design guide standards and the courthouse construction planning process. To meet the long-term judiciary and related needs in Los Angeles, the federal government will likely incur additional construction and operational costs beyond the estimated $400 million for the new courthouse. These funds are designated for costs associated with the proposed courthouse, including the site acquisition and the design and construction costs. However, GSA recognizes that in recent years other courthouse construction projects have had cost escalations. Cost escalations may occur because of planning or design problems, such as changes in the scope or specific design elements in a project, or they may be the result of changes outside of the control of the planners, such as increases in the cost of labor or particular construction materials, such as steel. GSA has initiated actions intended to mitigate this problem, including improving the design modeling process and more closely reviewing project changes during construction. Nevertheless, GSA acknowledges that a potential still exists for all courthouse projects, including the L.A. Courthouse, to incur future escalation in construction costs. In addition to construction costs for the new courthouse, GSA has indicated that additional funds will be needed for construction related to the long-term space needs of the judiciary and other related agencies in Los Angeles. Preliminary estimates from GSA show that these additional costs may exceed $100 million. 
Specifically:

To accommodate the anticipated need for additional magistrate judge courtrooms, GSA told us that it will need to build four additional magistrate courtrooms in the Roybal building to increase the total number of magistrate courtrooms from 16 to 20. GSA has estimated the cost of this renovation to be approximately $10 million.

Once the District Court moves out of the Spring Street Courthouse and into the new courthouse, GSA said that it will need to renovate the Spring Street building to convert courtrooms into office space for U.S. Attorneys and other federal agencies. The costs for this project are not currently known, but a 1997 GSA study estimated the cost to be approximately $77 million in 2003 dollars. However, according to GSA, the Spring Street building will require major renovations, whether the judiciary or other federal agencies use it.

GSA estimates the costs associated with future expansion in the Roybal building and the new courthouse needed to meet expected judiciary space needs by 2031 to be $21 million. According to GSA, this expansion, if necessary, would involve constructing six additional magistrate courtrooms and judges’ chambers in the Roybal building and one district courtroom and judge’s chambers in the proposed new courthouse.

GSA and judiciary officials have also told us that there will likely be additional operational costs associated with constructing a new courthouse, although the extent of these costs is currently unknown. These officials indicated that there will be moving expenses for the judiciary to relocate to the new courthouse as well as to place all the magistrate judges in the Roybal building. According to GSA officials, the judiciary may also need to lease offsite parking spaces to accommodate court needs, although the total number of parking spaces needed, if any, is unknown at this time.
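The known components of these additional construction costs can be rolled up with simple arithmetic. The figures come from this report; the script itself is only an illustrative check of the "may exceed $100 million" estimate.

```python
# Roll-up of GSA's preliminary estimates for construction costs beyond the
# $400 million courthouse, in millions of dollars (figures from the report).
estimates = {
    "Roybal renovation (4 magistrate courtrooms)": 10,
    "Spring Street conversion (1997 study, 2003 dollars)": 77,
    "expansion to meet judiciary needs by 2031": 21,
}

known_total = sum(estimates.values())
print(f"known preliminary estimates: ${known_total} million")  # 108
print(known_total > 100)  # consistent with "may exceed $100 million"
```

Note that several items (tenant relocation, duplicated offices, leased parking) carry no estimate at all, so the actual total could be higher still.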
In addition, in order to accommodate additional magistrate courtrooms in the Roybal building, GSA officials indicated that there may be a need to relocate some of the existing federal tenants to leased space or to another federal building in downtown Los Angeles. Judiciary officials in Los Angeles also expressed concerns about additional operational costs that would be incurred as the result of a split court. According to the judiciary, some of the office space and/or staff that would be duplicated in both the new courthouse and the Roybal building include the clerk’s office, pretrial services, jury assembly, Marshals Service, and the U.S. Attorneys Office. The additional costs associated with duplicating these offices are unknown at this time, in part because even a consolidated courthouse, given its larger size, would require a larger staff and more equipment. However, judiciary officials also acknowledge that a split court would result in higher costs due to operational inefficiencies, including additional travel time between buildings for movement of staff, evidence, and prisoners. We provided AOUSC, GSA, and the Department of Justice with draft copies of this report for their review and comment. AOUSC and GSA provided technical clarifications, which were incorporated as appropriate. The Marshals Service, which is part of the Department of Justice, said that it did not have any comments on the draft. We are providing copies of this report to the appropriate congressional committees, AOUSC, GSA, and the Marshals Service. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me on (202) 512-2834, or at goldsteinm@gao.gov, or David Sausville, Assistant Director, on (202) 512-5403, or sausvilled@gao.gov.
Other contributors to this report were Keith Cunningham, Jessica Lucas-Judy, Susan Michal-Smith, Alwynne Wilbur, and Dorothy Yee.

Proposed Los Angeles Courthouse Project
Briefing to the Committee on Environment and Public Works, Subcommittee on Transportation and Infrastructure, September 23, 2004

Introduction

Los Angeles has one of the largest federal court operations in the nation, processing approximately 16,000 cases per year and serving an area with more than 11 million people. The U.S. District Court in Los Angeles is ranked as the highest priority project in the judiciary’s 5-year construction plan based on its high urgency score, a measure of a court’s space, security, judges impacted, and operational deficiencies. The Los Angeles courthouse project could be one of the most expensive projects in the federal government’s multibillion-dollar courthouse construction program. The judiciary uses its 5-year plan to prioritize requests for new courthouse projects to Congress and to GSA, the federal government’s central agency for real property operations.

The need for the project stems from the growth of the court, the inefficiencies caused by operating a split court (that is, a court that has functions housed in multiple buildings in a city), and the fact that the Spring Street building is 66 years old: it requires major renovations and does not meet today’s security needs. According to the judiciary’s plan, one of the court’s main priorities in Los Angeles was to consolidate district court operations (i.e., district judges, magistrate judges, and the district court clerk’s office) into one building.

Because of the project’s significance, GAO was asked: 1. To what extent does the current Los Angeles courthouse project proposal address the underlying conditions that led to Los Angeles’s high urgency score? 2. What construction and other costs, if any, may be required to meet judiciary and related needs in Los Angeles?

To answer these questions, we inspected the current and planned sites for the U.S. District Court, Central District of California, Los Angeles; interviewed judges and officials from that court as well as officials from the Administrative Office of the U.S. Courts (AOUSC), the General Services Administration (GSA), and the U.S. Marshals Service (USMS); and reviewed key documents, including urgency score criteria, planning studies, prospectuses, and other budget data. We conducted our work in Los Angeles, California, and Washington, D.C., from June through September 2004, in accordance with generally accepted government auditing standards.

GSA’s current proposal to construct a new building, while continuing to use the existing Roybal Building, would address the judiciary’s need for space and alleviate some security concerns. However, the operational and security concerns related to a split court that resulted in a high urgency score would remain. To meet the long-term judiciary and related needs in Los Angeles, the government will likely incur significant construction and operational costs beyond the estimated $400 million for the new courthouse. Preliminary estimates show that these additional costs may exceed $100 million.

The current proposal involves constructing a new courthouse for district court judges and related operations, retaining the use of the Roybal Federal Building for magistrate and bankruptcy judges and related operations, and consolidating the U.S. Attorneys Office in the Spring St. building, along with other federal agencies and grand jury suites. (The U.S. Attorneys Office is related to the judiciary because it is integral to the operations of the U.S. District Court, but it is part of the U.S. Department of Justice.)

GSA estimates the new building will cost about $400 million: $35 million was authorized in 2000 and then appropriated in fiscal year 2001 for site acquisition and design; $50 million was appropriated in fiscal year 2004 and authorized by the House authorizing committee, but GSA said that it has not been authorized by the Senate; and $314 million was proposed in the President’s budget, included in the fiscal year 2005 House and Senate appropriations bills (H.R. 5025 and S. 2806, 108th Congress), and authorized by the House authorizing committee. This last amount includes construction, site acquisition, design, and management inspection.

Urgency Score for the Los Angeles Court

The current project proposal would address the judiciary’s need for space and alleviate some security concerns, but the operational and security concerns related to a split court, which contributed to the Los Angeles Court’s high urgency score, would remain. The urgency score is based on four criteria. Year out of space: the year in which the building was or is projected to be completely occupied by the district court and related components, as documented in the judiciary’s long-range facilities plan or as determined by the Circuit Judicial Council. Judges without courtrooms: the number of judicial officers who currently do not have courtrooms or who are projected not to have them over the next 10 years. Security concerns: whether the trial court is split into separate facilities, whether there is a secure prisoner drop-off, and whether there are separate walkways and elevators for prisoners, judges, and the public. Operational considerations: physical building conditions, such as inefficiently designed courtrooms with visual obstructions or operations that are split among locations, that cause significant disruptions to court operations. The Spring St. Courthouse has a total score of 85 out of 100, which is the highest score of any of the projects in the judiciary’s 5-year plan.

The current proposal would address the judiciary’s space constraints by providing enough courtrooms for current judges and those expected by 2011, with room to expand to accommodate six additional magistrate judge courtrooms and one additional district judge courtroom. USMS officials said that a split court, although not ideal, would be acceptable from a security standpoint if its design manuals are followed. (USMS provides security for the federal judiciary, including courthouses, and prisoner transport.) For example, the new building would provide more secure judge and prisoner circulation patterns and increase the number of holding cells. The court would also receive the operational benefits of a new building, avoiding major structural deficiencies (e.g., seismic vulnerability and old electrical systems).

However, the court would remain split between two buildings, even though consolidating the district court into one building was one of the main priorities identified in the judiciary’s plan for Los Angeles. According to the judiciary and the USMS, a split court causes major operational inefficiencies: judges, prisoners, and evidence would need to be transported between buildings, and many offices and activities would likely be duplicated. The split court accounted for 50 of the 85 points the Los Angeles Court received under the judiciary’s urgency scoring methodology (all 30 points for security concerns and 20 of the 25 for operational considerations).

To meet long-term judiciary and related needs in Los Angeles, the government will likely incur additional construction and operational costs beyond the estimated $400 million for the new courthouse, which is designated for the site acquisition, design, and construction costs related to the proposed courthouse. The extent of these additional costs is unknown, but preliminary estimates show that they may exceed $100 million. On all courthouse construction projects, including Los Angeles, there is a potential for future escalation in costs due to design and planning changes during the construction process. According to GSA, cost escalations and scope changes for courthouse projects have been a nationwide concern in recent years, although GSA has initiated actions intended to address this problem.

The preliminary estimates include the following items. Renovation of the Roybal building to accommodate 4 additional magistrate judge courtrooms: $10 million. Renovation of the Spring St. Courthouse into office space for U.S. Attorneys and others: costs unknown at this time (a 1997 GSA study estimated costs of $77 million in 2003 dollars). Future expansion in Roybal and the new courthouse to meet judiciary needs by 2031: $21 million. Moving costs: 40 courtrooms at $10,000 per courtroom, and $3.00-$3.50 per square foot for office space. Leased parking to accommodate judiciary needs at the new building: $180 per space per month (the total number needed, if any, is unknown at this time). Relocation of existing federal tenants in the Roybal building: costs unknown at this time. Redundant court offices and staff in the new courthouse and the Roybal building: costs unknown at this time. In addition to the probation office, five other court and related offices would require staff and/or offices in both the new courthouse and Roybal: the clerk’s office, pretrial services, jury assembly, the U.S. Attorneys Office, and the Marshals Service. The total costs associated with duplicating these offices are unknown at this time.

Although the current proposal addresses the judiciary’s space needs, the security and operational concerns that led to Los Angeles’s high urgency score will remain, and GSA is likely to need significant additional funding to fully address judiciary and related needs in Los Angeles.

General Services Administration: Factors Affecting the Construction and Operating Costs of Federal Buildings. GAO-03-609T. Washington, D.C.: April 4, 2003.
High-Risk Series: Federal Real Property. GAO-03-122. Washington, D.C.: January 1, 2003.
Courthouse Construction: Information on Courtroom Sharing. GAO-02-341. Washington, D.C.: April 12, 2002.
Courthouse Construction: Sufficient Data and Analysis Would Help Resolve the Courtroom-Sharing Issue. GAO-01-70. Washington, D.C.: December 14, 2000.
Courthouse Construction: Better Courtroom Use Data Could Enhance Facility Planning and Decisionmaking. GAO/GGD-97-39. Washington, D.C.: May 19, 1997.
Courthouse Construction: Information on the Use of District Courtrooms at Selected Locations. GAO/GGD-97-59R. Washington, D.C.: May 19, 1997.
Courthouse Construction: Improved 5-Year Plan Could Promote More Informed Decisionmaking. GAO/GGD-97-27. Washington, D.C.: December 31, 1996.
Federal Courthouse Construction: More Disciplined Approach Would Reduce Costs and Provide for Better Decisionmaking. GAO/T-GGD-96-19. Washington, D.C.: November 8, 1995.
General Services Administration: Better Data and Oversight Needed to Improve Construction Management. GAO/GGD-94-145. Washington, D.C.: June 27, 1994.
Federal Judiciary Space: Progress is Being Made to Improve the Long-Range Planning Process. GAO/T-GGD-94-146. Washington, D.C.: May 4, 1994.
Federal Judiciary Space: Long-Range Planning Process Needs Revision. GAO/GGD-93-132. Washington, D.C.: September 28, 1993.
New L.A. Federal Courthouse: Evidence is Insufficient to Suggest that Congress Reconsider Its Approval. GAO/GGD-88-43BR. Washington, D.C.: March 23, 1988.

Since the early 1990s, the General Services Administration (GSA) and the federal judiciary have been carrying out a multibillion dollar courthouse construction initiative to address the judiciary's growing space needs. To plan for and make funding decisions on projects, Congress, the Office of Management and Budget, and GSA have relied on a rolling 5-year plan prepared annually by the judiciary that prioritizes new courthouse projects based on an urgency score. The urgency score is based on the year a courthouse runs out of space, the number of judges without courtrooms, security concerns, and operational inefficiencies. In recent years, the L.A. courthouse had the highest urgency score in the judiciary's 5-year plan. At a cost of approximately $400 million, the new courthouse is expected to be one of the most expensive projects in the federal government's courthouse construction program to date. In light of the project's significance, GAO was asked: (1) To what extent does GSA's current L.A. 
courthouse project proposal address the underlying conditions that led to Los Angeles's high urgency score and (2) what construction and other costs, if any, may be required to meet judiciary and related needs in Los Angeles? The Administrative Office of the U.S. Courts and GSA provided technical comments on this report. GSA's current proposal to construct a new courthouse in Los Angeles, while expanding the judiciary's use of the existing Roybal Federal Building, would address some but not all of the underlying conditions that led to Los Angeles's high urgency score. For example, it would address the judiciary's need for additional space and alleviate some security concerns. There would be space to accommodate the 47 current district and magistrate judges and the 14 additional judges expected by 2011, with room to expand, if needed, for additional judges. The new building would also improve security by providing additional holding cells and separate prisoner walkways and elevators. However, the operational and security concerns related to housing a trial court in multiple buildings (split court) that was a significant factor in Los Angeles's high urgency score would remain. For example, U.S. Marshals Service officials said that a split court would require them to duplicate much of their security equipment and personnel necessary for fulfilling its mission of protecting the courthouses. To meet judiciary and related needs in Los Angeles, the federal government will likely incur additional construction and operational costs beyond the estimated $400 million for the new courthouse. Like other courthouse projects in recent years, GSA officials acknowledge that there is a potential for the L.A. Courthouse to incur future escalation in construction costs due to changes during the design and construction phases, such as increases in raw material and labor costs. 
Furthermore, additional construction costs will also be incurred to meet the judiciary's space needs over the long term. Preliminary estimates by GSA show that these costs may exceed $100 million. For example, GSA will need to build four additional magistrate courtrooms in the Roybal building and renovate the current courthouse to convert courtrooms into office space for the U.S. Attorneys and other federal agencies. GSA also plans a long-term expansion project to construct seven more courtrooms to meet judiciary space needs by 2031. Judiciary officials also acknowledge that a split court would result in additional operational costs due to duplicate offices and staff in the Roybal building and the new courthouse.
Federal funding for highways is provided to the states mostly through a series of formula grant programs collectively known as the federal-aid highway program. Periodically, Congress enacts multiyear legislation that authorizes the nation's surface transportation programs, including highways, transit, highway safety, research, and motor carrier programs. In 1998 Congress enacted TEA-21, which authorized $172.4 billion for the federal-aid highway program from fiscal years 1998 through 2003. The program expired on September 30, 2003, and it has been extended by six short-term extensions, the most recent extending the program until May 31, 2005. During the 108th Congress, both the House and Senate approved separate legislation to reauthorize the federal-aid highway program; however, the reauthorization legislation was not enacted before the adjournment of the 108th Congress. The bill approved by the House authorized $226.3 billion for the federal-aid highway program for fiscal years 2004 through 2009, an increase of about 31 percent over TEA-21, while the bill approved by the Senate authorized $256.4 billion, an increase of about 49 percent. Because both bills contained funding increases, it is likely that the number of federal-aid highway projects will rise in the next several years. FHWA administers the federal-aid highway program and distributes most highway funds to the states through annual apportionments established by statutory formulas contained in law. Once FHWA apportions these funds, they are available to be obligated for construction, reconstruction, and improvement of highways and bridges on eligible federal-aid highway routes and for other purposes authorized in law. About 1 million of the nation's 4 million miles of roads are eligible for federal aid, including the 161,000 mile National Highway System, of which the 47,000 mile Interstate Highway System is a part.
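The percentage increases cited above follow directly from the authorization totals. As a quick arithmetic check (the dollar figures are from the report; the script is only illustrative):

```python
# Authorization totals in billions of dollars, from the report.
tea21 = 172.4   # TEA-21, fiscal years 1998-2003
house = 226.3   # House-approved bill, fiscal years 2004-2009
senate = 256.4  # Senate-approved bill, fiscal years 2004-2009

house_pct = (house - tea21) / tea21 * 100
senate_pct = (senate - tea21) / tea21 * 100

print(round(house_pct))   # about 31 percent
print(round(senate_pct))  # about 49 percent
```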
While FHWA administers the program, the responsibility for choosing projects generally rests with state departments of transportation and local planning organizations. The states have considerable discretion in selecting specific highway projects and in determining how to allocate available federal funds among the various projects they have selected. For example, section 145 of title 23 of the United States Code describes the federal-aid highway program as a federally-assisted state program and provides that the federal authorization of funds, as well as the availability of federal funds for expenditure, shall not infringe on the states’ sovereign right to determine the projects to be federally financed. A highway or bridge construction or repair project usually has four stages: (1) planning, (2) environmental review, (3) design and property acquisition, and (4) construction. FHWA reviews and approves long-term and short- term state transportation plans and programs, environmental documents, and the acquisition of property for all highway projects. However, its role in overseeing the design and construction of projects varies. On selected projects, FHWA exercises what is often considered “full” oversight, meaning that FHWA (1) prescribes design and construction standards, (2) approves design plans and estimates, (3) approves the selection of the contract award, (4) periodically inspects the progress of construction, and (5) renders final acceptance on projects when they are completed. However, relatively few projects are subject to this full FHWA oversight. The last two authorizations, the Intermodal Surface Transportation Efficiency Act of 1991 (ISTEA) and TEA-21, devolved an increasing amount of responsibility to the states. Under current law FHWA exercises full oversight of certain high-cost interstate system projects, while states oversee design and construction on other federal-aid projects. 
The stages of a highway or bridge project and the corresponding state role and FHWA approval actions are shown in figure 1. The types of projects for which FHWA exercises full oversight as compared with state oversight are shown in table 1. According to FHWA, the agency retains the responsibility to oversee all federally-aided highway and bridge projects, including projects for which FHWA does not exercise oversight over the design and construction phases. FHWA conducts oversight of state transportation programs through a variety of means, including process reviews—reviews of state management processes to ensure that states have adequate controls to effectively manage federally-assisted projects. States and FHWA execute stewardship and oversight agreements to define their respective oversight responsibilities. TEA-21 contains an additional oversight requirement for so-called “major projects”—generally those estimated to cost at least $1 billion. Since TEA-21 was enacted in 1998, states must submit finance plans to DOT annually for such projects, based on detailed estimates of the costs to complete the project and on reasonable assumptions about future increases in such costs. FHWA developed guidance that requires states to include in these finance plans a total cost estimate for the project, adjusted for inflation and annually updated; estimates about future cost increases; a schedule for completing the project; a description of construction financing sources and revenues; a cash flow analysis; and a discussion of other factors, such as how the project will affect the rest of the state’s highway program. FHWA approves these plans as a condition of federal aid. As of November 2004, 11 of the 21 current major projects had finance plans. Approved finance plans will be required for the other projects prior to FHWA authorizing federal funds for construction.
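The required finance plan elements listed above amount to a checklist that a state must satisfy before FHWA approves the plan. A minimal sketch of such a checklist follows; the element names paraphrase this report's summary of FHWA's guidance, and the function is a hypothetical illustration, not FHWA's actual review process.

```python
# Hypothetical checklist for the finance plan elements FHWA's guidance
# requires, as summarized in this report (element names are paraphrased).
REQUIRED_ELEMENTS = [
    "inflation-adjusted total cost estimate (updated annually)",
    "estimates of future cost increases",
    "schedule for completing the project",
    "construction financing sources and revenues",
    "cash flow analysis",
    "other factors, e.g., effect on the rest of the state highway program",
]

def missing_elements(plan: dict) -> list:
    """Return the required elements a draft finance plan has not addressed."""
    return [e for e in REQUIRED_ELEMENTS if not plan.get(e)]

# Illustrative draft plan that addresses only the first four elements.
draft = {e: True for e in REQUIRED_ELEMENTS[:4]}
print(missing_elements(draft))  # the last two elements remain outstanding
```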
FHWA forecasts that another 19 major projects, estimated to cost from $34 billion to $60 billion, will be starting over the next several years and will also require finance plans. Over the past several years, we and others have identified problems with FHWA’s oversight of major projects and other large highway and bridge projects. For example, in 1997, we reported that the overall amount of and reasons for cost increases on highway and bridge projects could not be determined because data were not readily available from FHWA or the states. We found, however, on many of the projects for which we could obtain information, that costs had increased, sometimes significantly, and that several factors accounted for the increases. In addition, initial cost estimates were not reliable predictors of a project’s total cost or financing needs because they were developed at the environmental review stage, and their purpose was to compare project alternatives, not to develop reliable cost estimates. We further reported that cost containment was not an explicit statutory or regulatory goal of FHWA’s oversight; therefore, the agency had done little to ensure that cost containment was an integral part of the states’ project management. In our May 2002 testimony before the Highways, Transit, and Pipelines Subcommittee of your Committee, we reported that FHWA had begun to improve its oversight by implementing Congress’ finance plan requirements for major projects and introducing risk-based decision making into its oversight of states’ processes on other projects. However, we also reported that FHWA had not yet developed performance goals or measurable outcomes linking its oversight activities to its business goals, and that goals and strategies for containing costs could improve accountability and make cost containment an integral part of how states manage projects over time. 
Furthermore, we stated that opportunities existed for improving the quality of cost estimating and developing reliable and accurate information on the extent and nature of projects’ cost performance to help direct federal oversight efforts. Our work identified several options for enhancing the oversight of major projects. Reports by DOT’s Office of Inspector General, as well as reviews by state audit and evaluation agencies, have also shown that the escalating costs and management of major projects continue to be a problem. For example, the Inspector General has issued several reports on FHWA’s oversight and stewardship of major projects, such as the Central Artery/Tunnel project in Massachusetts and the Woodrow Wilson Bridge in Virginia and Maryland. More recently, the Inspector General reported signs of improvement in FHWA’s stewardship over major projects but identified improvements needed in eight areas, including developing more reliable cost estimates, managing project schedules better, strengthening efforts to prevent and detect fraud, and refocusing FHWA’s efforts on project management and financial oversight. Partly in response to concerns that we and others have raised, in addition to the provisions Congress enacted in TEA-21, DOT also took further action. In 2000 the Secretary of Transportation established a task force to review oversight mechanisms and processes for major transportation projects across DOT. Among other things, the task force recommended that DOT improve the skills and qualifications of staff overseeing major projects and conduct more rigorous financial reviews of such projects. Although DOT did not formally implement the task force’s recommendations, FHWA responded to the task force report by establishing a major projects team in Washington, D.C., to assist FHWA’s division offices in reviewing financial plans and overseeing major projects and by assigning project oversight managers to each of the major projects. 
In addition, in 2003, DOT proposed new legislation as part of its TEA-21 reauthorization proposal requiring that (1) states submit a project management plan as well as an annual financial plan for any project with an estimated total cost of $1 billion or more or any other project at the discretion of the Secretary; (2) states develop financial plans for any project receiving over $100 million in federal funds; (3) FHWA perform annual reviews of state transportation programs’ financial management and periodic reviews of state project delivery systems for planning and managing projects; and (4) DOT develop minimum standards for estimating project costs and perform periodic reviews of state practices for estimating costs and awarding contracts. This proposal was largely adopted in bills that were separately approved by the House and the Senate in 2004 but that were not enacted before the adjournment of the 108th Congress. To meet the requirements of the Government Performance and Results Act of 1993 (GPRA), DOT establishes goals and outcome measures for the programs under its jurisdiction, including the federal-aid highway program, through its strategic and performance plans. GPRA requires agencies to complete strategic plans in which they define their missions, establish outcome-oriented goals, and identify the strategies that will be needed to achieve those goals. GPRA also requires agencies to prepare annual performance plans to articulate goals for the upcoming fiscal year that are aligned with their long-term strategic goals. The establishment of goals and measures is a valuable tool for guiding an agency’s strategies and resource allocations and for establishing accountability for the outcomes of its day-to-day activities. As our prior work has shown, measuring performance allows organizations to track the progress they are making toward their goals and gives managers crucial information on which to base their organizational and management decisions.
When an agency’s day-to-day activities are linked to outcome measures, these measures can create powerful incentives to influence organizational and individual behavior. In prior work, we found that leading agencies that successfully link their activities and resources also seek to establish clear hierarchies of performance goals and measures. Under these hierarchies, an agency links the goals and outcome measures for each organizational level to successive levels and ultimately to the agency’s strategic goals. Without this link, managers and staff throughout the organization will lack straightforward roadmaps showing how their daily activities can contribute to attaining organizationwide strategic goals. FHWA established measurable, outcome-oriented goals and measures related to cost and schedule performance for the first time in its 2004 performance plan, but FHWA has not effectively implemented these goals and measures in order to improve oversight. Specifically, FHWA has not linked its day-to-day oversight activities to its goals for major projects, and it has not yet used its goals and measures for nonmajor projects to examine the performance of states or particular projects. FHWA also uses estimates developed relatively late in a project’s development as its baseline for measuring its performance on achieving cost and schedule goals; thus, it does not task itself with controlling cost and schedule slippage during the early stages of a project’s development. In December 2000, DOT issued a task force report concluding that a significant effort was needed to improve the oversight of major projects and recommending that DOT incorporate goals for its oversight efforts into its performance plans as well as into the plans of FHWA.
In 2002, we reported that FHWA had not yet developed performance goals or measurable outcomes linking its oversight activities to its business goals and that goals and strategies for containing costs could improve accountability and make cost containment an integral part of how states manage projects over time. FHWA has made some improvements over the past several years in developing goals and performance measures related to cost and schedule performance of federal-aid highway projects. In its fiscal year 2002 performance plan, FHWA included a strategic goal of organizational excellence that had among its many strategic objectives the aim to improve organizational performance. Since that time, from fiscal year 2003 to fiscal year 2005, FHWA’s performance plans have specifically identified under the organizational excellence heading a general oversight goal to improve project oversight and stewardship so as to realize more cost efficient federal-aid funds administration and project management and more effective use of funds in terms of return on investment. In its fiscal year 2004 performance plan, DOT for the first time established goals and outcome measures specifically related to achieving cost and schedule targets for its transportation projects. FHWA incorporated these goals and measures into its performance plan for highway projects, establishing, for the first time, goals and measures for major projects that are outcome oriented and measurable and clearly define containing project costs and schedules as an integral part of FHWA’s oversight mission. Figure 2 shows the goals and associated measures articulated in FHWA’s fiscal year 2004 performance plan. 
While linking day-to-day activities to goals and measures is an important element of implementing goals and measures by ensuring that they are being used as a framework to guide the activities, we found no evidence that FHWA has linked the day-to-day activities of its division offices to its goal and measure for major projects. In our visits to the three division offices that were overseeing a major project, we found a lack of documented goals, strategies, or measures showing how the division offices’ activities supported and furthered the goals and measures articulated in FHWA’s 2004 performance plan. While each division office had developed its own individual unit fiscal year 2004 performance plan, there was no link in these plans between the division offices’ activities and FHWA’s goal and measure for major projects: that is, to meet 95 percent of schedule milestones and cost estimates for major projects or to miss them by less than 10 percent. Furthermore, in these three division offices, the project oversight managers were not specifically tasked, as part of their duties and responsibilities, with implementing or furthering the articulated cost and schedule performance goals for major projects. This absence of a link between activities and goals and measures was in noticeable contrast to the link that the division offices had established between their activities and the three areas of work that FHWA has designated as its “vital few” priorities. FHWA’s vital few priorities, which consist of safety, congestion mitigation, and environmental stewardship and streamlining, are areas that FHWA has determined are key priorities and that it accordingly highlights in its performance plans as areas where the agency has identified performance gaps that must be addressed if FHWA is to be successful. 
Perhaps in line with this emphasis, FHWA has developed a better link between its division offices’ activities related to these vital few priorities and its goals related to these vital few priorities. For example, all seven of the division offices we visited had unit plans that linked their activities to all three of FHWA’s vital few priorities. This link was established through listing specific unit-level activities and measures that were designed to meet unit goals that mirrored the national performance plan’s goals for its vital few priorities. For example, for the vital few priority of safety, FHWA’s fiscal year 2004 performance plan set a performance goal of reducing highway fatalities to no more than 1.38 per 100 million vehicle miles traveled. In its fiscal year 2004 performance plan, one division office tasked itself with five performance objectives to address this national goal, including such objectives as improving accident rates involving roadway departures, increasing the capability of FHWA and state engineers in highway safety design, and reducing pedestrian fatalities. One or more division-level performance measures and several specific activities were identified for each of these five division objectives, and performance expectations set for key division staff identified which of these activities they were responsible for performing. In addition to not linking its activities to its goal for major projects, FHWA has also not yet used its goals and outcome measures to help it identify and correct problems on the vast majority of projects that are not considered major projects. In 2004, FHWA did not develop numerical goals or outcome measures related to nonmajor projects, nor did it assess the cost and schedule performance of projects on a state-by-state or project-by-project basis to gain a clear picture of whether certain states or projects have more cost or schedule overruns than others, a picture that would allow it to target its oversight activities.
Instead, FHWA officials told us that while FHWA’s major projects team recently started developing this state-by-state information, FHWA relies on the division offices to monitor costs of individual contracts and take action as appropriate. However, these officials could not say with certainty whether their division offices were carrying out this monitoring function, or what kinds of corrective measures were being applied. FHWA officials also said that the agency relies on FHWA’s division offices to execute formal oversight agreements with the states to ensure that they are working to control costs. However, none of the oversight agreements of the seven division offices we visited reflected an agreement between FHWA and the states to do this. As we concluded our review, FHWA officials stated that in response to issues we raised, FHWA would begin sharing information with its division offices and begin discussing appropriate solutions or actions the divisions can take to address incidences of cost growth. For fiscal year 2005, FHWA made its cost-related goal for nonmajor projects more specific by adding the outcome measure that the total percentage of cost growth for all construction projects over $10 million will be less than 10 percent above the estimated cost when the project went to construction. FHWA’s preliminary information indicates that the agency is, in the aggregate, meeting its goal; however, sharing information with its division offices about variations in state contract costs could help FHWA target its oversight efforts. For example, FHWA's information also shows that about 1 in 5 of the 492 contracts approved for construction in fiscal year 2003 exceeded the 10 percent threshold in fiscal year 2004. One contract exceeded the threshold by 160 percent. Our analysis of FHWA’s information also shows that some states may be more effectively controlling the costs of federal-aid highway contracts than others. 
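The 10 percent cost-growth measure for contracts over $10 million is straightforward arithmetic, and the state-by-state comparison it enables can be sketched in a few lines of code. The Python below is our illustration only, not FHWA's system; the field names and dollar figures are invented.

```python
# Illustrative sketch (not FHWA's system): applying the 10 percent
# cost-growth measure to hypothetical per-contract data and
# summarizing, by state, how many contracts exceed the threshold.
from collections import defaultdict

THRESHOLD = 0.10  # allowed growth above the estimate when the project went to construction


def cost_growth(award_estimate, current_cost):
    """Fractional growth of current cost over the construction-award estimate."""
    return (current_cost - award_estimate) / award_estimate


def flag_contracts(contracts):
    """Return {state: (contracts over threshold, total contracts)}.

    `contracts` is a list of dicts with hypothetical keys:
    'state', 'award_estimate', 'current_cost' (dollars).
    """
    over = defaultdict(int)
    total = defaultdict(int)
    for c in contracts:
        total[c["state"]] += 1
        if cost_growth(c["award_estimate"], c["current_cost"]) > THRESHOLD:
            over[c["state"]] += 1
    return {state: (over[state], total[state]) for state in total}


contracts = [
    {"state": "A", "award_estimate": 12e6, "current_cost": 15e6},    # +25 percent
    {"state": "A", "award_estimate": 20e6, "current_cost": 21e6},    # +5 percent
    {"state": "B", "award_estimate": 11e6, "current_cost": 11.5e6},  # about +4.5 percent
]
print(flag_contracts(contracts))  # state A: 1 of 2 over; state B: 0 of 1
```

Per-state summaries of this kind are the sort of information that, if shared with division offices, could show where contracts are exceeding the threshold.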
For example, in one state, 6 of 9 contracts over $10 million had exceeded the threshold, while in another state, all of the contracts were under the threshold. While opportunities exist for FHWA to use this information to better target its oversight efforts, it faces challenges in doing so in light of weaknesses recently reported by the DOT Inspector General’s Office in its financial management and reporting processes. FHWA uses cost and schedule estimates developed relatively late in a project’s development—at the point at which the project is ready to go to construction—as a baseline for measuring its performance. We have discussed our concerns with FHWA’s use of later estimates as its baseline measure in earlier work. We have recognized that developing early estimates is difficult; however, we have pointed out that using this late estimate as a baseline for measuring cost growth provides a misleading picture of actual cost growth. This is because cost estimates developed much earlier in the project—for example, at the environmental review stage—are used to make the public investment decision regarding the project. By the time the project goes to construction, a public investment decision effectively has been made, as substantial funds will have been spent on designing the project and acquiring property, commitments will have already been made to the public, and much of the increase in a project’s costs may have already occurred. Moreover, by measuring its performance only after construction begins, FHWA is not tasking itself with or establishing any accountability for controlling cost growth during the part of the process where it exercises direct oversight responsibility. Rather, it has focused its goals on the phases of the project where it exercises less oversight.
This is because while FHWA is responsible for reviewing and approving certain state transportation plans, environmental impact assessments, and the acquisition of property for all projects, its role in approving the design and construction of projects varies. FHWA and its major projects team undertook a number of activities to improve its oversight efforts, which the major projects team documented in its workplan summary (see app. II). Activities undertaken in response to prior concerns included increasing the use of project oversight managers, issuing guidance to states for improving cost estimates throughout the life of projects, developing some information on cost growth of major and other large projects, incorporating more risk assessments into its reviews of state management processes, and attempting to address congressional committee direction to develop a multidisciplinary approach to its oversight. FHWA’s activities in these areas have promising elements and limitations. FHWA has taken some positive steps in its use of project oversight managers for major projects, but it has not yet defined the role of project oversight managers or established agencywide performance expectations for them. Currently, FHWA has assigned project oversight managers to 14 of the 21 active major projects, compared with 7 project oversight managers and 14 major projects in 2002. An FHWA official said that 6 project oversight manager positions would be advertised soon for the other projects and would be filled within 6 months. In August 2002, FHWA issued a core competency framework to identify the technical, professional, and business skills that project oversight managers should possess and to serve as a guide for selecting and developing these managers.
This core competency framework defines the skills and supporting behaviors of project oversight managers in areas such as project and financial management, contract administration, and program laws, and it specifies the desired proficiency level for each competency at each grade level. FHWA has also taken steps to provide guidance and tools for project oversight managers, including an online resource manual and other guidance on reviewing project management plans and finance plans. It also made major projects team staff available to assist the project oversight managers in completing their reviews of such plans, and it sponsored annual meetings for project oversight managers to share experiences. Additionally, FHWA identified external training opportunities to help managers reach or improve their core competency skills. FHWA sent a listing of these opportunities to project oversight managers via email and invited these staff to enlist in courses that interested them. For the future, FHWA’s 2004 major projects team work plan summary envisions a variety of additional activities to improve the effectiveness of project oversight managers, including working with universities and training vendors to establish a skill set development and certification program to ensure that all project oversight managers acquire the same critical skills and to establish a career path for them. According to FHWA, having a career path would make the position of project oversight manager a more attractive career option because it would provide opportunities to work with more challenging projects and provide promotion opportunities so that managers could advance within FHWA while staying in the project management track. However, there are limitations with FHWA’s efforts so far. While the core competencies define the skills that project oversight managers are expected to possess, they do not define what the managers should do to oversee a major project. 
FHWA has not yet articulated the role of project oversight managers or established agencywide performance expectations for them. In prior work, we established that setting performance expectations that are linked to goals is important, as a specific alignment between performance expectations and organizational goals helps individuals see the connection between their daily activities and organizational goals. According to FHWA officials, project oversight managers are assigned to the division offices, and each division office defines what its project oversight manager does. At the three division offices we visited that had major projects and project oversight managers, none had set performance expectations for the project oversight manager that specifically tasked the project oversight manager with achieving the goals and outcome measures for the major projects. Project oversight managers and division officials stressed the project oversight managers’ close, hands-on involvement with the state transportation agencies in the project, on an almost daily basis. For example, project oversight managers and other division office staff help state transportation agencies prepare finance and project management plans, get involved in design, participate in community outreach, and brief local political leaders on major projects. However, the extent to which the activity of the project oversight managers supported DOT’s cost and schedule goals was not clear. Finally, without clear roles, responsibilities, and performance expectations for project oversight managers that are clearly linked to FHWA’s goals, it is unclear what training is most needed to enable project oversight managers to improve their performance and meet the agency’s goals.
Our guidance for assessing training efforts cites the need for training to be an integral part of the strategic and performance planning process and to focus on reaching the agency’s goals, rather than being implemented ad hoc. Currently, the training opportunities FHWA offers to project oversight managers are identified by the major projects team and are voluntary. There is no program of required courses—staff can choose which courses they would like to take, or take no courses at all. In March 2004, the head of FHWA’s major projects team sent an e-mail to the oversight managers advising them of available training. To date, three project oversight managers and one other division office engineer have each volunteered to take one or two courses. FHWA officials told us they eventually plan to establish a certification program for project oversight managers and to introduce a skill set and career path for project oversight managers to make project management a more attractive career option by setting out opportunities for more challenging projects and providing promotion opportunities. However, as of December 2004, FHWA does not have a time frame for implementing its plans, and officials told us these activities would not be implemented without additional resources. In another positive step since 2002, FHWA has provided guidance to state transportation agencies to assist them in applying sound cost estimating practices, including guidance in developing more realistic early cost estimates. However, this guidance is voluntary and covers only major projects, and we found evidence that there is some resistance by FHWA officials to focusing on developing earlier cost estimates. In past work, we have identified problems related to FHWA’s lack of accurate cost estimates for projects. For example, in 1997, we found that cost increases occurred on projects, in part, because the initial cost estimates were not reliable predictors of the total costs or financing needs.
Rather, these estimates were developed for the environmental review—the purpose of which is to compare project alternatives, not to develop reliable cost estimates. In addition, each state used its own methods and included different types of costs in developing its estimates, since FHWA had no standard requirements for preparing cost estimates. Since that time, in 2003, FHWA surveyed its division offices on cost estimating practices in their states and found a variety of approaches to developing cost estimates, including manually compiling estimates from historical data, using estimated quantity or cost per mile calculations, or utilizing various externally or internally developed software; one state reportedly lacked any formal process. Similarly, the American Association of State Highway and Transportation Officials (AASHTO) reported widely varying practices among the states in developing cost estimates. In June 2004, FHWA issued guidance that articulated the importance of developing realistic early cost estimates that would be more stable as a project progresses. Specifically, FHWA’s guidance stated that it is important that care be taken to present an achievable estimate even in the early stages of project development, because logical and reasonable cost estimates are necessary to maintain public confidence and trust throughout the life of a major project. Moreover, the guidance recognized that cost increases over and above the early planning and environmental estimates for major transportation projects have become of increasing concern to congressional and political leaders, federal and state top managers, and auditing agencies. In addition to recognizing the difficulty of developing more accurate cost estimates early in the project, this guidance includes such components as what should be included in an estimate, how it should be approved, factors to include in contingencies, and other information. 
This guidance may help states move towards more consistent and reliable cost estimates during the earlier planning phases when decisions are being made about whether or not to go forward with the project, as well as the project’s potential design and construction. FHWA also established help teams that travel to states that ask for assistance in creating better estimates. For example, in March 2003 FHWA was asked by the Kentucky and Indiana transportation departments for help in reviewing the accuracy and reasonableness of the initial cost estimate to complete the Ohio River Bridges project. This project includes two new bridges over the Ohio River that would link eastern Louisville, Kentucky, and Clark County, Indiana, with additional interchange improvements. FHWA staff helped state officials identify the need for revised cost estimates and more realistic completion dates based on such factors as more realistic right-of-way costs, needed environmental mitigation, revised contingencies, and updated inflation rates. A team of federal and state staff working with consultants recommended that the total cost estimate of the project be revised from $1.6 billion to $2.5 billion and that its expected completion date be revised from 2017 to 2020. State officials accepted these recommendations. While these cost estimating guidance and assistance efforts represent a positive step, it is too early to tell whether they will actually improve cost estimating efforts in most states. Furthermore, there are indications that there is some resistance among FHWA officials and states to emphasizing the importance of more accurate early estimates in practice. For example, some FHWA officials with whom we spoke said that costs cannot be accurately estimated early because issues such as public opposition to a project or unforeseen environmental mitigation procedures that are determined necessary are likely to drive up the cost of a project. 
They said early estimates should not be used as a basis for monitoring project costs. Other FHWA officials believed that the estimate developed at the conclusion of the design phase, as the project is ready for construction, is the only realistic estimate to be used as a baseline. Some FHWA officials told us that resolving concerns about cost estimates is more a matter of managing public expectations, so that the public understands that early estimates are not reliable and cannot be counted on, and that the actual cost will exceed early estimates. AASHTO also believes that accurately estimating costs at the early stages of a project can be a challenge. According to a May 2004 AASHTO report, property acquisition needs and environmental and regulatory requirements may not be fully known early on, becoming clear only as the project progresses. Public input can contribute to additional features being added to projects, known as “scope creep,” and litigation can delay a project, adding to costs because of inflation. We recognize that many challenges exist to developing more realistic early estimates that more accurately reflect the expected cost of a project. However, as we have also reported, relying on estimates prepared as a project is ready to move to construction is too late in the process, as substantial funds may have already been spent on designing the project and acquiring property, and a public investment decision may, in effect, already have been made. FHWA’s guidance recognizes that steps can be taken to take uncertainties into account when developing early cost estimates through such means as developing contingencies. Some states have begun taking action to improve the reliability of early cost estimates. For example, Washington State’s Cost Estimate Validation Process uses project teams to identify risk factors, along with costs and mitigation strategies for each factor. 
These results are then entered into a computer-based modeling program that produces a range and a project cost estimate at the 90 percent confidence level, rather than a single dollar cost estimate. DOT’s proposed legislation for the reauthorization of TEA-21 in 2003 included provisions empowering the Secretary to develop minimum standards for estimating project costs and to perform periodic reviews of state practices for estimating project costs. These provisions were adopted in bills that were separately approved by the House and the Senate in 2004 but that were not enacted before the adjournment of the 108th Congress. According to FHWA officials, if these provisions are enacted, FHWA may be required to move beyond voluntary guidance and issue regulations covering states’ practices for estimating costs. FHWA has started to collect some cost information on some projects, but it still lacks the capability to determine the extent of and reasons for cost growth on projects so that it can better focus its oversight efforts. In 1997 we reported that cost growth occurred on projects, but the extent could not be determined because FHWA’s information system for highway projects could not track total costs over the life of a project. In 2002, we testified that this information was still not available and noted that recent congressional attempts to gather complete and accurate information about the extent of and the reasons for cost growth had met with limited success. In response to these concerns and requests from Congress for data, FHWA has begun to collect project cost data, but it has not substantially improved its ability to monitor total costs on projects. FHWA has undertaken two efforts to collect information on the cost performance of federally financed projects. First, it has started tracking information on cost growth of major projects.
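The risk-based modeling behind Washington State's Cost Estimate Validation Process, described above, is in essence a Monte Carlo simulation over identified risk factors. The Python sketch below illustrates that general technique only; the base cost, risk probabilities, and dollar impacts are invented for illustration, and the state's actual model is more elaborate.

```python
# Minimal sketch of a risk-based (Monte Carlo) cost estimate of the kind
# CEVP-style processes produce. All figures below are hypothetical.
import random


def simulate_cost(base_cost, risks, trials=20000, seed=42):
    """Return the simulated cost at the 90th percentile.

    `risks` is a list of (probability, cost_if_realized) pairs,
    each risk assumed independent of the others.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    outcomes = []
    for _ in range(trials):
        total = base_cost
        for probability, impact in risks:
            if rng.random() < probability:
                total += impact
        outcomes.append(total)
    outcomes.sort()
    return outcomes[int(0.90 * trials)]


# Hypothetical project: $500 million base cost plus three risk factors.
risks = [
    (0.4, 40e6),   # e.g., unforeseen environmental mitigation
    (0.5, 25e6),   # e.g., right-of-way costs above plan
    (0.2, 100e6),  # e.g., litigation delay plus inflation
]
p90 = simulate_cost(500e6, risks)
print(f"90 percent confidence estimate: ${p90 / 1e6:.0f} million")
```

Reporting a percentile of a simulated cost distribution, rather than a single number, makes the uncertainty in an early estimate explicit.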
The small number of these projects allows the tracking to be done manually on a table containing cost and schedule information for key aspects of each major project. Second, FHWA has developed aggregated cost information on construction contracts over $10 million on a state-by-state basis. FHWA has done this by comparing the current estimated costs of all contracts over $10 million in each state with the engineering estimate developed before the contract was awarded. However, as mentioned earlier, the state-by-state information FHWA has developed has not yet been used to measure performance or target its oversight efforts. In spite of this progress, FHWA still does not have the capability to measure the extent of and reasons for cost growth on projects. FHWA’s principal vehicle for tracking project costs is its financial management system. This system is an accounting system, not a project information system, and it tracks federal reimbursements by contract rather than by project. Because one project can include many contracts over many years, and the system does not automatically link contracts to projects, FHWA has little easily accessible information to help it determine the total overall costs of each project, other than the major projects it tracks individually outside of its financial management system. In one case, FHWA division staff told us that because FHWA’s financial management system does not track costs by project, the division developed its own spreadsheet to track project costs. Our recent work confirmed FHWA’s continued difficulty with tracking cost growth on projects. We randomly selected 14 contracts from 7 division offices and asked FHWA’s division offices to identify the project related to each contract. We then requested consolidated cost information on the 14 projects. 
FHWA took an average of more than 3 months—and up to 6 months—to provide us this information for 12 of the 14 projects, and it was unable to provide us complete cost information on the other 2 projects. (See app. I for more details.) The primary reason for FHWA’s difficulty in providing us with this information was that FHWA and state staff could not easily or electronically compile information on a project-by-project basis. For example, one division office said it had to develop and run special transaction reports and manually extract the information we wanted because the support files for the information were at different locations, including a state district office, state transportation agency offices, and comptroller offices. Another told us it had to take the extra step of either combining or separating contracts in order to compile information by project, which resulted in more “hand work.” Another said that files on contracts for one project were kept in different locations depending on the stage of the project that the contract was related to. As a result, quite a bit of staff time was tied up as they attempted to get information from multiple departments of the state transportation agency. FHWA’s continued difficulties in maintaining accurate and complete data to determine the extent of cost growth on projects limit its ability to evaluate why cost growth occurs, identify problems and solutions, target its oversight efforts, and transfer lessons learned. FHWA expects its division offices to use some form of risk assessment to help guide its reviews of state management processes, also known as process reviews. However, risk assessments are not always being used consistently or effectively. As we reported in 2002, FHWA issued a policy in June 2001 encouraging its division offices to prioritize the risks in the transportation programs in their states and to direct their oversight efforts based on these results. 
The policy did not require a specific risk assessment approach but allowed division offices flexibility in developing an approach with their state agencies. FHWA considered its establishment of risk assessment practices at the division offices to be the first of a two-phased approach that would lead to an overall risk management program for FHWA, which was still under consideration within FHWA’s leadership as of November 2004. Each of the seven division offices we visited had developed a risk assessment approach, and five of the seven offices were using these risk assessments to guide their process reviews. However, at two division offices, the results had not been used to direct their process reviews. Staff at one division office we visited reported that although they had been doing risk assessments for a few years, they did not use the results to target state activities for review. Instead, they targeted state activities for review by meeting with state officials to draw up an intuitive list of state operations for process reviews. Similarly, another division office had drafted a risk assessment approach, but it had not yet tried to use it. Division office staff were skeptical that it would yield better results than their own more intuitive approach to identifying which state program operations warranted a process review. In addition, in November 2004 the DOT Inspector General reported that FHWA's risk assessments were voluntary and did not provide a systematic approach for assessing program risks throughout the agency. The Office of Inspector General (OIG) reported that risk assessments varied significantly in the scope and methodology used and in how the assessment results were rated and classified. As a result, some major programs were not reviewed, and risk assessment results were not reliable or comparable across states.
To improve FHWA's process for managing risk, the OIG recommended that FHWA require all division offices to conduct risk assessments and that it issue guidance to division offices to ensure risk assessments are conducted more strategically and with a disciplined methodology. The OIG further recommended that FHWA analyze trends within individual risk assessments to identify agencywide issues and problems and establish a systematic follow-up process to ensure that oversight attention is given to high-risk areas. FHWA was in the process of reviewing and responding to the OIG’s recommendations when we concluded our review. In February 2003, in the Conference Committee Report for the DOT fiscal year 2003 continuing appropriations, the conferees expressed continuing concern about FHWA’s management of major projects, and in particular, a concern that FHWA’s traditional engineering focus had inhibited oversight in such areas as financing, cost control, and schedule performance. Accordingly, FHWA was directed to evaluate the range of disciplines and skills within its staff and to develop a strategy for achieving a more multidisciplinary approach towards its oversight activities, including identifying staff with such skills as financing and cost estimation. However, FHWA’s human capital plan does not incorporate strategies for developing a workforce to support a more multidisciplinary oversight approach. In prior work, we noted that the process of strategic workforce planning addresses two critical needs: (1) aligning an organization’s human capital program with its current mission and programmatic goals; and (2) developing long-term strategies for acquiring, developing, and retaining staff to achieve programmatic goals. To some extent, FHWA’s human capital plan does this for the agency’s current vital few priorities of safety, congestion mitigation, and environmental stewardship. But the agency’s oversight mission is not truly incorporated into the plan.
FHWA’s human capital plan acknowledges the congressional committee direction FHWA received to develop a more multidisciplinary approach to oversight. The plan states that this approach will require the development or acquisition of new skills, specifically in the areas of financing, funds accountability, project-level cost control, schedule performance, process management, and transportation planning. However, FHWA’s human capital plan does not relate these needed skills to the skills possessed by its present workforce, nor does it address how these skills will be developed or acquired. Instead, FHWA’s human capital plan is essentially a plan for replacing individuals in its current key occupations whom it expects to lose through attrition over a 5-year period. Additionally, strategies for developing a multidisciplinary approach were not reflected in FHWA’s guidance to its division offices for developing their workforce plans. This year, FHWA required its division offices and other units to prepare a workforce plan for the upcoming 3-to-5-year period identifying anticipated skill gaps in their workforce. However, the guidance FHWA provided did not mention the multidisciplinary skills that FHWA had identified in its human capital plan. As we have pointed out in prior work, when planning for the future, leading organizations go beyond simply replacing individuals and engage in broad, integrated planning and management efforts that focus on strengthening both current and future organizational capacity. This is particularly important for FHWA, as its traditional engineering focus has drawn congressional committee concern that has led to direction to develop a multidisciplinary approach towards its oversight activities. Similarly, FHWA’s recruiting efforts do not incorporate strategies for developing a more multidisciplinary approach to project oversight. 
Like its human capital plan, FHWA’s recruitment plan for 2003 through 2005 is primarily a plan for hiring to fill the agency’s traditional occupations. The recruitment plan does not set any specific goals or objectives for acquiring the needed multidisciplinary skills that FHWA articulated in its human capital plan, such as project-level cost control, schedule performance, process management, and transportation planning. Under the recruiting plan, the development of a multidisciplinary approach is addressed through FHWA’s professional development program (PDP). FHWA’s PDP, which historically focused on engineers, is a 2-year program that provides developmental assignments and on-the-job and classroom training for entry-level staff. Officials told us that PDP staff are now being given assignments allowing them to develop a broader range of skills at the start of their careers, including assignments to division offices with major projects. They also noted that over recent years FHWA has been hiring fewer engineers for its PDP programs and more staff from other backgrounds. The other principal component of FHWA’s response to congressional committee direction to develop a multidisciplinary approach to project oversight is training, but the agency has made limited progress in developing new courses to bring new skills to its workforce. Only two new training courses were being developed specifically to address needed skills—a course on project cost estimation and a course on project management for managers in division offices. As of November 2004, both courses were being pilot tested. As we noted earlier, it is important for training to be an integral part of an agency’s performance planning process to ensure that it contributes to reaching agency goals. 
However, in its fiscal year 2005 performance plan, FHWA allows divisions the discretion to decide whether to participate in multidisciplinary training for their project oversight managers and professional development program staff. In addition, as noted earlier, FHWA has identified and offered external training courses to project oversight managers, but to date only a few managers and other key division staff have expressed an interest. Even so, FHWA human resources officials we spoke to told us they believed that the congressional committee’s direction to develop a multidisciplinary approach to project oversight has been largely met through their already existing training efforts. These efforts include making courses available on risk assessment techniques, conducting process reviews, and implementing financial management improvements. In addition to FHWA’s limited progress in developing strategies for meeting this congressional committee direction, FHWA has not fully embraced the need to develop a more multidisciplinary approach to oversight. FHWA human resources officials we spoke to believed the concern that FHWA’s workforce is centered on engineering at the expense of other project oversight skills is based on a misperception—that is, a failure to recognize that FHWA engineers take on many other tasks not strictly related to engineering. Furthermore, two division office officials we spoke to in the course of our work questioned the need for FHWA to focus on multidisciplinary skills. One division administrator commented that “multidisciplinary” means that a person can do many things, and therefore that division’s staff was already multidisciplinary. The deputy administrator in another division questioned what was meant by multidisciplinary skills, believing there was no guidance from headquarters on this. FHWA’s efforts to improve oversight face several challenges. 
These challenges stem from the structure of the federal-aid highway program and the culture of partnership that has resulted between FHWA and the states. These challenges also stem from FHWA’s decentralized organization, human capital challenges that mirror those faced throughout government, and FHWA’s perception that it has received conflicting signals on its oversight role over the years. Because these challenges are in large part rooted in FHWA’s organization and culture, and in the structure of the program it administers, they may be difficult to surmount. Because the federal-aid highway program is a state-administered, federally assisted program, it provides states broad flexibility in deciding how to use their funds, which projects to pick, and how to implement them. Furthermore, states are exempt from FHWA oversight on design and construction of many projects. Although DOT has articulated goals and outcome measures for the federal-aid highway program, such as improving safety and reducing the growth of traffic congestion, FHWA must implement and achieve these goals through a program over which it exercises limited control. Our past work across government programs has shown that in programs that have limited federal control, agencies face challenges to ensure that federal funds are efficiently and effectively used. We have also found that these challenges can be successfully overcome, in some cases, by ensuring that the program has clear goals and strong analytical data to measure program results. However, as stated earlier, FHWA’s efforts both to implement its goals and to collect and analyze data on project costs have fallen short. Exacerbating this challenge is the fact that, as our August 2004 report highlighted, the federal-aid highway program does not have the mechanisms to link funding levels with the accomplishment of goals and outcome measures that DOT has articulated. 
We have also reported that although a variety of tools are available to help measure the potential performance outcomes, such as those that measure the costs and benefits of transportation projects, such potential outcomes often do not drive investment decisions, as many political and other factors influence project selections. For example, the law in one state requires that most highway funds, including federal funds, be distributed equally across all of the state’s congressional districts. Consequently, the structure of the federal-aid highway program provides no way to measure how funding provided to the states is being used to accomplish particular outcomes, such as reducing congestion or improving safety, and little assurance that projects most likely to accomplish goals and outcome measures articulated by DOT will be funded. The absence of such a link may make it more difficult for FHWA to define its role, the purpose of its oversight, and what its oversight is designed to accomplish. In August 2004, we reported that policy makers may wish to consider realigning the federal-aid highway program’s design, structure, and funding formulas to take into account the program’s goals and to include greater performance- and outcome-oriented features. We also said that such consideration could include the appropriate roles of the federal and state governments, including what type of administrative structure for overseeing the federal-aid highway program would best ensure that the performance goals are measured and accomplished. Our report provided Congress with a matter for congressional consideration and said that the proposed National Commission to assess future revenue sources to support the Highway Trust Fund might be an appropriate vehicle through which to examine these options. Consistent with the structure of a state-administered, federally assisted program, FHWA has developed a culture of partnership with the states. 
This culture of partnership dates back to the Federal-Aid Road Act of 1916, when the program was funded through a 50 percent federal and 50 percent state matching share. This partnership approach recognizes that states select, plan, and build projects, while FHWA ensures that federal laws and other requirements are followed by maintaining a close, hands-on involvement with state transportation agencies in delivering projects. FHWA and state officials believe that over the years this partnership has helped to build trust and respect between state transportation agencies and FHWA and ensure that priorities such as safety and the environment are addressed, and has resulted in projects being built more economically and efficiently. However, there is a potential downside to this partnership approach. When a project overseer becomes an active partner in a project, an arm’s-length, independent perspective can be lost. In fact, FHWA’s partnership approach to project oversight has failed in the past. FHWA had an oversight manager on the Central Artery/Tunnel Project in Boston, Massachusetts, a project that experienced widely reported cost increases, growing from around $2.3 billion in the mid-1980s to almost $15 billion by 2004. In March 2000, an FHWA task force charged with reviewing FHWA’s oversight of the project found that FHWA had been caught unaware earlier that year when the state revealed an estimated $1.4 billion cost increase. The task force attributed this to FHWA’s overreliance on trust between itself and the state, reporting that FHWA’s partnership approach failed to achieve independent and critical oversight of the project. FHWA officials acknowledged that independence is critical to effective oversight and also acknowledged the need to closely monitor the performance and independence of their project oversight managers on an ongoing basis. However, balancing the role of overseer and partner can be difficult. 
In one state we visited, the division’s oversight manager for a major project had business cards that identified him as a member of the state’s project team—with the project’s logo, Website, and e-mail address printed on the card—rather than as a federal employee. Only his position title on the card, “FHWA Project Administrator,” identified him as an FHWA employee, rather than as a state employee. Ensuring that FHWA oversight personnel maintain an independent perspective is especially critical given the current lack of linkage between FHWA’s performance goals and the roles and expectations of its project managers. Another potential challenge presented by FHWA’s culture of partnership with the states is that it may have prevented FHWA from considering other models for project oversight—including some models in use within DOT. For example, the Federal Transit Administration (FTA) uses competitively selected engineering firms as oversight contractors to monitor major mass transit projects costing over $100 million. During the project’s design, the contractor reviews the grantee’s plan for managing the project and determines whether the grantee has the technical capability to complete the project. Once FTA approves the plan, the contractor monitors the project to determine whether it is progressing on time, within budget, and according to plan. In prior work, we noted that FTA’s project management oversight program benefited both the agency and the grantees carrying out the projects. As another example, DOT established a Joint Program Office to help carry out the Transportation Infrastructure Finance and Innovation Act Program, which provides credit assistance to states and other project sponsors for surface transportation projects. 
This office reviews and evaluates proposed projects for participation in the program, reviews financial plans and progress reports during project construction, monitors the project sponsor’s credit, and coordinates site visits and other oversight activities with DOT field offices. FHWA administers the federal-aid highway program through a decentralized division office structure and delegates much of FHWA’s decisionmaking and program implementation to those offices. Therefore, FHWA’s division administrators enjoy wide latitude to implement their programs. FHWA has had a field office in every state since 1944, and, according to FHWA and state officials, this arrangement gives the people closest to the customer and the issues maximum flexibility to make decisions best suited to particular needs and situations. According to FHWA officials, this decentralization of decisionmaking and program implementation to the division offices increased after 1998 and the passage of TEA-21, which eliminated FHWA’s nine regional offices. While this flexibility may have benefits, decentralization presents challenges for the implementation of a consistent national leadership vision and strategies. These long-standing organizational arrangements may have contributed to such conditions as the lack of uniform performance expectations for project oversight managers, widely varying methods used to develop cost estimates for projects, and different approaches to doing risk assessments. Some limitations are by design. For example, while FHWA’s fiscal year 2005 performance plan discusses multidisciplinary skill training for its oversight managers and professional development program staff, it also specifically grants division administrators discretion over whether to participate. FHWA officials acknowledged the challenges of consistently implementing national-level goals and programs among the many division offices. 
Our 2003 update to our High-Risk Series of reports recognizes that strategic management of human capital continues to be a governmentwide high-risk area. Although considerable progress has been made since we first designated human capital a governmentwide high-risk area in 2001, federal human capital strategies are not yet appropriately constituted to drive the transformation that is needed across the federal government. Among the challenges agencies face are the need to improve their ability to acquire, develop, and retain talent, and the need to better and more fully integrate these and other human capital efforts with agencies’ missions and program goals. For FHWA, this governmentwide challenge manifests itself in a number of ways, including the need to transform its workforce and culture to meet its evolving mission. FHWA’s workforce partnered with the states to build the Interstate Highway System from 1956 into the 1990s. FHWA needed engineering skills to perform tasks, such as detailed reviews of design plans and inspections of construction progress, to ensure that national uniformity in terms of design and safety was established throughout the interstate system. These skills were especially important because, according to FHWA and state officials, state transportation agencies did not have the equivalent capability to do the job at that time. In recent years Congress has recognized the increased capacity of state transportation agencies and increasingly delegated approval authorities to the states, including the authority over design and construction decisions for most projects. As a consequence, FHWA’s oversight role and mission have evolved to include, for example, greater reliance on broad reviews of state management processes. As FHWA’s oversight role and mission evolve, FHWA faces the challenge of transforming its workforce and culture to evolve with this role and mission. 
In our discussions with FHWA field staff, we noted reluctance among some FHWA staff to focus on the broader reviews that FHWA increasingly relies on because they see these as less important than the traditional tasks of reviewing design plans and inspecting the progress of construction. Division office officials in two states we visited told us that change has been an issue for their more tenured staff. For example, the Administrator at one office had begun to hire staff with a variety of skills, while officials at the other office saw a need for more specialists, including staff with financial expertise. Officials also said some staff have resisted doing process reviews because they see it as functioning as auditors rather than as partners with the state in delivering projects, which is how they prefer to be seen. Overcoming these challenges will become even more important in the years ahead should proposed legislation increasing FHWA’s oversight responsibilities be enacted. In 2001, an FHWA task force concluded that changes in the agency’s oversight role mandated by highway program authorizations enacted in 1991 and 1998 had resulted in internal confusion and wide variation among FHWA personnel in interpretations of the agency’s roles and responsibilities in overseeing projects. In 2002, we reported that FHWA could not say whether it had resolved the internal confusion and variations in interpretations of the agency’s oversight role identified by the task force. During our review we found that some confusion continues, as some of the FHWA personnel we spoke to expressed the view that Congress has sent mixed messages about the extent to which it would like to see FHWA oversee projects. 
According to some division and headquarters FHWA officials, federal laws over the years have required FHWA to withdraw from direct oversight of most projects, while at the same time, legislation has increased the oversight requirements for major projects, resulting in mixed signals. Changes that were proposed by DOT and passed by the House and the Senate in 2004 but not enacted before the adjournment of the 108th Congress could, if reintroduced and enacted by the 109th Congress, help clarify FHWA staff’s perception of their oversight role by, for example, mandating reviews of state financial systems, developing cost estimating standards, and cascading requirements for major projects to other projects. Enactment of these provisions would also provide Congress the opportunity to provide a more detailed explanation of and purposes for these provisions regarding FHWA’s role versus the states’ role in overseeing cost and schedule performance of federal-aid highway projects in the legislative history accompanying the reauthorization bill. As we stated in our 2002 testimony, such clarification would be helpful. Reports and analyses published by us, OMB, and the National Research Council suggest a set of best practices that agencies can benefit from in conducting effective oversight of large infrastructure projects such as those in the federal-aid highway program overseen by FHWA. While these reports and analyses tend to focus more on overall project management, there are elements in each of them that relate specifically to improving project oversight. From our review of these reports and analyses, we identified four best practices that are particularly applicable to FHWA’s oversight efforts and that FHWA officials and decision makers can consider to help effectively oversee large infrastructure projects and states’ financial and management processes. 
While some of these best practices are beginning to be reflected in FHWA’s activities, as a whole, they could provide a framework for moving to a comprehensive approach to project oversight. These best practices are (1) establishing measurable project oversight goals and communicating these goals down through all levels of the agency, (2) establishing project oversight managers’ roles and accountability based on oversight goals, (3) providing professional training and a career path, and (4) learning lessons and transferring them. As we discussed earlier, agencies seeking to make oversight a priority should establish measurable project oversight goals that help it carry out its mission and define what its oversight is designed to accomplish—and should communicate these goals down through all levels of the agency. Having measurable goals gives managers the means to objectively and quantifiably assess progress toward achieving certain outcomes. If an agency relies only on general goals to guide its efforts, the agency will not have any way of determining whether it achieves those goals since it has not first identified a way to quantify or measure the outcome. Once these goals are established, agencies should communicate these goals down to all levels of the agency. One way to ensure that the goals are communicated effectively is to link the agency’s day-to-day activities to these goals. Our 1998 report on leading practices in capital decision-making added that clear communication of an organization’s vision and goals is a prerequisite for success. Top-level officials develop the organization’s priorities and communicate them downward to subunits within the organization. Based on these goals, managers at all levels work to produce plans and activities that outline their individual strategies for achieving top-level goals. 
Once an agency establishes its oversight goals, it should incorporate those goals into its strategies and activities by making oversight managers accountable for the effective implementation of the goals. We recently recommended that Amtrak adopt policies and procedures for managing infrastructure projects that, among other things, include mechanisms to ensure accountability for a project’s success. We stated that such mechanisms should clearly indicate the individuals responsible for implementing the project, the expectations for their performance, the ways their performance will be measured, and the potential consequences for failing to meet expectations. In this report, we noted that some of the railroads we had contacted tied pay and personnel decisions to performance, holding project managers directly responsible for the project’s success and failure. In previous work, we have also noted that how such pay-for-performance efforts are done, when they are done, and the basis on which they are done can make all the difference in whether such efforts are successful. In addition, in 2000, we found a number of emerging benefits from the use of results-oriented performance agreements for executives, including, among other things, providing results-oriented performance information to serve as the basis for executive performance evaluations. Professional training enables oversight staff to understand their expected roles in achieving the agency’s oversight goals. Having a view of a future career path is also desirable for the development of oversight staff. In 1999 the National Research Council reported that the Department of Energy could improve its project performance by developing skills, training opportunities, and a career path in project management. 
The report added that the agency needed to establish criteria and standards for selecting and assigning project managers, including documentation of training, and should require that all project managers be trained and certified. In prior work, we have found that an agency’s training program should be linked to achieving the agency’s strategic goals, while specific training for each individual should be based on his or her developmental needs. Effective oversight also requires a proactive approach to establishing evaluation mechanisms, collecting information, and transferring lessons learned on an ongoing basis. Learning from past successes and mistakes and sharing that information with decision makers, agency officials, and project managers is a critical element of effective oversight. Our 1996 executive guide to help agencies implement GPRA reported that agencies analyzing the gap between where they are and where they need to be to achieve desired outcomes can target those processes that are in most need of improvement, set realistic improvement goals, and select an appropriate process improvement technique such as benchmarking. Benchmarking compares an internal agency process with those of private and public organizations that are thought to be the best in their fields. In addition, our 1998 report on leading practices in capital decision making found that agencies could evaluate and compare results with goals by using financial and nonfinancial criteria linked to their overall goals and objectives. In 2000, we reported that agencies conducting program evaluations improved their measurement of program performance or understanding of performance and how it might be improved. In addition, our Executive Guide on Capital Decision-Making identified practices federal agencies can implement to enhance their evaluation processes. 
In 1997, OMB stated in its Capital Programming Guide that agencies should be able to document and support the accomplishment of their goals. Agencies can also evaluate the planning and procurement process to determine whether a project accurately predicted the desired benefits 3 to 12 months after it has become operational. The Guide added that a project post-implementation review, which evaluates a project’s success or failure, serves as such an assessment. The review compares actual results against planned cost, returns, and risk. The results are used to calculate a final return on investment, determine whether any additional project modifications may be necessary, and provide lessons learned for changes to the agency’s capital programming processes and strategy. Finally, the National Research Council’s 1999 report stated that agencies such as the Department of Energy should transfer knowledge gained about cost estimating techniques, project review processes, change control mechanisms, and performance metrics from one project to another. FHWA has made progress since 2002 in improving its oversight efforts, including its direct oversight of major projects and its broader reviews of state management processes that are used to oversee states’ management of most other projects. For example, FHWA’s actions to enhance the capabilities of project oversight managers overseeing major projects and to incorporate risk assessments into its reviews of state management processes are both positive steps towards improving oversight. Most significantly, FHWA has established, for the first time, goals and measures that clearly make containing project costs and schedules an integral part of how FHWA conducts its oversight. However, despite promising results, FHWA’s efforts have also had limitations. 
FHWA still lacks a comprehensive approach to ensuring that its oversight of federal-aid highway projects supports the efficient and effective use of federal funds. A comprehensive approach would avail itself of best practices and would include (1) goals and outcome measures with activities and performance expectations set for its staff that are linked to these goals and measures; (2) an overall plan for FHWA’s oversight initiatives and activities that responds to past concerns raised about its program and is tied to its goals and measures; (3) workforce planning efforts that support the goals, measures, and overall plan; (4) centrally defined roles and responsibilities for key staff, such as oversight managers for major projects; and (5) the capability to track and measure costs over the life of projects in order to identify problems, help target resources, and transfer lessons learned. Without such a comprehensive approach, FHWA cannot ensure that its varied activities are resulting in tangible improvements in the quality of its oversight and in the performance of federal-aid projects. Furthermore, without a comprehensive approach, FHWA is not able to articulate what it wants its oversight to accomplish, the composition of its workforce to accomplish it, and how it will measure whether its efforts have or have not been successful. Thus, it is limited in its ability to ensure that its oversight efforts are meeting its organizational goals, that these efforts address concerns that have been raised, and that they result in more effective and efficient use of federal funds. Although broader questions exist about the structure of the federal-aid highway program and the role of FHWA, the agency will face considerable increases in its oversight responsibilities in the years ahead, particularly if the proposals made by DOT and considered by Congress become law. 
Given the limitations present today, questions exist about the ability of FHWA to effectively absorb these new responsibilities and to improve its oversight of the federal-aid highway program in the years ahead. Moreover, absent a comprehensive approach, FHWA is unlikely to be able to overcome the structural, organizational, and cultural challenges it faces in effectively overseeing the federal-aid highway program. In order to establish a comprehensive approach to project oversight, we recommend that the Secretary of Transportation direct the Administrator, FHWA, to take the following four actions: link FHWA’s day-to-day activities and the performance expectations set for its staff to its goals and outcome measures; develop an overall plan for its oversight initiatives that is tied to its goals and measures, along with priorities and time frames, and that includes workforce planning efforts that support these goals and measures; improve the use and performance of project oversight managers by centrally defining their role and responsibilities; and develop the capability to track and measure costs over the life of projects to help identify the extent of and reasons for problems, target resources, and transfer lessons learned. We provided a draft of this report to DOT and met with FHWA officials, including the Deputy Administrator, to obtain their comments on the draft. FHWA officials generally agreed with the facts and conclusions in the report and our characterization of the challenges FHWA faces in improving its project oversight. FHWA officials emphasized that although we highlighted potential drawbacks associated with both its culture of partnership with the states and its decentralized organization, this partnership and organization are also major strengths of the federal-aid program that will allow the agency to absorb potential new responsibilities, help overcome challenges, and improve program oversight in the future through a more comprehensive approach. 
FHWA officials did not take a position on our recommendations, but they stated that they would be taking them under advisement. They also suggested some technical and clarifying comments that we incorporated into the report as appropriate. We are sending copies of this report to the Honorable Norman Mineta, Secretary of Transportation. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at siggerudk@gao.gov or (202) 512-6570, or contact Steve Cohen at cohens@gao.gov or (202) 512-4864. GAO contacts and acknowledgments are listed in appendix III. We reviewed the Federal Highway Administration’s (FHWA) approach to improving its federal-aid highway project oversight efforts since 2002, including (1) FHWA’s oversight-related performance goals and measures, (2) FHWA’s oversight improvement activities, (3) challenges FHWA faces in improving project oversight, and (4) best practices for project oversight. We reviewed FHWA’s oversight-related goals and measures by evaluating Department of Transportation (DOT) and FHWA strategic and performance plans, and supporting documents, from 2001 through 2004. We also reviewed FHWA’s annual performance reports from 2002 and 2003 and current OMB President’s Management Agenda documents. We also reviewed FHWA and DOT fiscal year 2005 budgets. As criteria in reviewing this information we used published GAO guidelines and prior reports, including GAO’s 2001 performance guide and GAO’s 2003 reports on results-oriented cultures and human capital. To review FHWA’s oversight improvement activities we documented and analyzed the status of FHWA’s various project oversight efforts since 2002 using FHWA’s FY 2004 Work Plan Summary from the major projects team (see app. II). 
We also reviewed FHWA’s use of financial information from its Financial Management Information System (FMIS) to track and analyze trends in cost growth on projects. We did not independently assess the reliability of FMIS data as the Department’s Inspector General has reported on weaknesses in FHWA’s financial management and reporting processes, most recently in November 2004 as part of the annual audit of DOT’s consolidated financial statements. In addition, our work focused primarily on FHWA’s use of FMIS data for oversight purposes, rather than relying on FMIS data to support our findings and conclusions. In addition, to document continued difficulty in tracking cost growth on projects, we randomly selected 14 contracts from seven division offices, each of which had an estimated total cost of between $25 million and $50 million. We then asked FHWA’s division offices to identify the project related to each contract (each contract was part of a different project, so there were 14 projects), and requested consolidated cost information on the 14 projects. FHWA took an average of more than 3 months—and up to 6 months—to provide us this information for 12 of the 14 projects, and it was unable to provide us complete cost information on the other 2 projects. Finally, we also interviewed officials at FHWA Headquarters, selected FHWA division offices, state departments of transportation, and other officials to document oversight implementation efforts. We performed work at seven FHWA division offices and states located in Colorado, Georgia, Missouri, Nevada, Pennsylvania, Washington, and Wisconsin. 
We selected these seven FHWA division offices and corresponding states by selecting states that had a current or planned major project and some that did not; states with large as well as relatively small federal-aid highway programs in terms of funding; large and small FHWA division offices as measured by the number of staff; and division offices and states that FHWA and American Association of State Highway and Transportation Officials (AASHTO) officials had recommended because of ongoing initiatives related to project oversight and management. To document and review the challenges FHWA faces in improving its project oversight we used our past work and interviewed FHWA headquarters, division office, and state transportation program officials. We also interviewed AASHTO officials and state audit and evaluation organizations across the country. To address the use of best practices as a framework for the oversight of large highway infrastructure projects, we conducted a literature search in 2004 to identify best practices related to oversight management. The literature included our previous reports and guidelines on best practices related to project management. It also included publications from the Office of Management and Budget (OMB) that provided detailed guidance to federal agencies on planning, budgeting, acquisition, and management of capital assets and from the National Research Council addressing methods the Department of Energy could implement to improve its project management, including oversight of environmental restoration, waste management, and construction projects. From this literature search, we compiled the list of best practices that can provide FHWA with a comprehensive approach and basic framework for effectively overseeing highway projects. 
For the first practice of establishing measurable project oversight goals we used information from two of our reports related to the Government Performance and Results Act and another report related to leading practices in capital decision-making. For the second practice of establishing project oversight manager role and accountability based on oversight goals, we used our report related to improving project management for Amtrak and another of our reports on performance agreements. For the third practice of providing professional training and a career path, we used a National Research Council report on improving project management at the Department of Energy. For the fourth practice of learning lessons and transferring them, we used information from the National Research Council report mentioned above, the GAO report on leading practices in capital decision-making, another GAO report on program evaluations, and OMB guidance in Circular A-11 and its Capital Programming Guide. In addition to those named above, Sam Abbas, Catherine Colwell, Pat Dalton, Don Kittler, Alex Lawrence, Sara Ann Moessbauer, John Rose, Stacey Thompson, and Alwynne Wilbur made key contributions to this report.

The federal-aid highway program provides over $25 billion a year to states for highway and bridge projects, often paying 80 percent of these projects' costs. The federal government provides funding for and oversees this program, while states largely choose and manage the projects. Ensuring that states effectively control the cost and schedule performance of these projects is essential to ensuring that federal funds are used efficiently. 
We reviewed the Federal Highway Administration's (FHWA) approach to improving its federal-aid highway project oversight efforts since we last reported on it in 2002, including (1) FHWA's oversight-related goals and performance measures, (2) FHWA's oversight improvement activities, (3) challenges FHWA faces in improving project oversight, and (4) best practices for project oversight. FHWA has made progress in improving its oversight efforts since 2002, but it lacks a comprehensive approach, including goals and measures that guide its activities; workforce plans that support these goals and measures; and data collection and analysis efforts that help identify problems and transfer lessons learned. FHWA's 2004 performance plan established, for the first time, performance goals and outcome measures to limit cost growth and schedule slippage on projects, but these goals and measures have not been effectively implemented because FHWA has not linked its day-to-day activities or the expectations set for its staff to them, nor is FHWA fully using them to identify problems and target its oversight. FHWA undertook activities in response to concerns raised about the adequacy of its oversight efforts that have both promising elements and limitations. For example, while FHWA now assigns a project oversight manager to each major project (generally projects costing $1 billion or more) and identified skills these managers should possess, it has not yet defined the role of these managers or established agencywide performance expectations for them. While FHWA issued guidance to improve cost estimating and began collecting information on cost increases, it still does not have the capability to track and measure cost growth on projects. Finally, although FHWA received direction to develop a more multidisciplinary workforce to conduct oversight, it has not fully incorporated this direction into its recruiting and training efforts. 
FHWA faces challenges to improving its oversight that are in large part rooted in the structure of the federal-aid highway program and in FHWA's organization and culture. As such, they may be difficult to surmount. For example, because the program does not link funding to states with the accomplishment of performance goals and outcome measures, it may be difficult for FHWA to define the role and purpose of its oversight. Also, FHWA's decentralized organization makes it difficult to achieve a consistent organizational vision. Human capital challenges affecting much of the federal government have affected FHWA, particularly in its need to transform its workforce to meet its evolving oversight mission. FHWA faces an increased oversight workload in the years ahead as the number of major projects grows and if provisions Congress is considering to increase FHWA's responsibilities become law. Questions exist about FHWA's ability to effectively absorb these new responsibilities, overcome underlying challenges, and improve its oversight. We identified selected best practices that could help FHWA develop a framework for a comprehensive approach to project oversight. These include establishing measurable goals to objectively and quantifiably assess progress, making oversight managers accountable for the effective implementation of these goals, providing professional training, and collecting and transferring lessons learned.
Congress enacted DBA in 1941 to provide workers’ compensation protection to employees of government contractors working at U.S. defense bases overseas. Subsequent amendments to DBA extended coverage to other classes of employees. DBA insurance provides covered employees with uniform levels of disability and medical benefits—or in the event of death provides benefits to their eligible dependents. DOL administers DBA, ensuring that workers’ compensation benefits are provided for covered employees and overseeing the claims process, among other things. Under DBA, U.S. government contractors and subcontractors are required to obtain DBA insurance for all employees, including foreign nationals, unless DOL issues a waiver. The cost of DBA insurance premiums, if allocable and reasonable, is generally reimbursable under government contracts. Under the War Hazards Compensation Act, the government also reimburses insurers for DBA benefits paid if the injury or death is caused by a “war-risk hazard,” provided that the insurer did not charge its customer a war-risk hazard premium. In addition to disability and death payments, war-risk hazard benefits include funeral and burial expenses, medical expenses, and reasonable costs necessary to process the claims. In providing for DBA coverage, agencies mainly use one of two approaches: a single insurer program or an open market system. For instance, USAID has a single insurer program. DOD has generally employed an open market system. USACE, however, had a single insurer program from 2005 to 2013, and then transitioned to an open market system. State adopted a single insurer program in response to a 1991 report by its OIG, which among other things, estimated that State could save approximately 40 percent of its DBA cost by moving to a single insurer program. Under a single insurer program such as the one implemented by State, an agency selects one insurer to provide DBA insurance through a competitively selected multiyear agreement. 
The resulting agreement sets premium rates for the agency’s contractors. The agreement may stipulate that rates will remain fixed during the entire term of the agreement, or that the rates may be adjusted up or down by mutual consent of the agency and the single insurer. To obtain DBA insurance, the agency’s contractors contact a broker specified by the single insurer. The broker obtains the contractor’s statement of work, assigns the contractor to a service category (for example, security), and collects premiums based on the contractor’s payroll and the premium rates for the service category. Under State’s single insurer program, DBA insurance was listed as a separate line item on the agency’s contract with each contractor. Under an open market system, contractors must independently obtain DBA insurance coverage from an insurer licensed by DOL to underwrite DBA insurance, and they usually do this through a broker. In this system, agencies do not play a role in setting premium rates. Each contractor’s selected insurer issues a DBA insurance policy that fixes premium rates generally for 1 year; after that, the rates and corresponding premiums can move up or down based in part on the insurer’s assessment of the contractor’s risk. The initial premium rate can vary by contractor based on the insurer’s assessment of the contractor’s risk. Contractors with a history of few or no claims can see a reduction in their premiums when renewing their DBA insurance, while contractors with a history of many claims can see an increase. To reduce the likelihood of claims, contractors can sometimes participate in a risk assessment and reduction program sponsored by their DBA insurer. According to State guidance issued in August 2012, it is no longer required that DBA insurance be identified as a separate line item on the agency’s contract with each contractor. Depending on the type of contract, a contracting officer may include the cost of DBA insurance separately. 
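The premium mechanics described above reduce to simple arithmetic: a rate quoted per $100 of payroll, keyed to the contractor's service category, multiplied by the contractor's covered payroll. The sketch below illustrates that computation; the category names and rates are hypothetical placeholders for illustration only, not State's actual figures.

```python
# Illustrative sketch of how a DBA premium is computed from a contractor's
# payroll and a service-category rate. Rates are quoted per $100 of payroll.
# All category names and rate values below are hypothetical.

# Hypothetical per-$100-of-payroll rates, ordered from highest to lowest risk,
# mirroring the four service categories described for State's 2008 contract.
CATEGORY_RATES = {
    "security_aviation_hazardous": 12.00,
    "security_hazardous": 9.00,
    "construction": 4.00,
    "services": 2.00,
}

def annual_premium(category: str, annual_payroll: float) -> float:
    """Premium = (rate per $100 of payroll) x (payroll / 100)."""
    rate = CATEGORY_RATES[category]
    return rate * annual_payroll / 100

# Example: a services contractor with $2,000,000 in covered payroll
# at a hypothetical rate of 2.00 per $100 owes a $40,000 premium.
premium = annual_premium("services", 2_000_000)
```

Because the rate is tied to the service category, the broker's assignment of a contractor to a category (for example, security versus services) is the main driver of the premium for a given payroll.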
Figure 2 provides a process map of the two approaches. In 2008, State entered into a multiyear contract with a single DBA insurer. In September 2011, State and its single insurer agreed that the DBA contract would expire in July 2012. In June 2012, State posted a solicitation seeking to select a single insurer through a competitive source selection process. After State received no offers in response to its solicitation, it withdrew the solicitation 3 days before its existing contract expired and transitioned to a system requiring contractors to obtain DBA insurance on the open market. Leading acquisition practices emphasize the importance of allowing enough time to complete a solicitation, adequately documenting market research, and collecting and analyzing data, among other things. We found that State did not follow these leading practices. Specifically, State (1) had little time to complete the process of designating a DBA single insurer, (2) did not conduct a lessons learned assessment after agreeing to terminate its existing DBA contract, (3) did not adequately document market research, and (4) did not provide sufficient information to insurers. As a result, State had to quickly transition to an open market system without evaluating the relative costs and benefits involved. According to officials of insurers, brokers, and some contractors, after transitioning, State did not communicate its change in a timely manner to insurers, brokers, and contractors, which caused confusion among contractors and left some with little time to replace their expiring DBA policies. Within State’s Bureau of Administration, its Office of the Procurement Executive is responsible for maintaining State’s acquisition regulations and procedures, and its Office of Acquisitions Management manages most of the procurement for State domestically and overseas. The Office of Acquisitions Management was in charge of the competitive source selection processes for State’s single insurer program. 
In 2001, State first signed a contract with its most recent single insurer to provide DBA insurance to State contractors. In 2008, State signed a similar contract with the same single insurer, this one a 5-year contract consisting of a base year and 4 option years. The premium rates for the base year, or year 1, were fixed, but the contract allowed for adjustments in the rates by mutual agreement for years 2, 3, 4, and 5, with any adjustments to be based on cumulative losses sustained by State’s single insurer since the start of the contract. To ensure that contractors were appropriately assigned a DBA premium rate based primarily on risk, the 2008 contract provided that contractors would be assigned to one of four types of service rate categories (listed here from highest to lowest risk): security contractors involved in aviation-related work in defined hazardous areas such as Afghanistan and Iraq, other security contractors working in the same hazardous areas, construction contractors, and service contractors. Figure 3 shows key dates and information related to State’s 2008 single insurer contract. Prior to the start of year 3 of the contract in July 2010, State’s single insurer raised a concern about increasing DBA losses and the potential for further increases in losses. Among other things, State’s single insurer asked for an increase in premium rates for year 3. According to documentation from State and State’s single insurer, the losses far exceeded what the single insurer’s actuaries had anticipated and resulted from several factors, including an increase in the number of claims from State’s security contractors, many of which operated in hazardous areas. 
The single insurer also cited an increase in the number of injured employees that had not found suitable work upon release from their contract because of poor economic conditions or the nature of their injuries, which could include post-traumatic stress disorder. State acknowledged that DBA losses had increased, but disagreed about the extent of the losses and did not agree to allow the single insurer to increase its premium rates. State also did not agree to the single insurer's request to obtain the services of an outside actuarial expert to validate the losses sustained by State's single insurer. In May 2011, prior to the start of year 4 of the contract, State's single insurer requested a lump sum payment of approximately $27 million to cover losses from year 3 of the contract as well as an increase in premium rates for year 4 of the contract, accompanied by a warning of legal action. State's single insurer concluded that it had underestimated its 2010 losses and stated that a revised methodology it used to estimate losses showed that it should have charged higher premiums beginning with year 3 (July 2010-July 2011). State's single insurer also stated that its revised loss estimate methodology, which used forecasted losses rather than actual sustained losses, was consistent with insurance industry standards. State's single insurer termed the lump sum payment request a "request for equitable adjustment." For State, the requests for the lump sum payment and rate increases raised numerous questions about how the insurer had originally established its premium rates, along with other concerns. State and its single insurer met to discuss the insurer's requests and exchanged correspondence but could not resolve their differences. Because State and its single insurer could not agree on a solution, in September 2011, State signed a memorandum with its single insurer agreeing that State would not exercise its right to continue the program through year 5, the final option year of the contract. 
Between then and July 2012, a number of key events occurred:

September 12, 2011: State and its single insurer agreed that the DBA contract would expire on July 21, 2012.

February 9, 2012: State posted a request for information containing a DBA-related questionnaire to insurers, brokers, and contractors on a federal procurement website. Nine responded by February 28, 2012.

June 14, 2012: Seeking to designate a single insurer through a competitive source selection process, State posted a solicitation requesting that insurers submit proposals by June 27, 2012.

June 14 through July 16, 2012: State twice extended the time frame for the solicitation, ultimately setting the deadline as July 16, 2012. Responding insurers stated that the solicitation contained objectionable provisions. On July 16, 2012, one insurer filed a bid protest with GAO.

July 18, 2012: Having received no offers in response to its solicitation, State withdrew the solicitation. According to State officials, while State and its single insurer agreed that the DBA contract was to expire on July 21, 2012, the insurer also agreed to allow an extension of some contractor policies until April 2014.

July 21, 2012: State's existing single insurer program ended.

July 22, 2012: State transitioned to an open market system.

August 9, 2012: State issued a notice stating that as of July 22, 2012, State no longer had a single insurer. The notice also stated that contractors could purchase insurance from any insurer approved by DOL to provide DBA insurance.

August 28, 2012: State formally notified its contracting officers that they were required to inform contractors of the transition to an open market system as DBA insurance policies expired.

State officials said they lacked specific guidance for acquiring a new DBA contract but followed acquisition regulations to the extent practicable. 
The selection of a single provider of DBA insurance is not an acquisition because it does not involve the use of appropriated funds. Therefore, these selections are not subject to the FAR. State officials told us that in the absence of guidance on how to execute a DBA selection, they looked to the FAR and the Department of State Acquisition Regulation (DOSAR) as their guidance to the extent practicable to conduct what the agency labeled a "competitive source selection process." For example, both the 2008 single provider agreement and the 2012 solicitation included multiple references to the FAR, and State used certain provisions of the FAR as criteria for determining whether to exercise options on the 2008 agreement. In the absence of specific State guidance for solicitations involving DBA selections, and in light of the use of federal funds, we looked to leading practices set forth in the FAR and DOSAR, as well as in other State acquisition guidance, to evaluate State's management of the DBA transition. Moreover, in accordance with federal requirements, State has internal control standards that apply to competitively sourced acquisitions; these internal controls apply to all State operations and administrative functions. In addition, GAO has in past reports discussed a number of other leading acquisition practices that are applicable to competitively sourced acquisitions. State had little time to complete its 2012 single insurer solicitation, and when it received no offers, State had to withdraw its solicitation only 3 days before its existing DBA contract was due to expire. We have reported on the need for agencies to establish time frames for acquisition planning, including measuring lead times for presolicitation and solicitation activities. In addition, State guidance used by the Office of Acquisitions Management states that sufficient time must be allowed to perform the many steps involved in the acquisitions process. 
State officials told us they began planning for the June 2012 single insurer solicitation well in advance; however, they did not issue the solicitation until about a month before their existing single insurer program was due to expire. The timeline in figure 4 shows how compressed State’s single insurer solicitation process became. Because State compressed its contracting efforts, it had little time to complete the solicitation process. In 2007, State did not complete the solicitation process in time to obtain a DBA single insurer; however, in that instance, State was able to reach an agreement with its single insurer to extend the contract for 6 months—an option that it did not have in this instance since it had already agreed with its single insurer that the existing contract would expire on July 21, 2012. State officials told us that their decision to transition to an open market DBA system primarily resulted from the lack of bids on its single insurer solicitation rather than from a policy assessment and decision based on an analysis of the costs and benefits of both approaches. State officials also told us that their decision to transition to an open market system was heavily influenced by discussions with DOL, which had included a proposal in its fiscal year 2014 budget to reform DBA by creating a government-wide self-insurance program. As a result, State officials said they viewed the transition to an open market system as a temporary measure. However, DOL officials told us that they decided to reevaluate whether a self-insurance program would be cost-effective after they had submitted their fiscal year 2014 budget proposal. In addition, State had little time to communicate with stakeholders, including contractors, its decision to transition to an open market system. 
State transitioned to an open market system on July 22, 2012, but did not formally communicate this internally or externally until August 9, 2012— about 3 weeks after the transition had occurred. According to an official of a national association that represents government contractors, and several contractors we interviewed, after State transitioned to an open market system, contractors raised a number of concerns regarding the transition, but State provided insufficient information to them. For example, according to the association official, contractors were initially unclear about whether and how long State’s single insurer would continue to honor existing policies and whether contractors that had submitted bids using an estimate of DBA insurance costs based on State’s single insurer program would be allowed to revise their bids. In general, officials of insurers, brokers, and some contractors told us that contractors were surprised and confused because State did not communicate its decision to transition to an open market system in a timely manner. These officials also said that State’s late communication left some contractors with little time to replace their expiring single insurer policies. In addition, while State’s own contracting officers were tasked with informing contractors of State’s decision to use an open market system to provide DBA insurance, most of the contractors we spoke with did not learn of the change from State. State formally notified its contracting officers of its decision to end its single insurer program and transition to an open market system through a procurement bulletin issued on August 28, 2012—over a month after the transition had actually occurred. The bulletin required contracting officers to notify contractors that State no longer had a single insurer program and that contractors had to obtain future DBA insurance from a list of insurers approved by DOL. 
However, all but 2 of the 10 contractors we spoke with said that State's contracting officers did not notify them of the change to an open market system. Most said they learned about the change from their insurance brokers. State did not conduct a lessons learned assessment to inform its 2012 DBA solicitation, as suggested by leading practices, and though it conducted market research, it took limited measures to document the analysis and conclusions. State's and GAO's internal control standards maintain that significant events and information need to be clearly documented to ensure that management directives are carried out and to inform future decision making. One way agencies may meet this standard is through conducting a lessons learned assessment and market research for future competitive source selections. State encountered serious difficulties in implementing its 2008 single insurer contract but did not prepare an assessment detailing the lessons learned from the implementation of that contract. In 2004, 2008, and 2011, we reported that a knowledge base of important lessons learned from the acquisition process can help program and acquisition staff plan future acquisitions. In addition, when conducting procurements, agency heads prescribe procedures to ensure that knowledge gained from prior acquisitions is used to further refine requirements and acquisition strategies. While State's Office of Acquisitions Management has developed suggested guidance for documenting and incorporating lessons learned from previous acquisitions into preparations for future solicitations, it did not do so until August 2013, after it issued its 2012 DBA solicitation. This guidance states that lessons learned should be noted and provided to contracting officials. It also states that lessons learned documents need not be extensive but should be developed when appropriate.

U.S. Army Corps of Engineers' (USACE) Transition to an Open Market System for Providing Defense Base Act (DBA) Insurance: In October 2013, USACE transitioned from a single insurer program to an open market system. We found that, in contrast to State, USACE did follow some key aspects of acquisition guidance and leading practices. First, USACE allowed time for a transition to an open market system. In October 2012, it prepared a decision paper to justify its transition to the open market. It also prepared a business case assessment weighing the advantages and disadvantages of an open market system. On the basis of its analysis, USACE concluded an open market system to be more cost-effective for itself and for contractors than a single insurer program. Second, USACE adequately documented its market research. For example, it documented the data collection and analysis methods used, dates when research was conducted, analysis of vendor capabilities, and a conclusion based on that analysis. Third, USACE communicated its decision in a more timely manner than State. Officials said they developed a communication plan to guide the transition, including an August 23, 2013, letter informing contractors that after October 1, 2013, they would no longer obtain DBA insurance through USACE's single insurer. USACE officials also said they sent notices to overseas contractors notifying them of the change.

Market research documentation should include the data collection methods used, the time frames when staff used them, an analysis of potential sources, and a conclusion based on that analysis. In preparing for its June 2012 solicitation, State took some steps to conduct market research. State posted a request for information in February 2012 on a U.S. 
government website that publicizes federal procurement opportunities valued at over $25,000 to insurers, brokers, and contractors asking, among other things, whether State should continue its single insurer program, whether insurers charged minimum premiums, and whether small businesses would be adversely affected by a decision to transition to an open market system. Nine insurers, brokers, contractors, and small businesses responded to the request for information. State prepared a summary compilation of responses, but it contained no analysis and no conclusion to inform decision making. By contrast, USACE documented its market research, which included an analysis and conclusions to support its October 2013 transition to an open market system (see sidebar). In addition, State did not use the February 9, 2012, request for information to ask insurers to comment on two provisions that State later included as part of the June 2012 solicitation. These two provisions, along with the lack of certain data, were objectionable to insurers. State extended the deadline and responded to insurers’ questions regarding these concerns. The two provisions are described below. An “opt out” provision allowed contractors to obtain DBA insurance on the open market instead of through the single insurer if the contractor could demonstrate that it could purchase DBA insurance on the open market at a lower cost. Insurers expressed concern to State that this opt out provision would encourage larger contractors and those with fewer claims to opt out of the program, leaving the single insurer with a pool of contractors representing a much higher level of risk. During the solicitation process, insurers asked State to withdraw this provision. State acknowledged that insurers wanted this provision withdrawn, but during the solicitation process State did not agree to the insurers’ request. 
State officials told us this provision was included as a result of information, obtained through the request for information and other sources, indicating that some contractors could save money by purchasing insurance on the open market, and that it would allow contractors dissatisfied with the performance of the designated insurer to seek competitive alternatives if available. A "blanket coverage" provision required the single insurer to provide coverage to all subcontractors, regardless of whether they were explicitly identified in State's contract with the prime contractor. State officials said they included this provision to ease the work of contracting officers, who would otherwise have to obtain proof of DBA coverage from individual subcontractors. In contrast, insurers expressed two concerns. One insurer expressed concern that the provision contradicted applicable DOL regulations, making it harder for DOL to process claims. Authorized DBA insurers are required to report to DOL the names of every employer to which they have issued a DBA insurance policy. According to one insurer, if a covered employee is injured and reports the injury to DOL, DOL must identify the insurer that is required to provide benefits to the employee; however, this is difficult or impossible to do in the absence of information about which contractors and subcontractors are covered under the DBA policy. Insurers also expressed concern that not explicitly identifying all covered subcontractors could expose them to unanticipated liabilities. Because State did not develop a lessons learned assessment, adequately document market research, or use the February 2012 request for information to ask insurers to comment on the "opt out" and "blanket coverage" provisions, State lacked sufficient information about the provisions in its solicitation that insurers would find objectionable. 
This explains in part why State’s single insurer solicitation received no offers and why one insurer filed a bid protest with GAO citing, among other things, a lack of information necessary for bidders to compete intelligently and on an equal basis. In addition, State did not have analysis to enable it to determine whether an open market system best fit its needs. As part of its June 2012 solicitation, State provided some claims and premium data, but insurers expressed concern that the data were insufficient to enable them to reliably analyze State’s DBA insurance needs and estimate future claims and losses as a foundation for proposing premium rates. When conducting procurements, agencies acquire and provide sufficient data to potential service providers in order to ensure that contract requirements are clear and potential service providers have sufficient information to compete on a level playing field. Under the terms of the 2008 contract, State had required its single insurer to provide premium data quarterly and claims and premium data in semiannual and annual reports. For the June 2012 solicitation, State provided prospective insurers with two spreadsheets—one providing data on historical premiums and the second providing data on claims. According to insurance industry officials, these data were needed to enable prospective insurers’ actuaries to propose premium rates. However, according to four different insurers, both spreadsheets were missing some key data, such as payroll or claims data for certain years. In our review of the solicitation, we also observed that some premium data were missing. During the solicitation process, insurers asked State to supply the missing data, but State responded that it had provided all available data. 
Officials of two insurers we spoke with expressed concern that not providing these data gave an unfair competitive advantage to State’s previous single insurer, since only that insurer had access to the complete claims and premium history data. Our analysis of a sample of State contractors’ DBA rates during the transition from a single insurer program to an open market system showed that the rates increased as State moved to the open market system, but the increases fell within a range similar to those that were likely to have occurred if State had continued its single insurer program. Because State does not collect and analyze data on DBA costs in the open market system, we collected and analyzed data on DBA premiums and payroll from a random sample of State contractors active in fiscal years 2010 through 2013. We found that in our sample of contractors, effective DBA premium rates—rates that incorporate minimum premiums—increased for the median contractor by $1.98 per $100 of payroll as State transitioned from the single insurer program to the open market system. Our analysis also showed that the increase in effective DBA premium rates after the transition was comparable to the increase in effective DBA premium rates requested by State’s single DBA insurer, which said it had lost money under the prior contract. A loss is the dollar amount basis of a claim for damages under the terms of an insurance policy; from the claimant’s perspective, a loss is the monetary benefit the claimant is entitled to receive. Under the open market system, State’s contractors may be required to pay a minimum premium set by their insurer, which is the minimum dollar amount the insurer requires to write coverage for a contractor. DBA insurance industry representatives told us that minimum DBA premiums can range between $5,000 and $25,000 per policy. 
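This pricing mechanic can be sketched in a few lines (an illustrative Python sketch; the rates and payroll figures are assumptions for illustration, and only the $5,000 to $25,000 minimum premium range comes from the report):

```python
def total_premium(rate_per_100: float, payroll: float, minimum_premium: float) -> float:
    """Total premium is the greater of the rated premium or the insurer's minimum."""
    return max(rate_per_100 * payroll / 100, minimum_premium)

def effective_rate(rate_per_100: float, payroll: float, minimum_premium: float) -> float:
    """Effective premium rate: total premium per $100 of payroll."""
    return total_premium(rate_per_100, payroll, minimum_premium) / (payroll / 100)

# Hypothetical small contractor: $100,000 payroll, $4.50 rate, $9,000 minimum premium.
# The rated premium ($4,500) is below the minimum, so the minimum binds and
# the effective rate doubles from $4.50 to $9.00 per $100 of payroll.
small = effective_rate(4.50, 100_000, 9_000)

# Hypothetical larger contractor: $5,000,000 payroll, same rate and minimum.
# The rated premium ($225,000) exceeds the minimum, so the effective rate is unchanged.
large = effective_rate(4.50, 5_000_000, 9_000)
```

As the sketch shows, the same posted rate and minimum premium leave a large contractor's effective rate untouched while sharply raising a small contractor's, which is why minimum premiums matter most for contractors with small payrolls.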
Hence, open market minimum premiums can effectively increase the price of insurance, as measured in dollars per $100 of payroll (i.e., the effective premium rate), particularly for smaller contractors with fewer contracts. The minimum premium is the minimum dollar amount necessary to receive coverage under an insurance policy; a contractor’s total premium is the greater of (1) the premium rate multiplied by payroll or (2) the minimum premium. The effective premium rate is the total premium divided by $100 of payroll. Because of the minimum premium, the effective premium rate may be higher than the premium rate. The increase in effective rates varied across contractor characteristics. We compared effective rates in the single insurer program with those in the open market across three sets of contractor characteristics: business size, contractor category (i.e., the nature of work performed by the contractor), and country, finding a statistically significant increase across many of these contractor characteristics. For example, for larger contractors in our sample, effective rates increased from $6.23 to $11.15 per $100 of payroll, on average; for small businesses in our sample, effective rates increased from $4.53 to $7.71 per $100 of payroll, on average. We did not find a larger increase in effective rates for small businesses compared with rates for larger contractors; this may be explained in part by differences in the type of work performed by small versus large contractors. For example, security contractors had a larger increase in effective rates compared with rates for services and construction contractors, and, as we discuss below, most of the security contractors in our sample were larger contractors. Table 1 shows the increase in the median effective premium rates for contractors in each contractor category as State transitioned from the single insurer program to the open market. Table 2 shows the increase in the median effective premium rates by country. 
The differences in rates shown in tables 1 and 2 suggest that State’s transition to the open market may have led to variations in DBA rates according to the level of risk associated with the contractor’s category and geographic location. For example, according to State and insurance industry officials, security contractors are exposed to greater levels of DBA risk, which could explain why the median rate for security contractors increased more than for other contractors in our sample. An increase in DBA rates for State contractors was likely to occur even if State had continued its single insurer program, because State’s single insurer reported it was losing money under the 2008 DBA contract. According to e-mails and other documents provided by State, State’s single insurer repeatedly communicated to State that the insurer was operating at a loss under the terms of the 2008 DBA contract. Our analysis shows that the increase in State contractors’ DBA premium rates following the open market transition was comparable to the increase in DBA premium rates requested by State’s single insurer. We compared open market premium rates paid by our sample of State contractors with the rates we calculated that they would have paid under two hypothetical scenarios in which State continued the single insurer program for an additional year. The hypothetical scenarios are as follows: In the highest contract rates scenario, State and the single insurer continue the single insurer program, with the insurer charging all contractors the highest rates allowable in State’s 2008 single insurer contract. The 2008 DBA contract between State and its single insurer allowed for three tiers of premium rates, varying by State contractors’ total DBA loss experience. Under the contract, higher losses would result in higher DBA premium rates. 
During the actual lifespan of the 2008 contract, State’s single insurer requested that the highest tier of premium rates be charged in years 3 and 4 of the contract; however, these requests were not granted because of the disputes between State and its single insurer discussed previously. In the renegotiated rates scenario, State and the single insurer continue the single insurer program but renegotiate premium rates. The single insurer charges security contractors a higher, renegotiated security rate. For most of the contractors in our sample, the effective rates they paid in the open market were comparable to the effective rates they would have paid under the hypothetical scenarios described above. Figure 5 shows that there was a high degree of overlap between the likely ranges for the median open market effective rate calculated from our sample data and the likely ranges for the median effective rate in the two hypothetical scenarios. For services and construction contractors—48 of the 56 contractors in our sample—our analysis did not reveal a statistically significant difference between the median effective rate paid by contractors in the open market and the median effective rates that they were likely to have paid if State had continued the single insurer program. For the 8 security contractors in our sample, our analyses showed that open market effective rates were significantly lower than effective rates under the hypothetical scenarios. Open market premium rates for security contractors in our sample could have been lower than the rates calculated for them under the hypothetical scenarios in part because insurers in the open market may choose to insure only security contractors with a good loss history; in other words, contractors in our sample may have benefited from the open market to an extent that may not hold across all security contractors. Brokers told us that the open market allowed insurers to be selective about the customers they choose to insure. 
Additionally, the security contractors in our sample were larger contractors that may have been able to pool the risk of security contracting alongside work with lower risk exposure to reduce their DBA premiums. Only 1 of the 8 security contractors in our sample was a small business, and 1 security contractor told us that its payroll incorporated both security and clerical employees. In appendix II we describe the methodology of the analysis we conducted. Our analysis of existing federal procurement data did not show a clear effect of State’s transition to an open market system on small business contractors, but insurers and contractors have expressed concern that the change has had or could have an adverse effect. State has not conducted an assessment of whether its 2012 transition affected small businesses’ competitiveness. Our analysis of information provided by insurers, brokers, and contractors shows that there is a potential for adverse effects. State’s policy calls for it to maximize opportunities for small businesses. Without conducting an assessment, State cannot be assured that it is meeting its policy goal of maximizing opportunities for small businesses. Several audit reports, including State OIG’s 1991 and 1997 reports, as well as DOD’s 2009 report, discussed the advantages and disadvantages for small businesses of obtaining DBA insurance through a single insurer program versus an open market system. Table 3 describes the advantages and disadvantages based on our interviews with insurers, brokers, contractors, and small businesses as well as our analysis of State’s request for information in preparation for its 2012 single insurer solicitation. State’s policy calls for it to maximize opportunities for small businesses to participate in the acquisitions process. State officials told us that State first adopted a single insurer program for DBA in 1991 in part to make it easier for small businesses to obtain DBA insurance. 
According to the DOSAR, State is to provide maximum opportunities for U.S. small businesses to participate in the acquisitions process. As required by law, State has created an Office of Small and Disadvantaged Business Utilization (OSDBU) to help small businesses participate in State contracting. State’s OSDBU also works to ensure agency compliance with legal requirements contained in the Small Business Act (for example, determining annual small business goals for the agency). State awarded $2.1 billion in contracts to small businesses for work performed overseas in fiscal years 2009 through 2013. Approximately one of every five newly awarded contracts for work performed overseas during this period went to small businesses, which provide State with a broad range of services worldwide. Our analysis of federal procurement data was inconclusive regarding the effects of State’s transition on small businesses’ ability to obtain State contracts, and the full effects might not be known for several years. We analyzed State’s federal procurement data to track the percentage and value of new awards that went to small businesses before and after the shift to the open market DBA system. We found that for all State’s federally procured work performed by U.S. contractors outside the United States in fiscal years 2009 through 2013, new contracts going to small businesses, as a percentage of all new contracts, declined from 22 percent to 15 percent. For work performed during the same period in only those countries affected by State’s transition to an open market system, the decrease was nearly identical. There was a 2 percentage point decrease in fiscal year 2013, the first complete fiscal year after the transition for which data are available. However, that decrease may be part of the overall decline that we identified. 
In addition, when we looked at contracts over $25,000 in value, we found that the percentage of contracts going to small businesses actually increased by about 1 percent in fiscal year 2013. The major challenge in conducting this analysis is that federal procurement data do not specify which awards required DBA insurance. Therefore, we could not isolate those awards and examine data on them before and after the transition to the open market. Moreover, the full effects of the 2012 transition to the open market on small businesses may not be apparent for several more years. Cognizant State officials told us that they had not observed any impact of the transition to an open market system on small businesses, nor had they received complaints regarding DBA insurance after the transition from either small or large businesses. As noted previously, State issued a request for information in February 2012 and received responses from two insurers, two brokers, and five contractors. Our analysis of the responses showed that three of the four insurers and brokers responded that transitioning to an open market could negatively affect small businesses in two ways: (1) insurers could deny them coverage and (2) insurers could charge minimum premiums significantly higher than what the small businesses paid under the single insurer program. The one small business that responded to State also reported a potential for adverse effects, citing difficulty obtaining coverage and higher premium rates. State officials told us they reviewed the responses but did not document their review or any conclusions they reached as a result of their review. Thus far, according to State officials, State’s primary way of assessing the effects of its transition has been to informally monitor feedback received from small businesses. 
State officials told us that they have not received any complaints or concerns from small businesses about having to obtain DBA insurance through an open market system. According to State, the absence of complaints indicated that the transition to an open market system had not adversely affected small businesses. However, State officials also said that businesses may be reluctant to discuss uncertainties about DBA costs with State personnel. Industry officials noted that if the transition has not already had a negative effect on small businesses’ competitiveness, it might in the future. Insurers, brokers, and contractors told us that the existence of minimum premiums under the open market system could have an adverse effect on small businesses’ ability to perform work for State. As noted previously, we found the same concerns prevalent among the responses that State received for its February 2012 request for information. Data we reviewed from a major DBA broker showed that several small businesses ended up with a lower premium rate (per $100 of payroll) in the open market; for example, one small service business saw its rate decrease by half. However, the data also showed that in selected instances, small businesses were charged substantially more for premiums after State transitioned to an open market system. For example, one small service business’ DBA premium was reported to have risen from $280 to $9,000. Another small business reported that its premium rose from $883 to $7,500. In both cases, the increase was due to minimum premiums. While we cannot determine what effect State’s transition had on small businesses based on our analysis of federal procurement data, our discussions with insurance industry officials showed that there is a potential for adverse effects. To learn more about those effects, we interviewed a judgmentally selected sample of eight small businesses and two large contractors that worked with small subcontractors. 
Four of the small businesses reported experiencing no adverse effects. One small business was able to retain its previous single insurer premium rate in the open market system. Another small business told us that State’s transition had no impact on it because the increase in its premium rate was minor. However, six of the 10 contractors we interviewed indicated a potential for adverse effects, including two issues (minimum premiums and difficulty obtaining DBA insurance) that respondents to State’s February 2012 request for information from insurers, brokers, and contractors had also raised. After State’s transition to an open market system, 3 of the 8 small businesses were required to pay minimum premiums to their insurer that increased the cost of their DBA coverage. Additionally, both of the larger contractors stated that small businesses that they subcontract with may have difficulty obtaining DBA insurance. In nearly all cases, the cost of DBA insurance (including minimum premiums) is passed on to State. However, two small businesses expressed concern that the additional costs imposed by minimum premiums would negatively affect their competitiveness against larger contractors who can more easily absorb higher DBA costs. Two of the small businesses expressed concern that under a multiyear firm-fixed-price contract, if their DBA costs increased in the second or third year, they would be forced to absorb the added cost. One small business said that an increase in its DBA premium rate forced an early cancellation of a multiyear State contract. In our interviews, some small businesses expressed uncertainty about whether State would reimburse certain DBA costs; however, State officials stated that most DBA costs are reimbursable. For example, one contractor told us that it had to absorb the increase in costs resulting from State’s transition to an open market DBA requirement. 
In October 2014, State officials told us that contractors’ concerns about reimbursement for increases in DBA costs were unfounded because State reimburses the cost of DBA insurance. As noted above, State officials said that contractors may be reluctant to discuss uncertainties about DBA costs with State personnel. However, State officials also said that in some cases involving work on State posts overseas, contractors might not be reimbursed for DBA costs. For example, one small construction firm was required to purchase DBA insurance, which included a minimum premium, in order to complete a site assessment at a State facility overseas before bidding on the contract with State. In that instance, the contractor would not have been reimbursed for the cost of DBA insurance had it not won the contract. DBA-required workers’ compensation insurance claims have increased dramatically in recent years as the use of overseas contractors has expanded. From July 2008 to April 2014, State expended over $212 million to reimburse contractors for the cost of DBA premiums. When considering whether to adopt a single insurer program or an open market system for DBA insurance—the two main approaches to providing this coverage—agencies have a responsibility to conduct the acquisition process in a way that adequately manages U.S. funds. State’s transition from a single insurer program to an open market system was not the result of a policy decision based on an analysis of costs and benefits of both approaches but was required because its 2012 solicitation to select a single insurer failed to yield any bids. State was unsuccessful in its solicitation in part because it lacked guidance for how to conduct a competitive source selection process that did not involve the use of appropriated funds. Moreover, State did not make full use of relevant leading practices, including its own standards for internal control and federal and State Department acquisition regulations. 
Consequently, State was forced into the open market system without knowing whether that system was better suited for the agency and its contractors. As a result, State cannot be assured that its transition to an open market system was in the best interest of the agency in a period when the number of DBA claims and the amount of DBA benefits paid to claimants are at an all-time high. The Secretary of State should direct State’s Office of the Procurement Executive to take the following actions: 
1. Evaluate whether a single insurer program or an open market system best serves its needs. 
2. Incorporate leading practices into any future single insurer solicitations by determining whether existing guidance could be used, or by developing guidance based on leading practices in federal and State Department acquisition regulations and State internal control standards. 
3. Conduct an assessment to determine how State’s transition to an open market DBA system is affecting small businesses. 
We provided a draft of this report for review and comment to the Department of State, DOD, DOL, and USAID. In its comments (included in their entirety in appendix III), State concurred with all three recommendations. DOD, DOL, and USAID did not provide any comments. We are sending copies of this report to the appropriate congressional committees, the Secretary of State, the Secretary of Defense, the Secretary of Labor, the Administrator of the U.S. Agency for International Development, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8980 or courtsm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. 
Our review focused on the Department of State’s (State) transition from a single insurer program to an open market system for the provision of Defense Base Act (DBA) insurance. This report examines (1) how State managed the transition from a single insurer program to an open market system, (2) the extent to which this change affected DBA premium rates paid by State’s contractors, and (3) the extent to which this change affected small businesses. To provide background and context for our analysis, we reviewed prior GAO reports that discuss DBA insurance, as well as reports produced by Offices of Inspector General of State, the Department of Labor (DOL), the United States Agency for International Development (USAID), and the Special Inspector General for Afghanistan Reconstruction. In addition, we reviewed other documents from the Department of Defense (DOD), DOL, and insurance industry experts that discuss the history of DBA insurance and emerging trends in the DBA market. We also conducted interviews with officials from insurance companies, insurance brokers, contractors, and small businesses (hereafter referred to as insurers, brokers, contractors, and small businesses). To examine how State managed its transition from a single insurer program to an open market system, we reviewed the Federal Acquisition Regulation (FAR) and the Department of State Acquisition Regulation (DOSAR). In addition, we reviewed State’s Foreign Affairs Manual (FAM), Foreign Affairs Handbook (FAH), and acquisition source selection manual. GAO has issued a number of reports discussing leading acquisition practices, and we also reviewed these reports. We obtained a number of State and insurance industry documents and met with representatives of State, insurers, brokers, contractors, and small businesses, as well as a national association that represents government contractors. We reviewed State’s 2008 DBA single insurer contract and its 2012 solicitation. 
We compared the applicable regulations, guidance, and leading practices with what we learned from the 2008 contract, 2012 solicitation, and other documentation. We met with officials from DOL to discuss that agency’s role in the process. In October 2013, the U.S. Army Corps of Engineers (USACE) also made a transition from a single insurer program to an open market system, and we met with USACE officials to discuss how they managed their transition. We also reviewed documentation provided by USACE officials that shows how they managed their transition to an open market system. To examine the extent to which State’s transition affected the DBA premiums and premium rates paid by contractors, we collected data on DBA premiums under State’s single insurer program and in the open market. To determine a sample of contractors for our analysis, we obtained a list of State contracts with a principal place of performance outside of the United States from fiscal years 2010 through 2013 from State. We also obtained data from State on the contractors’ country of location, the contracting officers’ determination as to whether the contractors were classified as small businesses, and the contractors’ contact information. To determine the population of contractors for the analysis, we applied the following selection criteria to the total dataset: 
1. included contractors active throughout State’s DBA policy transition; 
2. included contractors who filed procurements through State’s Office of Acquisitions Management; 
3. included contractors in Afghanistan, Iraq, Pakistan, South Africa, and Thailand; 
4. excluded unnamed contractors (such as “Miscellaneous Foreign Awardees”); and 
5. included the longest-performing contract per vendor in each country. 
We identified the set of countries by analyzing country summary reports of contract actions for State and USAID in the Federal Procurement Database System-Next Generation (FPDS-NG) for fiscal years 2010 through 2013, identifying the top 10 countries for each agency in terms of number of contract actions and choosing the countries that appeared in the top 10 list for both agencies. After applying the above selection criteria, the total population of contractors in the remaining dataset was 164; conclusions drawn from statistical tests on the resulting sample are generalizable to the population of 164 contractors that meet these selection criteria. Out of this population, we selected a simple random sample of 111 contractors, asking for information on DBA rates, premiums, and payroll during the single insurer program and during the open market. We queried for country-specific contractor data by requesting DBA information for a contract task order. Thirty contractors did not respond to our inquiry. However, in total, we did not receive DBA rate data from 40 contractors, as 10 contractors who provided responses said that DBA did not apply to them for various reasons. Among nonrespondents, 11 were in Afghanistan, 8 were in Iraq, 5 were in Pakistan, 3 were in Thailand, and 3 were in South Africa; 11 were small businesses and 19 were not. We did not see a clear pattern in the geographical location or size of nonrespondent contractors. Table 4 provides the counts of the number of responses we received to our data collection instrument. We determined that it was most appropriate to use samples of contractors that had different characteristics for the different analyses that we conducted. 
In our analysis of the increases in effective premium rates (defined as the total premium divided by hundreds of dollars of payroll) from the single insurer program to the open market, we restricted the sample to the 36 contractors who reported paying premiums and payroll in both the single insurer program and the open market. This was done to control for contractor-specific characteristics in the pre-transition to post-transition (pre-post) analysis. In our static analysis (i.e., one that did not look at pre-post differences) comparing open market effective rates to effective rates that would have been charged in hypothetical single insurer program scenarios, we were able to incorporate additional data from a larger sample of contractors that reported paying premiums and payroll in the open market. This increased the sample size for this analysis to 56. For more details on our methodology, see appendix II. We also conducted interviews with insurers, brokers, contractors, and small businesses. To assess the reliability of the FPDS-NG data, we interviewed knowledgeable officials and reviewed publicly available documentation regarding the collection and use of the data. We determined that the data were sufficiently reliable for the purposes of this report. Contractor data on DBA premiums, premium rates, and payroll were self-reported; however, we reviewed the data provided, performed logical tests on them, conducted some follow-up with the contractors, and determined that they were sufficiently reliable for our analysis. Further technical discussion of the data and analysis for this objective is provided in appendix II. To assess the extent to which State’s transition affected small businesses, we interviewed officials from State, insurers, brokers, and a contractors’ association that represents government contractors; analyzed data contained in the FPDS-NG; and interviewed officials of 2 large contractors and 8 small businesses. 
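The paired pre-post comparison of effective rates described above can be illustrated with a small sketch. The report does not specify its statistical method here (appendix II does), so the approach below, a percentile bootstrap confidence interval for the median paired rate increase, and all the rate figures are assumptions for illustration only:

```python
import random
import statistics

# Hypothetical paired effective rates ($ per $100 of payroll) for five contractors,
# before (single insurer program) and after (open market) the transition.
pre  = [4.00, 5.50, 3.20, 6.10, 4.80]
post = [7.00, 6.50, 5.20, 8.10, 6.80]
diffs = [b - a for a, b in zip(pre, post)]   # paired increases per contractor

median_increase = statistics.median(diffs)   # point estimate of the median increase

# Percentile bootstrap: resample the paired differences with replacement and
# take the 2.5th and 97.5th percentiles of the resampled medians.
random.seed(0)                               # fixed seed for reproducibility
boot = sorted(
    statistics.median(random.choices(diffs, k=len(diffs)))
    for _ in range(2000)
)
ci_low, ci_high = boot[int(0.025 * 2000)], boot[int(0.975 * 2000) - 1]
```

Restricting the comparison to contractors observed in both periods, as the report does with its 36-contractor subsample, is what makes a paired analysis like this possible: each contractor serves as its own control.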
To determine what regulations and guidance State has regarding small businesses’ participation in agency contracting, we examined the FAR, DOSAR, and FAH. To determine the extent to which State received and acted upon information indicating how its transition might affect small businesses, we spoke with State officials from the Office of the Procurement Executive, Office of Acquisitions Management, and Office of Small and Disadvantaged Business Utilization (OSDBU). The director of State’s OSDBU, whose duties include serving as an advocate for small business participation in State contracting, told us that State is meeting its small business goals as set by the Small Business Administration. However, we found the annual Small Business Administration reports, which include measures of federal agencies’ compliance with small business contracting goals, to be of limited use because the measures are based on contract awards with a U.S. principal place of performance. In order to better understand how small businesses might be affected by State’s transition, we interviewed 5 of the 6 largest DBA insurers in the United States. These 6 insurers processed almost all DBA insurance claims by U.S. contractors in 2013. In addition, we interviewed 3 of the largest brokers that connect contractors to insurers and a national association that represents government contractors. We also analyzed FPDS-NG data to determine the extent to which the proportion of small businesses hired by State changed after the transition. Because the FPDS-NG does not indicate which contracts or contractors are required to obtain DBA insurance, we imposed the following limits to obtain relevant data for analysis. We (1) considered only new awards, (2) considered only U.S. firms, (3) did not consider awards for work performed in the United States, (4) distinguished between countries where DBA applies and where DBA does not apply, and (5) distinguished between contracts valued above and below $25,000. 
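The five limits above amount to a sequence of filters applied to award records. A schematic sketch follows; the field names and country codes are hypothetical stand-ins, not actual FPDS-NG data elements:

```python
# Illustrative set of countries where DBA applies; in practice this is
# determined by the applicability of the Defense Base Act, not hard-coded.
DBA_COUNTRIES = {"AF", "IQ", "PK"}

def in_scope(award: dict) -> bool:
    """Apply limits (1)-(5): new award, U.S. firm, performed abroad,
    in a DBA country, and valued above $25,000."""
    return (award["new_award"]
            and award["us_firm"]
            and award["place_of_performance"] != "US"
            and award["place_of_performance"] in DBA_COUNTRIES
            and award["value"] > 25_000)

# Hypothetical award records with made-up field names.
awards = [
    {"new_award": True,  "us_firm": True,  "place_of_performance": "IQ", "value": 90_000, "small_business": True},
    {"new_award": True,  "us_firm": True,  "place_of_performance": "AF", "value": 30_000, "small_business": False},
    {"new_award": True,  "us_firm": True,  "place_of_performance": "US", "value": 80_000, "small_business": True},
    {"new_award": False, "us_firm": True,  "place_of_performance": "IQ", "value": 50_000, "small_business": True},
    {"new_award": True,  "us_firm": False, "place_of_performance": "PK", "value": 70_000, "small_business": False},
]

scoped = [a for a in awards if in_scope(a)]
small_share = sum(a["small_business"] for a in scoped) / len(scoped)
```

Even with filters like these, the key limitation the report notes remains: nothing in the record itself flags a DBA requirement, so country and value are only proxies for it.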
We performed a number of other analyses, including subsets of contractors who performed contracts valued above $25,000 in countries with a DBA requirement, but the results were similarly inconclusive. There are limitations on the data we used. For example, it is not possible to identify which contracts contain a DBA requirement without examining the individual contracts themselves. There is no requirement that agencies include DBA-related data in their mandatory procurement data reporting to the FPDS-NG, and State has not established a mechanism for collecting data on DBA costs in an open market system. It is therefore not possible to identify exactly which State contractors are affected, if at all, by State's transition. Moreover, a limited amount of time has passed since the transition, so the window is small for observing effects. The transition occurred on July 21, 2012, but the most recent FPDS-NG data extend only to September 30, 2013. We determined that the FPDS-NG data were sufficiently reliable for the purposes of this report. We also interviewed a judgmental sample of 10 State contractors (8 small businesses and 2 large contractors) to discover how, if at all, State's transition affected them and to uncover details that a quantitative analysis of FPDS-NG data might not reveal. We selected 8 small businesses using the judgmental selection criteria listed below. We limited our sample to contractors that:
- were required to obtain DBA coverage for work performed overseas;
- signed contracts with State both before and after State's transition to an open market system;
- performed work for State overseas in countries affected by the transition to an open market system;
- represented a range of contract dollar values and type of work performed (i.e., construction, security, or services); and
- were certified as small businesses by the Small Business Administration.
We excluded contracts that were valued less than $25,000 in some of our analyses because we judged those to be less likely to include a DBA requirement because of the nature of the work performed. Because the most recent FPDS-NG data are from fiscal year 2013, we developed a list of contractors who were newly awarded State contracts in fiscal year 2013 and then searched FPDS-NG data from fiscal years 2010 through 2013 to determine whether they matched the criteria above. From the resulting narrow pool of contractors, we judgmentally selected 8 small businesses to interview. Since DBA premium rates varied in State's single insurer program according to one of four rate categories, we selected contractors in different rate categories to approximate the representation of service types and contract sizes among all State contractors. Because we intended to use these interviews to better understand the complexity and details of how small businesses experienced the transition and to uncover any details potentially not revealed in our analysis of federal procurement data—not to draw generalizable conclusions about all of the small businesses that contracted with State—we determined that eight interviews would be sufficient. To gain an additional perspective and supplement the testimony of the 8 small businesses, we interviewed 2 large contractors that work regularly with small subcontractors. We selected these 2, who meet the first 4 criteria listed above, because their responses to our request for premium rate data indicated they had insight into how the transition may have affected small businesses. The interviews, which took place in person or by telephone conference, used a standard set of questions, containing both open- and close-ended questions that allowed contractors to share details of their experiences both before and after State's transition and allowed us to compare their testimony.
In all 10 interviews, we spoke with a representative whose job duties required knowledge of DBA insurance requirements (for example, a chief operating officer, accountant, or contract manager). Because this is a nongeneralizable sample, results cannot be used to make inferences about the entire population of small businesses that contracted with State during the period covered by our review. To examine our second objective, the impact of the Department of State’s (State) transition to the open market system on premiums paid by State’s contractors, we collected data on contractors’ Defense Base Act (DBA) insurance premiums, rates, and payroll during State’s single insurer program and in the open market. Data on premiums, rates, and payroll were collected from a simple random sample of contractors in Afghanistan, Iraq, Pakistan, South Africa, and Thailand. We identified the set of countries by analyzing country summary reports of contract actions for State and the U.S. Agency for International Development (USAID) in the Federal Procurement Database System-Next Generation (FPDS-NG) for fiscal years 2010 through 2013, identifying the top 10 countries for each agency in terms of number of contract actions and choosing the countries that appeared in the top 10 list for both agencies. We identified the population of contractors in each country by obtaining from State a full list of State contractors whose principal place of performance was outside the United States. We also requested data on the contractors’ country of location, the contracting officers’ determination of whether the business was classified as a small business, and the contractors’ contact information. We restricted our analysis to contracts that were procured through State’s Office of Acquisitions Management. 
We chose a random sample of contractors in the 5 countries identified above and inquired about the DBA premiums, payroll, and premium rates during the single insurer program and in the open market system. When the task order was from an indefinite delivery/indefinite quantity contract and the contractor submitted DBA information for the overall indefinite delivery/indefinite quantity contract, we applied the DBA data to the contractor in that country. To assess the reliability of the FPDS-NG data, we interviewed knowledgeable officials and reviewed publicly available documentation regarding the collection and use of the data. We determined that the data were sufficiently reliable for the purposes of this report. Contractor data on DBA premiums, premium rates, and payroll were self-reported; however, we reviewed the data provided, performed logical tests on them, conducted some follow-up with the contractors, and determined the data were sufficiently reliable for our analysis. In total, 46 contractors reported their total payrolls and total premiums in the single insurer program, and 57 contractors reported their payrolls and premiums in the open market system. Of 81 contractors, 36 reported paying premiums and payroll in both the single insurer program and the open market system; 10 contractors reported paying premiums and payroll in the single insurer program only. An additional 21 contractors reported paying premiums and payroll in the open market system only. (A contractor's total premium is the greater of (1) the premium rate multiplied by payroll or (2) the minimum premium. The effective premium rate is the total premium divided by $100 of payroll; because of the minimum premium, the effective premium rate may be higher than the quoted premium rate.) In this section we present a graphical depiction of the distribution of the DBA premiums and effective premiums paid by our sample of State contractors.
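Before turning to the graphs, the premium and effective-rate definitions just given can be illustrated with a short calculation. The figures below are hypothetical, not drawn from our sample; the $5,000 minimum premium is an assumed value for illustration only.

```python
def premium(rate_per_100, payroll, minimum_premium):
    """Total premium: the greater of the quoted rate applied to payroll
    or the policy's minimum premium."""
    return max(rate_per_100 * payroll / 100, minimum_premium)

def effective_rate(total_premium, payroll):
    """Effective premium rate: total premium per $100 of payroll."""
    return total_premium / (payroll / 100)

# Hypothetical small contractor for whom the minimum premium binds:
# the quoted rate yields only 4.00 * 500 = $2,000, so the $5,000
# minimum applies instead.
p = premium(rate_per_100=4.00, payroll=50_000, minimum_premium=5_000)
eff = effective_rate(p, 50_000)  # $10.00 per $100 of payroll
```

This is why the effective rate (here 10.00) can exceed the quoted rate (4.00) for small payrolls, a point relevant to the small-business discussion elsewhere in this report.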
Histograms (i.e., bar charts that show how frequently data occur in certain intervals) are provided for the overall sample, as well as disaggregated by contractor category. The histograms in figures 6 through 9 show that the distributions of premiums in both the single insurer program and the open market are skewed to the right, toward higher values. As a result, we supplemented our analysis in the second findings section of the report with statistical methods that are less sensitive to skewed data. For example, we conducted two tests for statistical significance, (1) sign tests, which test for differences in medians, and (2) Wilcoxon signed ranks tests, which test for differences in overall distributions, and we also calculated confidence intervals for the median effective rates. Because the original sample was not drawn from a stratified random sample, some categories contain relatively fewer observations than others. We analyzed trends in contractors' DBA premiums before and after State's transition to an open market system, and examined how premium rates varied by contractor category, country, and business size. We also compared open market DBA rates to rates that would have prevailed in two hypothetical scenarios in which State continued the single insurer program. The two hypothetical scenarios are computed as follows: 1. In the highest contract rates scenario, State and the single insurer continue the single insurer program, with the insurance carrier charging all contractors the highest rates allowable in the 2008 contract along with a lump sum amount to compensate the insurance carrier for historical excess losses.
This lump sum amount, which is the same for all contractors, represents a retroactive adjustment of approximately $27 million (divided equally among overseas contractors), paid to the single insurer as compensation for performing the second option year of the 2008 DBA contract at the lowest tier of prices when, according to the insurer, the highest tier of rates should have been charged. 2. In the renegotiated rates scenario, State and the single insurer continue the single insurer program, but renegotiate premium rates. The single insurance provider charges security contractors the higher, renegotiated security rate. The new rate is the average of the higher rate requested by the single insurer and a rate proposed by State as its original negotiating position. All other contractors pay the same rate they were paying in the third option year. All contractors, regardless of contractor category, pay a lump sum to compensate the insurance carrier for historical excess losses. The lump sum represents the retroactive adjustment described in the scenario above. To determine whether premium rates in the hypothetical scenarios described in the report differed from open market premium rates, we conducted statistical tests of the difference between open market and hypothetical rates, by contractor category. We conducted two different tests: (1) sign tests, which test for differences in medians, and (2) Wilcoxon signed ranks tests, which test for differences in overall distributions. In table 8, we report results and p-values for each test. With the exception of rates for security contractors, we were unable to detect a robust, statistically significant difference between open market effective DBA rates and effective rates under the hypothetical scenarios.
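For readers unfamiliar with the paired tests used here, the exact sign test can be computed directly, as in the pure-Python sketch below (the Wilcoxon signed ranks test is typically run with a statistics package). The paired rates are hypothetical, not our sample data.

```python
from math import comb

def sign_test_p(before, after):
    """Two-sided exact sign test on paired data: under the null hypothesis
    of equal medians, positive and negative paired differences are equally
    likely, so the count of positive differences is Binomial(n, 0.5)."""
    diffs = [a - b for b, a in zip(before, after) if a != b]  # drop ties
    n = len(diffs)
    k = sum(d > 0 for d in diffs)
    tail = min(k, n - k)
    # double the smaller binomial tail probability at p = 0.5
    p = 2 * sum(comb(n, i) for i in range(tail + 1)) / 2 ** n
    return min(p, 1.0)

# Hypothetical effective rates for 8 paired contractors ($ per $100 of payroll)
single_insurer = [4.0, 5.5, 3.2, 6.1, 4.8, 5.0, 3.9, 4.4]
open_market    = [4.5, 6.0, 3.0, 6.5, 5.2, 5.6, 4.1, 4.9]
p_value = sign_test_p(single_insurer, open_market)  # 0.0703
```

With 7 of 8 hypothetical contractors paying more in the open market, the two-sided p-value is about 0.07, so the difference in medians would not be statistically significant at the 5 percent level, illustrating how a visible increase can still fail the test in a small sample.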
Figure 10 shows confidence intervals for the median effective DBA premium rates that State contractors reported having paid in the open market and the median effective rates we calculated they would have paid under two hypothetical scenarios. A confidence interval is a measure of the reliability of an estimate. To calculate the confidence intervals shown in figure 10, we ran quantile (median) regressions of effective rates for subsamples of the data versus a constant term; the confidence interval on the constant term represents the confidence interval for the median effective rate. For services and construction contractors, our analysis did not reveal a statistically significant difference between the median effective rate paid by contractors in the open market and the median effective rate that they were likely to have paid if State had continued the single insurer program. For security contractors, our analysis showed that open market effective rates were significantly lower than effective rates under the hypothetical scenarios, which is reflected in the complete separation of the confidence intervals for this contractor category in figure 10. Open market premium rates for security contractors in our sample could have been lower than the rates calculated for them under the hypothetical scenarios in part because insurers in the open market may choose to insure only security contractors with a good loss history; in other words, contractors in our sample may have benefited from the open market to an extent that may not hold across all security contractors. Brokers told us that the open market allowed insurers to be selective about the customers they choose to insure. 
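Our intervals came from quantile regression of effective rates on a constant. As a rough cross-check, a confidence interval for a median can also be built without regression software using the standard distribution-free order-statistic method, sketched below with hypothetical rates rather than our sample data.

```python
from math import comb

def median_ci(sample, conf=0.95):
    """Distribution-free confidence interval for a median: for a sorted
    sample x of size n, the interval (x[j], x[n-1-j]) has exact coverage
    1 - 2*P(Binomial(n, 0.5) <= j). Return the tightest such interval
    whose coverage still meets the requested confidence level."""
    x = sorted(sample)
    n = len(x)
    best = (x[0], x[-1])
    for j in range(n // 2):
        coverage = 1 - 2 * sum(comb(n, i) for i in range(j + 1)) / 2 ** n
        if coverage >= conf:
            best = (x[j], x[n - 1 - j])  # tighter interval, coverage still >= conf
        else:
            break
    return best

# Hypothetical effective DBA rates ($ per $100 of payroll) for 10 contractors
rates = [2.5, 3.1, 3.4, 3.8, 4.0, 4.2, 4.6, 5.0, 5.5, 7.2]
low, high = median_ci(rates)  # (3.1, 5.5)
```

For 10 observations, the 95 percent interval runs from the 2nd to the 9th order statistic. Note how wide such an interval is at small sample sizes, which is consistent with our caution that larger samples would be needed for conclusive comparisons.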
Additionally, the security contractors in our sample were larger contractors that may have been able to pool the risk of security contracting alongside work with lower risk exposure to reduce their DBA premiums: only 1 of the 8 security contractors in our sample was a small business, and 1 security contractor told us that its payroll incorporated both security and clerical employees. Further analysis with a larger sample of contractors would be necessary to conclusively determine whether open market premium rates differed from those under a single insurer program. Although not discussed in the body of the report, an additional analysis that we conducted compared DBA premium rates for State contractors with those of other agencies with single insurer programs, namely USAID and the U.S. Army Corps of Engineers (USACE). We found that State contractors active from fiscal year 2010 through fiscal year 2013 had effective DBA premium rates that were generally higher than those of contractors for USACE and USAID (see table 9). According to agency officials and industry experts, the difference in premiums reflects various factors, such as risk exposure, nature of work, and size of risk pool. Over the same time period, the growth rate of State contractors' DBA premium rates was higher than that of contractors for other U.S. agencies, as USAID's DBA rates remained fixed according to the terms of its single insurer contract, and USACE's DBA rates decreased for services and construction contractors and remained constant for aviation and security contractors. However, as indicated in our analysis above, the increases in State's DBA rates were, in general, not statistically different from the increases under two hypothetical scenarios in which State continued its single insurer program. Michael J. Courts, (202) 512-8980 or courtsm@gao.gov. In addition to the contact named above, Thomas Costa (Assistant Director), José M.
Peña, III (Analyst-in-Charge), Gezahegne Bekele, Sonja Benson, Gergana Danailova-Trainor, David Dayton, Teresa Abruzzo Heger, Martin De Alteriis, Mark Dowling, Etana Finkler, Jeff Hartnett, Fang He, Julia Kennon, Ben Nelson, Kyerion Printup, John O'Trakoun, Jerry Sandau, Christina Werth, and Timothy Young made key contributions to this report.

Standards for Internal Control in the Federal Government. GAO-14-704G. Washington, D.C.: September 2014.
Market Research: Better Documentation Needed to Inform Future Procurements at Selected Agencies. GAO-15-8. Washington, D.C.: October 9, 2014.
Iraq and Afghanistan: State and DOD Should Ensure Interagency Acquisitions Are Effectively Managed and Comply with Fiscal Law. GAO-12-750. Washington, D.C.: August 2, 2012.
Acquisition Planning: Opportunities to Build Strong Foundations for Better Services Contracts. GAO-11-672. Washington, D.C.: August 9, 2011.
Department of Homeland Security: Improvements Can Further Enhance Ability to Acquire Innovative Technologies Using Other Transaction Authority. GAO-08-1088. Washington, D.C.: September 23, 2008.
Defense Contracting: Progress Made in Implementing Defense Base Act Requirements, but Complete Information Is Lacking. GAO-08-772T. Washington, D.C.: May 15, 2008.
Defense Base Act Insurance: Review Needed of Cost and Implementation Issues. GAO-05-280R. Washington, D.C.: April 29, 2005.
Homeland Security: Further Action Needed to Promote Successful Use of Special DHS Acquisition Authority. GAO-05-136. Washington, D.C.: December 15, 2004.
Auditing and Financial Management: Standards for Internal Control in the Federal Government. GAO-AIMD-00-21.3.1. Washington, D.C.: November 1, 1999.

DBA requires U.S. government contractors to buy workers' compensation insurance for most employees working overseas. The cost of this insurance, if allowable under federal regulations, is generally reimbursable under government contracts.
From 1992 until 2012, State had a contract with a single insurer to supply all State's contractors working overseas with DBA insurance. In July 2012, after State unsuccessfully sought to solicit a new DBA single insurer agreement, the single insurer program ended and State transitioned to a system requiring its contractors to obtain DBA insurance on the open market. However, concerns were raised about the transition and its impact on State's costs and on small businesses' competitiveness. GAO was asked to review State's transition. This report assesses (1) State's management of the transition to an open market system, (2) the change's effect on contractors' premium rates, and (3) the change's effect on small businesses. GAO analyzed State documents; reviewed federal and State contracting regulations; analyzed premium rate data and federal contracting data; and interviewed officials from State, the insurance industry, and contracting firms.

The Department of State (State) did not follow leading acquisition practices in transitioning from a single insurer Defense Base Act (DBA) program to an open market system. Leading practices emphasize adequately documenting market research, allowing enough time to complete a solicitation, and collecting and analyzing data to select among alternatives, but State took limited measures to document the market research it performed and had little time to complete its 2012 solicitation. State included provisions in the solicitation to which insurers strongly objected, received no offers, and had to cancel the solicitation 3 days before its existing single insurer contract was to expire. As a result, State had to quickly transition to an open market system without weighing the relative costs and benefits to determine which insurance system best served its needs.
Until State conducts such an evaluation, it cannot be assured that the open market system is the better alternative, and unless State incorporates leading practices into any future single insurer solicitations, it risks a similar outcome.

GAO found that State contractors' DBA premiums increased following the transition, but the increases were in a range similar to those likely to have occurred if State had continued its single insurer program. For example, median DBA premium rates increased by $1.98 per $100 of payroll. GAO analysis also shows that the increase in DBA premium rates after the transition was in a range comparable to the increase in DBA premium rates requested by State's single DBA insurer, which said it had lost money under the prior contract.

Existing data do not show a clear effect on small businesses resulting from State's transition to an open market system, but insurers and contractors have expressed concern that the change has had or could have an adverse effect. GAO analysis of federal procurement data from fiscal years 2009 through 2013 found a decrease in the percentage of contracts awarded to small businesses, but GAO could not link this to State's transition. Information GAO gathered from insurance industry officials and contractors shows that there is a potential for adverse effects, for example, denial of coverage and higher effective premium rates. State's policy is to maximize opportunities for small businesses, but it has not assessed whether its transition to an open market DBA system is affecting those opportunities. Without such an assessment, State cannot be assured that it is meeting its policy goal of maximizing opportunities for small businesses.

State should (1) determine whether an open market system best suits its needs, (2) incorporate leading practices into any future single insurer solicitation, and (3) assess the effects of its transition on small businesses. State concurred with GAO's recommendations.
Since fiscal year 2011, DHS has used changes in the number of apprehensions on the southwest border between ports of entry as an interim measure for border security as reported in its annual performance reports. In fiscal year 2011, DHS reported that it met its goal of securing the land border, as apprehensions decreased. In addition to collecting data on apprehensions, Border Patrol collects and analyzes various data on the number and types of entrants who illegally cross the southwest border between the ports of entry, including collecting estimates on the total number of identified—or “known”—illegal entries. Border Patrol’s estimate of known illegal entries includes illegal, deportable entrants who were apprehended, in addition to the number of entrants who illegally crossed the border but were not apprehended because they crossed back into Mexico (referred to as turn backs) or continued traveling into the U.S. interior (referred to as got aways). Border Patrol collects these data as an indicator of the potential border threat across locations. Border Patrol data show that apprehensions within each southwest Border Patrol sector decreased from fiscal years 2006 to 2011, generally mirroring the decrease in estimated known illegal entries within each sector. In the Tucson sector, for example, our analysis of Border Patrol data showed that apprehensions decreased by 68 percent from fiscal years 2006 to 2011, compared with a 69 percent decrease in estimated known illegal entries, as shown in figure 1. Border Patrol officials attributed the decrease in apprehensions and estimated known illegal entries from fiscal years 2006 through 2011 within southwest border sectors to multiple factors, including changes in the U.S. economy and successful achievement of its strategic objectives.
Border Patrol’s ability to address objectives laid out in the 2004 Strategy was strengthened by increases in personnel and technology, and infrastructure enhancements, according to Border Patrol officials. For example, Tucson sector Border Patrol officials said that the sector increased manpower over the past 5 years through an increase in Border Patrol agents that was augmented by National Guard personnel, and that CBP’s Secure Border Initiative (SBI) provided border fencing and other infrastructure, as well as technology enhancements. Border Patrol officials also attributed decreases in estimated known illegal entries and apprehensions to the deterrence effect of CBP consequence programs— programs intended to deter repeated illegal border crossings by ensuring the most efficient consequence or penalty for individuals who illegally enter the United States. Data reported by Border Patrol following the issuance of our December 2012 report show that total apprehensions across the southwest border increased from over 327,000 in fiscal year 2011 to about 357,000 in fiscal year 2012. It is too early to assess whether this increase indicates a change in the trend for Border Patrol apprehensions across the southwest border. Border Patrol collects other types of data that are used by sector management to help inform assessment of its efforts to secure the border against the threats of illegal migration, smuggling of drugs and other contraband, and terrorism. These data show changes, for example, in the (1) percentage of estimated known illegal entrants who are apprehended, (2) percentage of estimated known illegal entrants who are apprehended more than once (repeat offenders), and (3) number of seizures of drugs and other contraband. 
Border Patrol officials at sectors we visited, and our review of fiscal years 2010 and 2012 sector operational assessments, indicated that sectors have historically used these types of data to inform tactical deployment of personnel and technology to address cross-border threats; however, the agency has not analyzed these data at the national level to inform strategic decision making, according to Border Patrol headquarters officials. These officials stated that greater use of these data in assessing border security at the national level may occur as the agency transitions to the new strategic plan. Apprehensions compared with estimated known illegal entries. Our analysis of Border Patrol data showed that the percentage of estimated known illegal entrants who were apprehended by the Border Patrol over the past 5 fiscal years varied across southwest border sectors. The Tucson sector, for example, showed little change in the percentage of estimated known illegal entrants who were apprehended by Border Patrol over the past 5 fiscal years. Specifically, our analysis showed that of the total number of estimated known aliens who illegally crossed the Tucson sector border from Mexico each year, Border Patrol apprehended 62 percent in fiscal year 2006 compared with 64 percent in fiscal year 2011, an increase of about 2 percentage points. Border Patrol headquarters officials said that the percentage of estimated known illegal entrants who are apprehended is primarily used to determine the effectiveness of border security operations at the tactical—or zone—level but can also affect strategic decision making. The data are also used to inform overall situational awareness at the border, which directly supports field planning and redeployment of resources. Repeat offenders. 
Changes in the percentage of persons apprehended who have repeatedly crossed the border illegally (referred to as the recidivism rate) is a factor that Border Patrol considers in assessing its ability to deter individuals from attempting to illegally cross the border. Our analysis of Border Patrol apprehension data showed that the recidivism rate has declined across the southwest border by about 6 percentage points from fiscal years 2008 to 2011 in regard to the number of apprehended aliens who had repeatedly crossed the border in the prior 3 years. Specifically, our analysis showed that the recidivism rate across the overall southwest border was about 42 percent in fiscal year 2008 compared with about 36 percent in fiscal year 2011. The Tucson sector had the third-highest recidivism rate across the southwest border in fiscal year 2011, while the highest rate of recidivism occurred in El Centro sector, as shown in figure 2. According to Border Patrol headquarters officials, the agency has implemented various initiatives designed to address recidivism through increased prosecution of individuals apprehended for crossing the border illegally. Seizures of drugs and other contraband. Border Patrol headquarters officials said that data regarding seizures of drugs and other contraband are good indicators of the effectiveness of targeted enforcement operations, and are used to identify trends in the smuggling threat and as indicators of overall cross-border illegal activity, in addition to potential gaps in border coverage, risk, and enforcement operations. However, these officials stated that these data are not used as a performance measure for overall border security because while the agency has a mission to secure the border against the smuggling threat, most smuggling is related to illegal drugs, and that drug smuggling is the primary responsibility of other federal agencies, such as the Drug Enforcement Administration and U.S. 
Immigration and Customs Enforcement, Homeland Security Investigations. Our analysis of Border Patrol data indicated that across southwest border sectors, seizures of drugs and other contraband increased 83 percent from fiscal years 2006 to 2011, with drug seizures accounting for the vast majority of all contraband seizures. Specifically, the number of drug and contraband seizures increased from 10,321 in fiscal year 2006 to 18,898 in fiscal year 2011. Most seizures of drugs and other contraband occurred in the Tucson sector, with about 28 percent, or 5,299, of the 18,898 southwest border seizures occurring in the sector in fiscal year 2011 as shown in figure 3. Data reported by Border Patrol following the issuance of our December 2012 report show that seizures of drugs and other contraband across the southwest border decreased from 18,898 in fiscal year 2011 to 17,891 in fiscal year 2012. It is too early to assess whether this decrease indicates a change in the trend for Border Patrol seizures across the southwest border. Southwest border sectors scheduled most agent workdays for enforcement activities during fiscal years 2006 to 2011, and the activity related to patrolling the border accounted for a greater proportion of enforcement activity workdays than any of the other activities. Sectors schedule agent workdays across various activities categorized as enforcement or nonenforcement. Across enforcement activities, our analysis of Border Patrol data showed that all sectors scheduled more agent workdays for “patrolling the border”—activities defined to occur within 25 miles of the border—than any other enforcement activity, as shown in figure 4. Border Patrol duties under this activity include patrolling by vehicle, horse, and bike; patrolling with canines; performing sign cutting; and performing special activities such as mobile search and rescue. 
Other enforcement activities to which Border Patrol scheduled agent workdays included conducting checkpoint duties, developing intelligence, and performing aircraft operations. Border Patrol sectors and stations track changes in their overall effectiveness as a tool to determine if the appropriate mix and placement of personnel and assets are being deployed and used effectively and efficiently, according to officials from Border Patrol headquarters. Border Patrol calculates an overall effectiveness rate using a formula in which it adds the number of apprehensions and turn backs in a specific sector and divides this total by the total estimated known illegal entries— determined by adding the number of apprehensions, turn backs, and got aways for the sector. Border Patrol sectors and stations report this overall effectiveness rate to headquarters. Border Patrol views its border security efforts as increasing in effectiveness if the number of turn backs as a percentage of estimated known illegal entries has increased and the number of got aways as a percentage of estimated known illegal entries has decreased. Border Patrol data showed that the effectiveness rate for eight of the nine sectors on the southwest border increased from fiscal years 2006 through 2011. For example, our analysis of Tucson sector apprehension, turn back, and got away data from fiscal years 2006 through 2011 showed that while Tucson sector apprehensions remained fairly constant at about 60 percent of estimated known illegal entries, the percentage of reported turn backs increased from about 5 percent to about 23 percent, while the percentage of reported got aways decreased from about 33 percent to about 13 percent, as shown in figure 5. 
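The effectiveness-rate formula described above can be expressed directly. The counts in the sketch below are illustrative shares per 100 estimated known entries, chosen to be consistent with the Tucson sector percentages cited in this report; they are not actual Border Patrol counts.

```python
def effectiveness_rate(apprehensions, turn_backs, got_aways):
    """Overall effectiveness rate: (apprehensions + turn backs) divided by
    estimated known illegal entries, where estimated known illegal entries
    are apprehensions + turn backs + got aways."""
    entries = apprehensions + turn_backs + got_aways
    return (apprehensions + turn_backs) / entries

# Illustrative Tucson-sector shares per 100 estimated known entries
fy2006 = effectiveness_rate(62, 5, 33)   # 0.67
fy2011 = effectiveness_rate(64, 23, 13)  # 0.87
```

The calculation shows why the rate can rise even when apprehensions stay roughly constant: shifting entries from got aways to turn backs raises the numerator while the denominator is unchanged.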
As a result of these changes in the mix of turn backs and got aways, Border Patrol data showed that enforcement effort, or the overall effectiveness rate for Tucson sector, improved 20 percentage points from fiscal year 2006 to fiscal year 2011, from 67 percent to 87 percent. Border Patrol headquarters officials said that differences in how sectors define, collect, and report turn back and got away data used to calculate the overall effectiveness rate preclude comparing performance results across sectors. Border Patrol headquarters officials stated that until recently, each Border Patrol sector decided how it would collect and report turn back and got away data, and as a result, practices for collecting and reporting the data varied across sectors and stations based on differences in agent experience and judgment, resources, and terrain. In terms of defining and reporting turn back data, for example, Border Patrol headquarters officials said that a turn back was to be recorded only if it is perceived to be an “intended entry”—that is, the reporting agent believed the entrant intended to stay in the United States, but Border Patrol activities caused the individual to return to Mexico. According to Border Patrol officials, it can be difficult to tell if an illegal crossing should be recorded as a turn back, and sectors have different procedures for reporting and classifying incidents. In terms of collecting data, Border Patrol officials reported that sectors rely on a different mix of cameras, sign cutting, credible sources, and visual observation to identify and report the number of turn backs and got aways. According to Border Patrol officials, the ability to obtain accurate or consistent data using these identification sources depends on various factors, such as terrain and weather. For example, data on turn backs and got aways may be understated in areas with rugged mountains and steep canyons that can hinder detection of illegal entries. 
In other cases, data may be overstated—for example, in cases where the same turn back identified by a camera is also identified by sign cutting. Double counting may also occur when agents in one zone record as a got away an individual who is apprehended and then reported as an apprehension in another zone. As a result of these data limitations, Border Patrol headquarters officials said that while they consider turn back and got away data sufficiently reliable to assess each sector’s progress toward border security and to inform sector decisions regarding resource deployment, they do not consider the data sufficiently reliable to compare—or externally report—results across sectors. Border Patrol headquarters officials issued guidance in September 2012 to provide a more consistent, standardized approach for the collection and reporting of turn back and got away data by Border Patrol sectors. Each sector is to be individually responsible for monitoring adherence to the guidance. According to Border Patrol officials, it is expected that once the guidance is implemented, data reliability will improve. This new guidance may allow for comparison of sector performance and inform decisions regarding resource deployment for securing the southwest border. Border Patrol officials stated that the agency is in the process of developing performance goals and measures for assessing the progress of its efforts to secure the border between ports of entry and for informing the identification and allocation of resources needed to secure the border, but has not identified milestones and time frames for developing and implementing them. Since fiscal year 2011, DHS has used the number of apprehensions on the southwest border between ports of entry as an interim performance goal and measure for border security as reported in its annual performance report. 
Prior to this, DHS used operational control as its goal and outcome measure for border security and to assess resource needs to accomplish this goal. As we previously testified, at the end of fiscal year 2010, Border Patrol reported achieving varying levels of operational control of 873 (44 percent) of the nearly 2,000 southwest border miles. For example, Yuma sector reported achieving operational control for all of its border miles. In contrast, the other southwest border sectors reported achieving operational control ranging from 11 to 86 percent of their border miles, as shown in figure 6. Border Patrol officials attributed the uneven progress across sectors to multiple factors, including terrain, transportation infrastructure on both sides of the border, and a need to prioritize resource deployment to sectors deemed to have greater risk of illegal activity. DHS transitioned from using operational control as its goal and outcome measure for border security in its Fiscal Year 2010-2012 Annual Performance Report. Citing a need to establish a new border security goal and measure that reflect a more quantitative methodology as well as the department’s evolving vision for border control, DHS established the interim performance goal and measure of the number of apprehensions between the land border ports of entry until a new border control goal and measure could be developed. We previously testified that the interim goal and measure of number of apprehensions on the southwest border between ports of entry provides information on activity levels, but it does not inform program results or resource identification and allocation decisions, and therefore until new goals and measures are developed, DHS and Congress could experience reduced oversight and DHS accountability. 
Further, studies commissioned by CBP have documented that the number of apprehensions bears little relationship to effectiveness because agency officials do not compare these numbers with the amount of cross-border illegal activity. Border Patrol officials stated that the agency is in the process of developing performance goals and measures for assessing the progress of its efforts to secure the border between ports of entry and for informing the identification and allocation of resources needed to secure the border, but has not identified milestones and time frames for developing and implementing them. According to Border Patrol officials, establishing milestones and time frames for the development of performance goals and measures is contingent on the development of key elements of the 2012-2016 Strategic Plan, such as a risk assessment tool, and the agency’s time frames for implementing these key elements—targeted for fiscal years 2013 and 2014—are subject to change. Specifically, under the 2012-2016 Strategic Plan, the Border Patrol plans to continuously evaluate border security—and resource needs—by comparing changes in risk levels against available resources across border locations. Border Patrol officials stated the agency is in the process of identifying performance goals and measures that can be linked to these new risk assessment tools that will show progress and status in securing the border between ports of entry, and determine needed resources, but has not established milestones and time frames for developing and implementing goals and measures because the agency’s time frames for implementing key elements of the plan are subject to change. Standard practices in program management call for documenting the scope of a project as well as milestones and time frames for timely completion and implementation to ensure results are achieved. 
These standard practices also call for project planning—such as identifying time frames—to be performed in the early phases of a program and recognize that plans may need to be adjusted along the way in response to unexpected circumstances. Time frames for implementing key elements of the 2012-2016 Strategic Plan can change; however, milestones and time frames for the development of performance goals and measures could help ensure that goals and measures are completed in a timely manner. To support the implementation of Border Patrol’s 2012-2016 Strategic Plan and identify the resources needed to achieve the nation’s strategic goal for securing the border, we recommended in our December 2012 report that Border Patrol establish milestones and time frames for developing a (1) performance goal, or goals, for border security between the ports of entry that defines how border security is to be measured and (2) performance measure, or measures—linked to a performance goal or goals—for assessing progress made in securing the border between ports of entry and informing resource identification and allocation efforts. DHS agreed with these recommendations and stated that it plans to establish milestones and time frames for developing goals and measures by November 30, 2013. Milestones and time frames could better position CBP to monitor progress in developing and implementing goals and measures, which would provide DHS and Congress with information on the results of CBP efforts to secure the border between ports of entry and the extent to which existing resources and capabilities are appropriate and sufficient. Chairwoman Miller, Ranking Member Jackson Lee, and members of the subcommittee, this concludes my prepared statement. I would be happy to answer any questions you may have at this time. For further information about this testimony, please contact Rebecca Gambler at (202) 512-8777 or gamblerr@gao.gov. 
In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement included Lacinda Ayers (Assistant Director), Frances A. Cook, Barbara A. Guffy, Stanley J. Kostyla, Brian J. Lipman, Jerome T. Sandau, and Ashley D. Vaughan. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Within DHS, U.S. Customs and Border Protection’s (CBP) Border Patrol has primary responsibility for securing the southwest border between ports of entry. CBP reported apprehending over 327,000 illegal entrants and making over 17,150 seizures of drugs along the border in fiscal year 2011. Across the border, the largest share of apprehensions (over 38 percent) and drug seizures (28 percent) occurred in the Tucson sector. This statement discusses (1) apprehension and other data CBP collects to inform changes in southwest border security and data used to show effectiveness of resource deployments, and (2) the extent to which Border Patrol has developed goals and measures to identify resource needs under its new strategic plan. This statement is based on GAO’s December 2012 report on CBP’s management of southwest border resources and prior reports on DHS’s efforts to measure border security, with selected updates from February 2013 on Border Patrol fiscal year 2012 operations data. To conduct prior work, GAO analyzed DHS documents and data from fiscal years 2006 to 2011, and interviewed CBP officials, among other things. To conduct selected updates, GAO reviewed Border Patrol data and interviewed Border Patrol officials.
Since fiscal year 2011, the Department of Homeland Security (DHS) has used changes in the number of apprehensions on the southwest border between ports of entry as an interim measure for border security, as reported in its annual performance plans. In fiscal year 2011, DHS reported a decrease in apprehensions, which met its goal to secure the southwest border. Our analysis of Border Patrol data showed that apprehensions decreased within each southwest border sector from fiscal years 2006 to 2011, generally mirroring decreases in estimated known illegal entries. Border Patrol attributed these decreases in part to changes in the U.S. economy and improved enforcement efforts. In addition to apprehension data, sector management collects and uses other data to assess enforcement efforts within sectors. Our analysis of these data shows that the percentage of estimated known illegal entrants apprehended from fiscal years 2006 to 2011 varied across southwest border sectors; in the Tucson sector, for example, there was little change in the percentage of estimated known illegal entrants apprehended over this time period. The percentage of individuals apprehended who repeatedly crossed the border illegally declined across the border by 6 percent from fiscal years 2008 to 2011. Further, the number of seizures of drugs and other contraband across the border increased from 10,321 in fiscal year 2006 to 18,898 in fiscal year 2011. Additionally, southwest border sectors scheduled more agent workdays in fiscal year 2011 to patrolling the border than to any other enforcement activity. The Tucson sector, for example, scheduled 73 percent of workdays for enforcement activities; of these, 71 percent were scheduled for patrolling within 25 miles of the border. Other sectors scheduled from 44 to 70 percent of enforcement workdays for patrolling the border.
Sectors assess how effectively they use resources to secure the border, but differences in how they collect and report data preclude comparing results. Border Patrol issued guidance in September 2012 to improve the consistency of sector data collection and reporting, which may allow comparison of performance in the future. Border Patrol is developing performance goals and measures to define border security and the resources needed to achieve it, but has not identified milestones and time frames for developing and implementing goals and measures under its new strategic plan. Prior to fiscal year 2011, DHS used operational control—the number of border miles where Border Patrol had the capability to detect, respond to, and interdict cross-border illegal activity—as its goal and measure for border security and to assess resource needs to accomplish this goal. At the end of fiscal year 2010, DHS reported achieving varying levels of operational control of 873 (44 percent) of the nearly 2,000 southwest border miles. In fiscal year 2011, citing a need to establish new goals and measures that reflect a more quantitative methodology and an evolving vision for border control, DHS transitioned to using the number of apprehensions on the southwest border as an interim goal and measure. As GAO previously testified, this interim measure, which reports on program activity levels and not program results, limits DHS and congressional oversight and accountability. Milestones and time frames could assist Border Patrol in monitoring progress in developing goals and measures necessary to assess the status of border security and the extent to which existing resources and capabilities are appropriate and sufficient. In a December 2012 report, GAO recommended that CBP ensure Border Patrol develops milestones and time frames for developing border security goals and measures to assess progress made and inform resource needs.
DHS concurred with these recommendations and plans to address them.
Since the department’s creation in 2003, we have designated the implementation and transformation of DHS as high risk because DHS had to combine 22 agencies—several with major management challenges— into one department, and failure to effectively address DHS’s management and mission risks could have serious consequences for U.S. national and economic security. This high-risk area includes (1) challenges in strengthening DHS’s management functions—financial management, human capital, information technology (IT), and acquisition management—(2) the effect of those challenges on DHS’s mission implementation, and (3) challenges in integrating management functions within and across the department and its components. On the basis of our prior work, in September 2010 we identified and provided to DHS 31 actions and outcomes that are critical to addressing the challenges within the department’s management areas and in integrating those functions across the department. These key actions and outcomes include, among others, validating required acquisition documents in accordance with a department-approved, knowledge-based acquisition process. The Aviation and Transportation Security Act (ATSA) established TSA as the federal agency with primary responsibility for securing the nation’s civil aviation system, which includes the screening of all passengers and property transported from and within the United States by commercial passenger aircraft. In accordance with ATSA, all passengers, their accessible property, and their checked baggage are screened pursuant to TSA-established procedures at more than 450 airports presently regulated for security by TSA. These procedures generally provide, among other things, that passengers pass through security checkpoints where they and their identification documents, and accessible property, are checked by transportation security officers (TSO), other TSA employees, or by private-sector screeners under TSA’s Screening Partnership Program. 
TSA relies upon multiple layers of security to deter, detect, and disrupt persons posing a potential risk to aviation security. These layers include TSOs responsible for screening passengers and their carry-on baggage at passenger checkpoints, using technologies that include x-ray equipment, magnetometers, and Advanced Imaging Technology (AIT), among others. In response to the December 2009 attempted terrorist attack, TSA revised its procurement and deployment strategy for AIT, commonly referred to as full-body scanners, increasing the number of AIT units it planned to procure and deploy. TSA stated that AIT provides enhanced security benefits compared with walk-through metal detectors, such as enhanced detection capabilities for identifying nonmetallic threat objects and liquids. AIT produces an image of a passenger’s body that a screener interprets. The image identifies objects, or anomalies, on the outside of the physical body but does not reveal items beneath the surface of the skin, such as implants. As of May 2012, TSA has deployed more than 670 AIT units to approximately 170 airports and reported that it plans to deploy a total of about 1,250 AIT units. In January 2012, we issued a classified report on TSA’s procurement and deployment of AIT that addressed the extent to which (1) TSA followed DHS acquisition guidance when procuring AIT and (2) deployed AIT units are effective at detecting threats. Another layer of security is checked-baggage screening, which uses technology referred to as explosive detection systems (EDS) and explosives trace detection (ETD). Our past work has found that technology program performance cannot be accurately assessed without valid baseline requirements established at the program start. 
Without the development, review, and approval of key acquisition documents, such as the mission need statement and operational requirements document, agencies are at risk of having poorly defined requirements that can negatively affect program performance and contribute to increased costs. For example, in June 2010, we reported that more than half of 15 DHS programs we reviewed awarded contracts to initiate acquisition activities without component or department approval of documents essential to planning acquisitions, setting operational requirements, or establishing acquisition program baselines. We currently have ongoing work related to this area and we plan to report the results later this year. We made a number of recommendations to help address issues related to these procurements as discussed below. DHS has generally agreed with these recommendations and, to varying degrees, has taken actions to address them. In addition, our past work has found that TSA faces challenges in identifying and meeting program requirements in some of its aviation security programs. For example: We reported in January 2012 that TSA did not fully follow DHS acquisition policies when acquiring AIT, which resulted in DHS approving full AIT deployment without full knowledge of TSA’s revised specifications. Specifically, DHS’s Acquisition Directive 102 required TSA to notify DHS’s Acquisition Review Board (ARB) if AIT could not meet any of TSA’s five key performance parameters (KPP) or if TSA changed a KPP during qualification testing. Senior TSA officials acknowledged that TSA did not comply with the directive’s requirements, but stated that TSA still reached a “good decision” in procuring AIT and that the ARB was fully informed of the program’s changes to its KPPs. Further, TSA officials stated that the program was not bound by the directive because it was a new acquisition process and they believed that the ARB was not fully functioning at the time.
DHS officials stated that the ARB discussed the changed KPP but did not see the documents related to the change and determined that TSA must update the program’s key acquisition document, the Acquisition Program Baseline, before TSA could deploy AIT units. However, we reported that, according to a February 2010 acquisition decision memorandum from DHS, the ARB approved TSA for full-scale production without reviewing the changed KPP. DHS officials stated that the ARB should have formally reviewed changes made to the KPP to ensure that TSA did not change it arbitrarily. According to TSA, it should have submitted its revised requirements for approval, but it did not because there was confusion as to whether DHS should be informed of all changes. We had previously reported that programs procuring new technologies with fluctuating requirements will have a difficult time ensuring that the acquisition is meeting program needs. DHS acquisition oversight officials agreed that changing key requirements is not a best practice for system acquisitions already under way. As a result, we found that TSA procured and deployed a technology that met evolving requirements, but not the initial requirements included in its key acquisition requirements document that the agency initially determined were necessary to enhance the aviation system. We recommended that TSA should develop a roadmap that outlines vendors’ progress in meeting all KPPs. DHS agreed with our recommendation. In July 2011, we reported that TSA revised its EDS requirements to better address current threats, and plans to implement these requirements in a phased approach. However, we reported that some number of EDS machines in TSA’s checked baggage screening fleet are configured to detect explosives at the levels established in the 2005 requirements. The remaining EDS machines are configured to detect explosives at 1998 levels. 
When TSA established the 2005 requirements, it did not have a plan with the appropriate time frames needed to deploy EDSs to meet the requirements. To help ensure that TSA’s checked baggage screening machines are operating most effectively, we recommended that TSA develop a plan to deploy EDSs to meet the most recent explosive-detection requirements and ensure that the new machines, as well as machines deployed in airports, are operated at the levels in established requirements. DHS concurred with our recommendation and has begun taking action to address it; for example, DHS reported that TSA has developed a plan to evaluate its current fleet of EDSs to determine the extent to which they comply with these requirements. However, our recommendation is intended to ensure that TSA operate all EDSs at airports at the most recent requirements. Until TSA develops a plan identifying how it will approach the upgrades for currently deployed EDSs—and the plan includes such items as estimated costs and the number of machines that can be upgraded—it will be difficult for TSA to provide reasonable assurance that its upgrade approach is feasible or cost effective. Our prior work has also shown that not resolving problems discovered during testing can sometimes lead to costly redesign and rework at a later date. Addressing such problems before moving to the acquisition phase can help agencies better manage costs. Specifically: In January 2012, we reported that TSA began deploying AIT before it received approval for how it would test AIT. For example, DHS’s Acquisition Directive 102 required DHS to approve testing and evaluation master plans—the documents that ensure that programs are tested appropriately—prior to testing. However, we found that DHS did not approve TSA’s testing and evaluation master plan until January 2010, after TSA had completed qualification and operational tests and DHS had already approved TSA for full AIT deployment. 
According to DHS, the DHS Director of Operational Testing and Evaluation assessed the testing of AIT prior to the September 2009 ARB meeting and recommended approving the decision to procure AIT at that meeting, even though the ARB did not approve its testing plans. Additionally, we reported that DHS approved TSA’s AIT deployment in September 2009, on the basis of laboratory-based qualification testing results and initial field-based operational testing results that were not completed until later that year. According to DHS officials, the department initially had challenges providing effective oversight to projects already engaged in procurement when the directive was issued. For example, they noted that TSA had begun conducting qualification testing in 2009, but DHS’s first AIT oversight meeting under the new directive was not until later that year. As a result, we reported that TSA procured AIT without DHS’s full oversight and approval or knowledge of how TSA would test and evaluate AIT. In July 2011, we reported that TSA revised the explosive detection requirements for EDS checked baggage screening machines in 2005, though it did not begin operating EDS systems to meet these 2005 requirements until 2009. We also reported that TSA made additional revisions to the EDS requirements in January 2010 but experienced challenges in collecting explosives data on the physical and chemical properties of certain explosives needed by vendors to develop EDS detection software to meet the 2010 requirements. These data are also needed by TSA for testing the machines to determine whether they meet established requirements prior to their procurement and deployment to airports. TSA and S&T have experienced these challenges because of problems associated with safely handling and consistently formulating some explosives, which have also resulted in problems carrying out the EDS procurement as planned.
Further, TSA deployed a number of EDSs that had the software necessary to meet the 2005 requirements, but because testing to compare false-alarm rates had not been completed, the software was not activated; subsequently, these EDSs were detecting explosives at levels established in 1998. According to TSA officials, once completed, the results of this testing to compare false-alarm rates would allow them to determine if additional staff are needed at airports to help resolve false alarms once the EDSs are configured to operate at a certain level of requirements. TSA officials told us that they planned to perform this testing as a part of the ongoing EDS acquisition. We recommended that TSA develop a plan to ensure that TSA has the explosives data needed for each of the planned phases of the 2010 EDS requirements before starting the procurement process for new EDSs or upgrades included in each applicable phase. DHS stated that TSA modified its strategy for the EDS’s competitive procurement in July 2010 in response to the challenges in working with the explosives for data collection by removing the data collection from the procurement process. TSA’s plan to separate the data collection from the procurement process is a positive step, but to fully address our recommendation, a plan is needed to establish a process for ensuring that data are available before starting the procurement process for new EDSs or upgrades for each applicable phase. In June 2011 we reported that S&T’s Test & Evaluation and Standards Office, responsible for overseeing test and evaluation of DHS’s major acquisition programs, reviewed or approved test and evaluation documents and plans for programs undergoing testing, and conducted independent assessments for the programs that completed operational testing. DHS senior-level officials considered the office’s assessments and input in deciding whether programs were ready to proceed to the next acquisition phase.
However, the office did not consistently document its review and approval of components’ test agents—a government entity or independent contractor carrying out independent operational testing for a major acquisition. We recommended, among other things, that S&T develop mechanisms to document its review of component acquisition documentation. DHS concurred and reported actions underway to address them. In October 2009, we reported that TSA deployed explosives trace portals, a technology for detecting traces of explosives on passengers at airport checkpoints, in January 2006 even though TSA officials were aware that tests conducted during 2004 and 2005 on earlier models of the portals suggested the portals did not demonstrate reliable performance in an airport environment. In June 2006, TSA halted deployment of the explosives trace portals because of performance problems and high installation costs. In our 2009 report, we recommended that, to the extent feasible, TSA ensure that tests are completed before deploying new checkpoint screening technologies to airports. DHS concurred with the recommendation and has taken action to address it, such as requiring more-recent technologies to complete both laboratory and operational tests prior to deployment. We have found that realistic acquisition program baselines with stable requirements for cost, schedule, and performance are among the factors that are important to successful acquisitions delivering capabilities within cost and schedule. 
Our prior work has found that program performance metrics for cost and schedule can provide useful indicators of the health of acquisition programs and, when assessed regularly for changes and the reasons that cause changes, such indicators can be valuable tools for improving insight and oversight of individual programs as well as the total portfolio of major acquisitions. Importantly, program performance cannot be accurately assessed without valid baseline requirements established at the program start, particularly those that establish the minimum acceptable threshold required to satisfy user needs. According to DHS’s acquisition guidance, the program baseline is the contract between the program and departmental oversight officials and must be established at program start to document the program’s expected cost, deployment schedule, and technical performance. Establishing such a baseline at program start is important for defining the program’s scope, assessing whether all life-cycle costs are properly calculated, and measuring how well the program is meeting its goals. By tracking and measuring actual program performance against this baseline, management can be alerted to potential problems, such as cost growth or changing requirements, and has the ability to take early corrective action. We reported in April 2012 that TSA has not had a DHS-approved acquisition program baseline since the inception of the Electronic Baggage Screening Program (EBSP) more than 8 years ago. Further, DHS did not require TSA to complete an acquisition program baseline until November 2008. According to TSA officials, they have twice submitted an acquisition program baseline to DHS for approval—first in November 2009 and again in February 2011. An approved baseline will provide DHS with additional assurances that TSA’s approach is appropriate and that the capabilities being pursued are worth the expected costs.
In November 2011, because TSA did not have a fully developed life-cycle cost estimate as part of its acquisition program baseline, DHS instructed TSA to revise the life cycle cost estimates as well as its procurement and deployment schedules to reflect budget constraints. DHS officials told us that they could not approve the acquisition program baseline as written because TSA’s estimates were significantly over budget. TSA officials stated that TSA is currently working with DHS to amend the draft program baseline and plans to resubmit the revised acquisition program baseline before the next Acquisition Review Board meeting, which is currently planned for July 2012. Establishing and approving a program baseline, as DHS and TSA currently plan to do for the EBSP, could help DHS assess the program’s progress in meeting its goals and achieve better program outcomes. In our 2010 report of selected DHS acquisitions, 12 of 15 selected DHS programs we reviewed exhibited schedule delays and cost growth beyond initial estimates. We noted that DHS acquisition oversight officials have raised concerns about the accuracy of cost estimates for most major programs, making it difficult to assess the significance of the cost growth we identified. Leading practices state that the success of a large-scale system acquisition, such as the TSA’s EDS acquisition, depends in part on having a reliable schedule that identifies: (1) when the program’s set of work activities and milestone events will occur, (2) how long they will take, and (3) how they are related to one another. Leading practices also call for the schedule to expressly identify and define the relationships and dependencies among work elements and the constraints affecting the start and completion of work elements. Additionally, best practices indicate that a well-defined schedule also helps to identify the amount of human capital and fiscal resources that are needed to execute an acquisition. 
We reported in January 2012 that TSA did not have plans to require vendors to meet milestones used during the AIT acquisition. We recommended that TSA should develop a roadmap that outlines vendors’ progress in meeting all KPPs because it is important that TSA convey vendors’ progress in meeting those requirements and full costs of the technology to decision makers when making deployment and funding decisions. TSA reported that it hoped vendors would be able to gradually improve meeting KPPs for AIT over time. We reported that TSA would have more assurance that limited taxpayer resources are used effectively by developing a roadmap that specifies development milestones for the technology and having DHS acquisition officials approve this roadmap. DHS agreed with our recommendation. In July 2011, we reported that TSA had established a schedule for the acquisition of EDS machines but it did not fully comply with leading practices, and TSA had not developed a plan to upgrade its EDS fleet to meet the current explosives detection requirements. These leading practices state that the success of a large-scale system acquisition, such as TSA’s EDS acquisition, depends in part on having a reliable schedule that identifies when the program’s set of work activities and milestone events will occur, amongst other things. For example, the schedule for the EDS acquisition is not reliable because it does not reflect all planned program activities and does not include a timeline to deploy EDSs or plans to procure EDSs to meet subsequent phases of explosive detection requirements. We stated that developing a reliable schedule would help TSA better monitor and oversee the progress of the EDS acquisition. DHS concurred with our recommendation to develop and maintain a schedule for the entire EBSP in accordance with the leading practices identified by GAO for preparing a schedule. 
DHS commented that TSA had already begun working with key stakeholders to develop and define requirements for a schedule and to ensure that the schedule aligns with the best practices outlined by GAO. In April 2012, we reported that TSA’s methods for developing life-cycle cost estimates for the EBSP did not fully adhere to best practices for developing these estimates. As highlighted in our past work, a high-quality, reliable cost estimation process provides a sound basis for making accurate and well-informed decisions about resource investments, budgets, assessments of progress, and accountability for results and thus is critical to the success of a program. We reported that TSA’s estimates partially met three characteristics and minimally met one characteristic of a reliable cost estimate. DHS concurred with our recommendation that TSA ensure that its life-cycle cost estimates conform to cost estimating best practices, and identified efforts underway to address it. DHS also acknowledged the importance of producing life-cycle cost estimates that are comprehensive, well documented, accurate, and credible so that they can be used to support DHS funding and budget decisions. In part due to the problems we have highlighted in DHS’s acquisition process, the implementation and transformation of DHS remains on our high-risk list. DHS currently has several plans and efforts underway to address the high-risk designation as well as the more specific challenges related to acquisition and program implementation that we have previously identified. For example, DHS initially described an initiative in the January 2011 version of its Integrated Strategy for High Risk Management to establish a framework, the Integrated Investment Life Cycle Model (IILCM), for managing investments across its components and management functions; strengthening integration within and across those functions; and ensuring mission needs drive investment decisions. 
The department seeks to use the IILCM to enhance resource decision making and oversight by creating new department-level councils to identify priorities and capability gaps, revising how DHS components and lines of business manage acquisition programs, and developing a common framework for monitoring and assessing implementation of investment decisions. We reported in March 2012 that, from the time DHS first reported on the IILCM initiative in January 2011 to its December 2011 revision of its high-risk strategy, the initiative had made little progress though DHS plans to begin using the IILCM by the end of September 2012. In October 2011, to enhance the department’s ability to oversee major acquisition programs, DHS realigned the acquisition management functions previously performed by two divisions within the Office of Chief Procurement Officer to establish the Office of Program Accountability and Risk Management (PARM). PARM, which is responsible for program governance and acquisition policy, serves as the Management Directorate’s executive office for program execution and works with DHS leadership to assess the health of major acquisitions and investments. To help with this effort, PARM is developing a database, known as the Decision Support Tool, intended to improve the flow of information from component program offices to the Management Directorate to support its governance efforts. DHS reported in its December 2011 Integrated Strategy for High Risk Management that senior executives are not confident enough in the data to use the Decision Support Tool developed by PARM to help make acquisition decisions. However, DHS’s plans to improve the quality of the data in this database are limited. At this time, PARM only plans to check the data quality in preparation for key milestone meetings in the acquisition process. 
This could significantly diminish the Decision Support Tool’s value because users cannot confidently identify and take action to address problems meeting cost or schedule goals prior to program review meetings. We reported in March 2012 that DHS has made progress strengthening its management functions, but the department faces considerable challenges. Specifically, DHS has faced challenges overseeing the management, testing, acquisition, and deployment of various technology programs including AIT and EDS. Going forward, DHS needs to continue implementing its Integrated Strategy for High Risk Management and show measurable, sustainable progress in implementing its key management initiatives and corrective actions and achieving outcomes including those related to acquisition management. DHS reported that it plans to revise its Integrated Strategy for High Risk Management in June 2012, which includes management initiatives and corrective actions to address acquisition management challenges, among other management areas. We will continue to monitor and assess DHS’s implementation and transformation efforts through our ongoing and planned work, including the 2013 high-risk update that we expect to issue in early 2013. Chairmen Issa and Mica, Ranking Members Cummings and Rahall, and members of the committees, this concludes my prepared statement. I would be pleased to respond to any questions that you may have. For questions about this statement, please contact Steve Lord at (202) 512-4379 or lords@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Dave Bruno, Assistant Director; Scott Behen, Analyst-in-Charge; Emily Gunn, and Katherine Trimble. Other contributors include: David Alexander, Tom Lombardi, Jason Lee, Linda Miller, and Jerry Seigler. 
Key contributors for the previous work that this testimony is based on are listed within each individual product.

Checked Baggage Screening: TSA Has Deployed Optimal Systems at the Majority of TSA-Regulated Airports, but Could Strengthen Cost Estimates. GAO-12-266. Washington, D.C.: April 27, 2012.

Transportation Security Administration: Progress and Challenges Faced in Strengthening Three Key Security Programs. GAO-12-541T. Washington, D.C.: March 26, 2012.

Aviation Security: TSA Has Made Progress, but Additional Efforts Are Needed to Improve Security. GAO-11-938T. Washington, D.C.: September 16, 2011.

Department of Homeland Security: Progress Made and Work Remaining in Implementing Homeland Security Missions 10 Years after 9/11. GAO-11-881. Washington, D.C.: September 7, 2011.

Homeland Security: DHS Could Strengthen Acquisitions and Development of New Technologies. GAO-11-829T. Washington, D.C.: July 15, 2011.

Aviation Security: TSA Has Taken Actions to Improve Security, but Additional Efforts Remain. GAO-11-807T. Washington, D.C.: July 13, 2011.

Aviation Security: TSA Has Enhanced Its Explosives Detection Requirements for Checked Baggage, but Additional Screening Actions Are Needed. GAO-11-740. Washington, D.C.: July 11, 2011.

Homeland Security: Improvements in Managing Research and Development Could Help Reduce Inefficiencies and Costs. GAO-11-464T. Washington, D.C.: March 15, 2011.

High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 16, 2011.

Department of Homeland Security: Assessments of Selected Complex Acquisitions. GAO-10-588SP. Washington, D.C.: June 30, 2010.

Aviation Security: Progress Made but Actions Needed to Address Challenges in Meeting the Air Cargo Screening Mandate. GAO-10-880T. Washington, D.C.: June 30, 2010.

Aviation Security: TSA Is Increasing Procurement and Deployment of Advanced Imaging Technology, but Challenges to This Effort and Other Areas of Aviation Security Remain. GAO-10-484T. Washington, D.C.: March 17, 2010.
Aviation Security: DHS and TSA Have Researched, Developed, and Begun Deploying Passenger Checkpoint Screening Technologies, but Continue to Face Challenges. GAO-10-128. Washington, D.C.: October 7, 2009.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
DOD is not receiving expected returns on its large investment in weapon systems. While it is committing substantially more investment dollars to develop and procure new weapon systems, our analysis shows that the 2007 portfolio of major defense acquisition programs is experiencing greater cost growth and schedule delays than programs in fiscal years 2000 and 2005. For example, as shown in table 1, total acquisition costs for 2007 programs have increased 26 percent from first estimates, whereas programs in fiscal year 2000 had increased by 6 percent. Total RDT&E costs for programs in 2007 have increased by 40 percent from first estimates, compared to 27 percent for programs in 2000. The story is no better when expressed in unit costs. Based on our analysis for the 2007 portfolio, 44 percent of DOD’s major defense acquisition programs are paying at least 25 percent more per unit than originally expected. The percentage of programs experiencing a 25 percent or more increase in program acquisition unit costs in fiscal year 2000 was 37 percent. The consequence of cost growth is reduced buying power, which can represent significant opportunity costs for DOD. In other words, every dollar spent on inefficiencies in acquiring one weapon system is less money available for other priorities and programs. Total acquisition cost for the current portfolio of major programs under development or in production has grown by nearly $300 billion over initial estimates. As program costs increase, DOD must request more funding to cover the overruns, make trade-offs with existing programs, delay the start of new programs, or take funds from other accounts. Just as importantly, DOD has already missed fielding dates for many programs and many others are behind schedule. Because of program delays, warfighters often have to operate costly legacy systems longer than expected, find alternatives to fill capability gaps, or go without the capability. 
The warfighter’s urgent need for the new weapon system is often cited when the case is first made for developing and producing the system. However, on average, the current portfolio of programs has experienced a 21-month delay in delivering initial operational capability to the warfighter and, in fact, 14 percent are more than 4 years late. In assessing the 72 weapon programs, we found no evidence of widespread adoption of a knowledge-based acquisition process within DOD despite policies to the contrary. Reconciling this discrepancy between policy and practice is essential for getting better outcomes for DOD programs. The majority of programs in our assessment this year proceeded with lower levels of knowledge at critical junctures and attained key elements of product knowledge later in development than expected under best practices (see fig. 1). This exposes programs to significant and unnecessary technology, design, and production risks, and ultimately leads to cost growth and schedule delays. The building of knowledge over a product’s development is cumulative, as one knowledge point builds on the next, and failure to capture key product knowledge can lead to problems that eventually cascade and become magnified throughout product development and production. Very few of the programs we assessed started system development with evidence that the proposed solution was based on mature technologies and proven design features. As a result, programs are still working to mature technologies during system development and production, which causes significantly higher cost growth than programs that start development with mature technologies. Only 12 percent of the programs in our assessment demonstrated all of their critical technologies as fully mature at the start of system development and they have had much better outcomes than the others. 
For those programs in our assessment with immature technologies at development start, total RDT&E costs grew by 44 percent more than for programs that began with mature technologies. More often than not, programs were still maturing technologies late into development and even into production. In addition to ensuring that technologies are mature, best practices for product development suggest that the developer should have delivered a preliminary design of the proposed weapon system based on a robust systems engineering process before committing to system development. This process should allow the developer—the contractor responsible for designing the weapon system—to analyze the customer’s expectations for the product and identify gaps between resources and those expectations, which then can be addressed through additional investments, alternate designs, and ultimately trade-offs. Only 10 percent of the programs in our assessment had completed their preliminary design review prior to committing to system development. The other 90 percent averaged about 2 1/2 years into system development before the review was completed or planned to be completed. Programs like the Aerial Common Sensor and Joint Strike Fighter did not deliver a sound preliminary design at system development start and discovered problems early in their design activities that required substantial resources be added to the programs or, in the case of Aerial Common Sensor, termination of the system development contract. Knowing that a product’s design is stable before system demonstration reduces the risk of costly design changes occurring during the manufacturing of production representative prototypes—when investments in acquisitions become much more significant. 
Only a small portion of the programs in our assessment that have held a design review captured the necessary knowledge to ensure that they had mature technologies at system development start and a stable system design before entering the more costly system demonstration phase of development. Over half of the programs in our assessment did not even have mature technologies at the design review (knowledge that actually should have been achieved before system development start). Also, less than one-quarter of the programs that provided data on drawings released at the design review reached the best practices standard of 90 percent. We have found that programs moving forward into system demonstration with low levels of design stability are more likely than other programs to encounter costly design changes and parts shortages that in turn caused labor inefficiencies, schedule delays, and quality problems. Even by the beginning of production, more than a third of the programs that had entered this phase still had not released 90 percent of their engineering drawings. In addition, we found that over 80 percent of the programs providing data did not or did not plan to demonstrate the successful integration of the key subsystems and components needed for the product through an integration laboratory, or better yet, through testing an early system prototype by the design review. For example, the Navy’s E-2D Advanced Hawkeye moved past the design review and entered systems demonstration without fully proving—through the use of an integration lab or prototype—that the design could be successfully integrated. The program did not have all the components operational in a systems integration lab until almost 2 years after the design review. While the program estimated it had released 90 percent of the drawings needed for the system by the design review, as it was conducting system integration activities, it discovered that it needed substantially more drawings. 
This increase means that the program really had completed only 53 percent of the drawings prior to the review, making it difficult to ensure the design was stable. In addition to lacking mature technologies and design stability, most programs have not or do not plan to capture critical manufacturing and testing knowledge before entering production. This knowledge ensures that the product will work as intended and can be manufactured efficiently to meet cost, schedule, and quality targets. Of the 26 programs in our assessment that have had production decisions, none provided data showing that they had all their critical manufacturing processes in statistical control by the time they entered into the production phase. In fact, only 3 of these programs indicated that they had even identified the key product characteristics or associated critical manufacturing processes—key initial steps to ensuring critical production elements are stable and in control. Failing to capture key manufacturing knowledge before producing the product can lead to inefficiencies and quality problems. For example, the Wideband Global SATCOM program encountered cost and schedule delays because contractor personnel installed fasteners incorrectly. Discovery of the problem resulted in extensive inspection and rework to correct the deficiencies, contributing to a 15-month schedule delay. In addition to demonstrating that the product can be built efficiently, our work has shown that production and post-production costs are minimized when a fully integrated, capable prototype is demonstrated to show it will work as intended and in a reliable manner. We found that many programs are susceptible to discovering costly problems late in development, when the more complex software and advanced capabilities are tested. 
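The "statistical control" that the testimony says programs failed to demonstrate is conventionally checked with control limits: a manufacturing process is considered in control when its measurements stay within roughly three standard deviations of the process mean. A minimal sketch with invented measurements (not data from the Wideband Global SATCOM program or any actual contractor):

```python
# Hedged sketch of a statistical process control check: derive 3-sigma
# control limits from a stable baseline run of a critical manufacturing
# process and flag out-of-control measurements. Data are invented.
import statistics

def control_limits(samples, k=3):
    """Return (lower, upper) control limits at k standard deviations."""
    mean = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)
    return mean - k * sigma, mean + k * sigma

# Hypothetical fastener-torque readings; the last one is anomalous.
torque = [12.1, 11.9, 12.0, 12.2, 11.8, 12.1, 15.0]
lcl, ucl = control_limits(torque[:-1])  # limits from the stable baseline run
out_of_control = [x for x in torque if not lcl <= x <= ucl]
print(out_of_control)
# prints [15.0]
```

Catching an out-of-control process this way before production ramps up is what avoids the kind of inspection and rework the fastener example above describes.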
Of the 33 programs that provided us data about the overlap between system development and production, almost three-quarters still had or planned to have system demonstration activities left to complete after production had begun. For 9 programs, the amount of system development work remaining was estimated to be over 4 years. This practice of beginning production before successfully demonstrating that the weapon system will work as intended increases the potential for discovering costly design changes that ripple through production into products already fielded. Forty programs we assessed provided us information on when they had or planned to have tested a fully configured, integrated production representative article (i.e., prototype) in the intended environment. Of these, 62 percent reported that they did not conduct or do not plan to conduct that test before a production decision. We also found examples where product reliability is not being demonstrated in a timely fashion. Making design changes to achieve reliability requirements after production begins is inefficient and costly. For example, despite being more than 5 years past the production decision, the Air Force’s Joint Air-to-Surface Standoff Missile experienced four failures during four flight tests in 2007, resulting in an overall missile reliability rate of less than 60 percent. The failures halted procurement of new missiles by the Air Force until the problems could be resolved. DOD’s poor acquisition outcomes stem from the absence of knowledge that disciplined systems engineering practices can bring to decision makers prior to beginning a program. Systems engineering is a process that translates customer needs into specific product requirements for which requisite technological, software, engineering, and production capabilities can be identified. 
These activities include requirements analysis, design, and testing in order to ensure that the product’s requirements are achievable given available resources. Early systems engineering provides knowledge that enables a developer to identify and resolve gaps before product development begins. Consequently, establishing a sound acquisition program with an executable business case depends on determining achievable requirements based on systems engineering that are agreed to by both the acquirer and developer before a program’s initiation. We have recently reported on the impact that poor systems engineering practices have had on several programs such as the Global Hawk Unmanned Aircraft System, F-22A, Expeditionary Fighting Vehicle, Joint Air-to-Surface Standoff Missile and others. When early systems engineering, specifically requirements analysis, is not performed, increased cost risk to the government and long development cycle times can be the result. DOD awards cost reimbursement type contracts for the development of major weapon systems because of the risk and uncertainty involved with its programs. Because the government often does not perform the necessary systems engineering analysis before a contract is signed to determine whether a match exists between requirements and available resources, significant contract cost increases can occur as the scope of the requirements change or becomes better understood by the government and contractor. Another potential consequence of the lack of requirements analysis is unpredictable cycle times. Requirements that are limited and well-understood contribute to shorter, more predictable cycle times. Long cycle times promote instability, especially considering DOD’s tendency to have changing requirements and program manager turnover. On the other hand, time- defined developments can allow for more frequent assimilation of new technologies into weapon systems and speed new capabilities to the warfighter. 
In fact, DOD itself suggests that system development should be limited to about 5 years. This year, we gathered new data focused on other factors we believe could have a significant influence on DOD’s ability to improve cost and schedule outcomes. These factors were changes to requirements after development began, the length of program managers’ tenure, reliance on contractors for program support, and difficulty managing software development. Foremost, several DOD programs in our assessment incurred requirement changes after the start of system development and experienced cost increases. Among the 46 programs we surveyed, RDT&E costs increased by 11 percent over initial estimates for programs that have not had requirements changes, while they increased 72 percent among those that had requirements changes (see fig. 2). At the same time, DOD’s practice of frequently changing program managers during a program’s development makes it difficult to hold them accountable for the business cases that they are entrusted to manage and deliver. Our analysis indicates that for 39 major acquisition programs started since March 2001, the average time in system development was about 37 months. The average tenure for program managers on those programs during that time was about 17 months—less than half of what is required by DOD policy. We also found that DOD is relying more on contractors to support the management and oversight of weapon system acquisitions and contracts. For 52 DOD programs that provided information, about 48 percent of the program office staff was composed of individuals outside of DOD (see table 2). In a prior review of space acquisition programs, we found that 8 of 13 cost-estimating organizations and program offices believed the number of cost estimators was inadequate and we found that 10 of those offices had more contractor personnel preparing cost estimates than government personnel. 
We also found examples during this year’s assessment where the program offices expressed concerns about having inadequate personnel to conduct their program office roles. Finally, as programs rely more heavily on software to perform critical functions for weapon systems, we found that a large number of programs are encountering difficulties in managing their software development. Roughly half of the programs that provided us software data had at least a 25 percent growth in their expected lines of code—a key metric used by leading software developers—since system development started. For example, software requirements were not well understood on the Future Combat Systems when the program began; and as the program moves toward preliminary design activities, the number of lines of software code has nearly tripled. Changes to the lines of code needed can indicate potential cost and schedule problems. Our work shows that acquisition problems will likely persist until DOD provides a better foundation for buying the right things, the right way. This involves (1) maintaining the right mix of programs to invest in by making better decisions as to which programs should be pursued given existing and expected funding and, more importantly, deciding which programs should not be pursued; (2) ensuring that programs that are started are executable by matching requirements with resources and locking in those requirements; and (3) making it clear that programs will then be executed based on knowledge and holding program managers responsible for that execution. We have made similar recommendations in past GAO reports. These changes will not be easy to make. They will require DOD to reexamine not only its acquisition process, but its requirement setting and funding processes as well. They will also require DOD to change how it views program success, and what is necessary to achieve success. 
This includes changing the environment and incentives that lead DOD and the military services to overpromise on capability and underestimate costs in order to sell new programs and capture the funding needed to start and sustain them. Finally, none of this will be achieved without a true partnership among the department, the military services, the Congress, and the defense industry. All of us must embrace the idea of change and work diligently to implement it. The first, and most important, step toward improving acquisition outcomes is implementing a new DOD-wide investment strategy for weapon systems. We have reported that DOD should develop an overarching strategy and decision-making processes that prioritize programs based on a balanced match between customer needs and available department resources—that is, the dollars, technologies, time, and people needed to achieve these capabilities. We also recommended that capabilities not designated as a priority should be set out separately as desirable but not funded unless resources were both available and sustainable. This means that the decision makers responsible for weapon system requirements, funding, and acquisition execution must establish an investment strategy in concert. DOD’s Under Secretary of Defense for Acquisition, Technology and Logistics—DOD’s corporate leader for acquisition—should develop this strategy in concert with other senior leaders, for example, combatant commanders, who would provide input on user needs; DOD’s comptroller and science and technology leaders, who would provide input on available resources; and acquisition executives from the military services, who could propose solutions. Finally, once priority decisions are made, Congress will need to enforce discipline through its legislative and oversight mechanisms. Once DOD has prioritized capabilities, it should work vigorously to make sure each new program is executable before the acquisition begins. 
More specifically, this means assuring that requirements for specific weapon systems are clearly defined and achievable given available resources and that all alternatives have been considered. System requirements should be agreed to by service acquisition executives as well as combatant commanders. Once programs begin, requirements should not change without assessing their potential disruption to the program and assuring that they can be accommodated within time and funding constraints. In addition, DOD should prove that technologies can work as intended before including them in acquisition programs. More ambitious technology development efforts should be assigned to the science and technology community until they are ready to be added to future generations of the product. DOD should also require the use of independent cost estimates as a basis for budgeting funds. Our work over the past 10 years has consistently shown that when these basic steps are taken, programs are better positioned to be executed within cost and schedule. To keep programs executable, DOD should demand that all go/no-go decisions be based on quantifiable data and demonstrated knowledge. These data should cover critical program facets such as cost, schedule, technology readiness, design readiness, production readiness, and relationships with suppliers. Development should not be allowed to proceed until certain knowledge thresholds are met—for example, a high percentage of engineering drawings completed at critical design review. DOD’s current policies encourage these sorts of metrics to be used as a basis for decision making, but they do not demand it. DOD should also place boundaries on the time allowed for system development. To further ensure that programs are executable, DOD should pursue an evolutionary path toward meeting user needs rather than attempting to satisfy all needs in a single step. 
This approach has been consistently used by successful commercial companies we have visited over the past decade because it provides program managers with more achievable requirements, which, in turn, facilitate shorter cycle times. With shorter cycle times, the companies we have studied have also been able to assure that program managers and senior leaders stay with programs throughout their duration. DOD has policies that encourage evolutionary development, but programs often favor pursuing more revolutionary, exotic solutions that will attract funds and support. The department and, more importantly, the military services tend to view success as capturing the funding needed to start and sustain a development program. In order to do this, they must overpromise capability and underestimate cost. In order for DOD to move forward, this view of success must change. World-class commercial firms identify success as developing products within cost estimates and delivering them on time in order to survive in the marketplace. This forces incremental, knowledge-based product development programs that improve capability as new technologies are matured. To strengthen accountability, DOD must also clearly delineate responsibilities among those who have a role in deciding what to buy as well as those who have a role in executing, revising, and terminating programs. Within this context, rewards and incentives must be altered so that success can be viewed as delivering needed capability at the right price and the right time, rather than attracting and retaining support for numerous new and ongoing programs. 
To enable accountability to be exercised at the program level once a program begins, DOD will need to (1) match program manager tenure with development or the delivery of a product; (2) tailor career paths and performance management systems to incentivize longer tenures; (3) strengthen training and career paths as needed to ensure program managers have the right qualifications to run the programs they are assigned to; (4) empower program managers to execute their programs, including an examination of whether and how much additional authority can be provided over funding, staffing, and approving requirements proposed after the start of a program; and (5) develop and provide automated tools to enhance management and oversight as well as to reduce the time required to prepare status information. DOD also should hold contractors accountable for results. As we have recommended, this means structuring contracts so that incentives actually motivate contractors to achieve desired acquisition outcomes and withholding fees when those goals are not met. DOD has taken actions related to some of these steps. Based in part on GAO recommendations and congressional direction, DOD has recently begun to develop several initiatives that, if adopted and implemented properly, could provide a foundation for establishing sound, knowledge-based business cases for individual acquisition programs and improving program outcomes. For example, DOD is experimenting with a new concept decision review, different acquisition approaches according to expected fielding times, and panels to review weapon system configuration changes that could adversely affect program cost and schedule. In addition, in September 2007 the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics issued a policy memorandum to ensure weapon acquisition programs were able to demonstrate key knowledge elements that could inform future development and budget decisions. 
This policy directed pending and future programs to include acquisition strategies and funding that provide for two or more competing contractors to develop technically mature prototypes through system development start (knowledge point 1), with the hope of reducing technical risk, validating designs and cost estimates, evaluating manufacturing processes, and refining requirements. Each of the initiatives is designed to enable more informed decisions by key department leaders well ahead of a program’s start, decisions that provide a closer match between each program’s requirements and the department’s resources. DOD also plans to implement new practices similar to past GAO recommendations that are intended to provide program managers more incentives, support, and stability. The department acknowledges that any actions taken to improve accountability must be based on a foundation whereby program managers can launch and manage programs toward greater performance, rather than focusing on maintaining support and funding for individual programs. DOD acquisition leaders have told us that any improvements to program managers’ performance hinge on the success of the department’s initiatives. In addition, DOD has taken actions to strengthen the link between award and incentive fees with desired program outcomes, which has the potential to increase the accountability of DOD programs for fees paid and of contractors for results achieved. In closing, the past year has seen several new proposed approaches to improve the way DOD buys weapons. These approaches have come from within the department, from highly credible commissions established by the department, and from GAO. They are based on solid principles. If they are to produce better results, however, they must heed the lessons taught—but perhaps not learned—by various past studies and by DOD’s acquisition history itself. 
Specifically, DOD must do a better job of prioritizing its needs in the context of the nation’s greater fiscal challenges. It must become more disciplined in managing the mix of programs to meet available funds. If everything is a priority, nothing is a priority. Policy must also be manifested in decisions on individual programs or reform will be blunted. DOD’s current acquisition policy is a case in point. The policy supports a knowledge-based, evolutionary approach to acquiring new weapons. However, the practice—decisions made on individual programs—sacrifices knowledge and realism about what can be done within the available time and funding in favor of revolutionary solutions. Reform will not be real unless each weapon system is shown to be both a worthwhile investment and a realistic, executable program based on the technology, time, and money available. This cannot be done until the acquisition environment is changed along with the incentives associated with it. DOD and the military services cannot continue to view success through the prism of securing the funding needed to start and sustain new programs. Success must be defined in terms of delivering capabilities to the warfighter when needed and as promised, and incentives must be aligned to encourage a disciplined, knowledge-based approach to achieve this end. The upcoming change in administration presents challenges as well as opportunities to improve the process and its outcomes through sustained implementation of best practices, as well as addressing new issues that may emerge. Significant changes will only be possible with greater, and continued, department-level support, including strong and consistent vision, direction, and advocacy from DOD leadership, as well as sustained oversight and cooperation from the Congress. 
In addition, all of the players involved with acquisitions—the requirements community; the Joint Chiefs of Staff; the comptroller; the Under Secretary of Defense for Acquisition, Technology and Logistics; and perhaps most importantly, the military services—must be unified in implementing reforms from top to bottom. Mr. Chairmen and Members of the Committee and Subcommittee, this concludes my statement. I will be happy to take any questions that you may have at this time. For further questions about this statement, please contact Michael J. Sullivan at (202) 512-4841. Individuals making key contributions to this statement include Ron Schwenn, Assistant Director; Ridge C. Bowman; Quindi C. Franco; Matthew B. Lea; Brian Mullins; Kenneth E. Patton; and Alyssa B. Weir. Best Practices: Increased Focus on Requirements and Oversight Needed to Improve DOD’s Acquisition Environment and Weapon System Quality. GAO-08-294. Washington, D.C.: February 1, 2008. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-07-406SP. Washington, D.C.: March 30, 2007. Best Practices: Stronger Practices Needed to Improve DOD Technology Transition Processes. GAO-06-883. Washington, D.C.: September 14, 2006. Defense Acquisitions: Major Weapon Systems Continue to Experience Cost and Schedule Problems under DOD’s Revised Policy. GAO-06-368. Washington, D.C.: April 13, 2006. Best Practices: Better Support of Weapon System Program Managers Needed to Improve Outcomes. GAO-06-110. Washington, D.C.: November 1, 2005. DOD Acquisition Outcomes: A Case for Change. GAO-06-257T. Washington, D.C.: November 15, 2005. Defense Acquisitions: Stronger Management Practices Are Needed to Improve DOD’s Software-Intensive Weapon Acquisitions. GAO-04-393. Washington, D.C.: March 1, 2004. Best Practices: Setting Requirements Differently Could Reduce Weapon Systems’ Total Ownership Costs. GAO-03-57. Washington, D.C.: February 11, 2003. 
Defense Acquisitions: Factors Affecting Outcomes of Advanced Concept Technology Demonstration. GAO-03-52. Washington, D.C.: December 2, 2002. Best Practices: Capturing Design and Manufacturing Knowledge Early Improves Acquisition Outcomes. GAO-02-701. Washington, D.C.: July 15, 2002. Defense Acquisitions: DOD Faces Challenges in Implementing Best Practices. GAO-02-469T. Washington, D.C.: February 27, 2002. Best Practices: Better Matching of Needs and Resources Will Lead to Better Weapon System Outcomes. GAO-01-288. Washington, D.C.: March 8, 2001. Best Practices: A More Constructive Test Approach Is Key to Better Weapon System Outcomes. GAO/NSIAD-00-199. Washington, D.C.: July 31, 2000. Defense Acquisition: Employing Best Practices Can Shape Better Weapon System Decisions. GAO/T-NSIAD-00-137. Washington, D.C.: April 26, 2000. Best Practices: DOD Training Can Do More to Help Weapon System Programs Implement Best Practices. GAO/NSIAD-99-206. Washington, D.C.: August 16, 1999. Best Practices: Better Management of Technology Development Can Improve Weapon System Outcomes. GAO/NSIAD-99-162. Washington, D.C.: July 30, 1999. Defense Acquisitions: Best Commercial Practices Can Improve Program Outcomes. GAO/T-NSIAD-99-116. Washington, D.C.: March 17, 1999. Defense Acquisitions: Improved Program Outcomes Are Possible. GAO/T-NSIAD-98-123. Washington, D.C.: March 17, 1998. Best Practices: Successful Application to Weapon Acquisition Requires Changes in DOD’s Environment. GAO/NSIAD-98-56. Washington, D.C.: February 24, 1998. Best Practices: Commercial Quality Assurance Practices Offer Improvements for DOD. GAO/NSIAD-96-162. Washington, D.C.: August 26, 1996. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

DOD's investment in weapon systems represents one of the largest discretionary items in the budget. The department expects to invest about $900 billion (fiscal year 2008 dollars) over the next 5 years on development and procurement with more than $335 billion invested specifically in major defense acquisition programs. Every dollar spent inefficiently in acquiring weapon systems is less money available for other budget priorities—such as the global war on terror and growing entitlement programs. This testimony focuses on (1) the overall performance of DOD's weapon system investment portfolio; (2) our assessment of 72 weapon programs against best practices standards for successful product developments; and (3) potential solutions and recent DOD actions to improve weapon program outcomes. It is based on GAO-08-467SP, which included our analysis of broad trends in the performance of the programs in DOD's weapon acquisition portfolio and our assessment of 72 defense programs, and recommendations made in past GAO reports. DOD was provided a draft of GAO-08-467SP and had no comments on the overall report, but did provide technical comments on individual assessments. The comments, along with the agency comments received on the individual assessments, were included as appropriate. We recently released our sixth annual assessment of selected DOD weapon programs. The assessment indicates that cost and schedule outcomes for major weapon programs are not improving. Although well-conceived acquisition policy changes occurred in 2003 that reflect many best practices we have reported on in the past, these policy changes have not yet translated into practice at the program level. 
None of the weapon programs we assessed this year had proceeded through system development meeting the best practices standards for mature technologies, stable design, and mature production processes—all prerequisites for achieving planned cost, schedule, and performance outcomes. In addition, only a small percentage of programs used two key systems engineering tools—preliminary design reviews and prototypes—to demonstrate the maturity of the product's design by critical junctures. This lack of disciplined systems engineering affects DOD's ability to develop sound, executable business cases for programs. Our work shows that acquisition problems will likely persist until DOD provides a better foundation for buying the right things, the right way. This involves making tough decisions as to which programs should be pursued, and more importantly, not pursued; making sure programs are executable; locking in requirements before programs are ever started; and making it clear who is responsible for what and holding people accountable when responsibilities are not fulfilled. Moreover, the environment and incentives that lead DOD and the military services to overpromise on capability and underestimate costs in order to sell new programs and capture funding will need to change. Based in part on GAO recommendations and congressional direction, DOD has begun several initiatives that, if adopted and implemented properly, could provide a foundation for establishing sound, knowledge-based business cases for individual acquisition programs and improving outcomes.
In response to global challenges the government faces in the coming years, the creation of a Department of Homeland Security provides a unique opportunity to create an extremely effective and performance-based organization that can strengthen the nation’s ability to protect its borders and citizens against terrorism. There is likely to be considerable benefit over time from restructuring some of the homeland security functions, including reducing risk and improving the economy, efficiency, and effectiveness of these consolidated agencies and programs. Realistically, however, in the short term, the magnitude of the challenges that the new department faces will clearly require substantial time and effort, and will take additional resources to make it fully effective. Recently, we testified that Congress should consider several very specific criteria in its evaluation of whether individual agencies or programs should be included in or excluded from the proposed department. Those criteria include the following:

Mission Relevancy: Is homeland security a major part of the agency or program mission? Is it the primary mission of the agency or program?

Similar Goals and Objectives: Does the agency or program being considered for the new department share primary goals and objectives with the other agencies or programs being consolidated?

Leverage Effectiveness: Does the agency or program being considered for the new department create synergy and help to leverage the effectiveness of other agencies and programs or the new department as a whole? In other words, is the whole greater than the sum of the parts?

Gains Through Consolidation: Does the agency or program being considered for the new department improve the efficiency and effectiveness of homeland security missions through eliminating duplications and overlaps, closing gaps, and aligning or merging common roles and responsibilities? 
Integrated Information Sharing/Coordination: Does the agency or program being considered for the new department contribute to or leverage the ability of the new department to enhance the sharing of critical information or otherwise improve the coordination of missions and activities related to homeland security?

Compatible Cultures: Can the organizational culture of the agency or program being considered for the new department effectively meld with the other entities that will be consolidated? Field structures and approaches to achieving missions vary considerably between agencies.

Impact on Excluded Agencies: What is the impact on departments losing components to the new department? What is the impact on agencies with homeland security missions left out of the new department?

Federally sponsored research and development efforts, a key focus of the proposed legislation, enhance the government’s capability to counter chemical, biological, radiological, and nuclear terrorist threats by providing technologies that meet a range of crisis- and consequence-management needs. Research and development efforts for these technologies, however, can be risky, time consuming, and costly. Such efforts also may need to address requirements not available in off-the-shelf products. These factors limit private and public research and development efforts for these technologies, necessitating federal government involvement and collaboration. Many federal agencies and interagency working groups have recently deployed or are conducting research on a variety of technologies to combat terrorism. Recently deployed technologies include a prototype biological detection system used at the Salt Lake City Olympics and a DOE-developed prototype chemical detection system currently being used in Washington, D.C.’s metro system. Technologies under development include new or improved vaccines, antibiotics, and antivirals being developed by the National Institutes of Health. 
In addition, the Centers for Disease Control and Prevention, in collaboration with other federal agencies, is conducting research on the diagnosis and treatment of smallpox. Moreover, the Food and Drug Administration is investigating a variety of biological agents that could be used as terrorist weapons. Other federal agencies, such as the Department of Defense and the intelligence community, are engaged in similar research and development activities, such as research on technology to protect combatants from chemical and biological agents. Certain roles and responsibilities of the Department of Homeland Security in managing research and development need to be clarified. Under the proposed legislation, the Department of Homeland Security would be tasked with developing national policy for and coordinating the federal government’s civilian research and development efforts to counter chemical, biological, radiological, and nuclear threats. However, while coordination is important, it will not be enough. Federal agency coordination alone may not address the specific needs of state and local governments, such as those of local police and fire departments that will use this technology. In our view, the proposed legislation should also specify that a role of the new department will be to develop collaborative relationships with programs at all levels of government—federal, state, and local—to ensure that users’ needs and research efforts are linked. We also believe the legislation should be clarified to ensure that the new department would be responsible for the development of a single national research and development strategic plan. Such a plan would help to ensure that research gaps are filled, unproductive duplication is minimized, and individual agency plans are consistent with the overall goals. 
Moreover, the proposed legislation, as written, is unclear about the new department’s role in developing standards for the performance and interoperability of new technologies to address terrorist threats. We believe the development of these standards must be a priority of the new department. The limitations of existing coordination and the critical need for a more collaborative, unified research structure have been amply demonstrated in the recent past. We have previously reported that while agencies attempt to coordinate federal research and development programs in a variety of ways, breakdowns occur, leading to research gaps and duplication of effort. Coordination is limited by compartmentalization of efforts because of the sensitivity of the research and development programs, security classification of research, and the absence of a single coordinating entity to guard against duplication. For example, the Department of Defense’s Defense Advanced Research Projects Agency was unaware of the U.S. Coast Guard’s plans to develop methods to detect biological agents on infected cruise ships and, therefore, was unable to share information on its potentially related research to develop biological detection devices for buildings. Although the proposed legislation states that the new department will be responsible for developing national policy and coordinating research and development, it has a number of limitations that could weaken its effectiveness. First, the legislation tasks the new department with coordinating the federal government’s “civilian efforts” only. We believe the new department will also need to coordinate with the Department of Defense and the intelligence agencies that conduct research and development efforts designed to detect and respond to weapons of mass destruction. 
The proposed transfer of some DOE research and development efforts to the Department of Homeland Security also does not eliminate potential overlaps, gaps, and opportunities for collaboration. Coordination will still be required within and among the 23 DOE national laboratories. For example, our 2001 report noted that two offices within Sandia National Laboratory concurrently and separately worked on similar thermal imagery projects for two different federal agencies, rather than consolidating the requests and combining resources. In addition, local police and fire departments and state and local governments possess practical knowledge about their technological needs and relevant design limitations that should be taken into account in federal efforts to provide new equipment, such as protective gear and sensor systems. To be most effective, the new department will have to develop collaborative relationships with all these organizations to facilitate technological improvements and encourage cooperative behavior. The existing proposal leaves a number of problems unaddressed as well. For example, while the proposed legislation is clear that the position of Undersecretary for Chemical, Biological, Radiological, and Nuclear Countermeasures will be responsible for developing national policy for federal research and development, there is no requirement for a strategic plan for national research and development that could address coordination, reduce potential duplication, and ensure that important issues are addressed. In 2001, we recommended the creation of a unified strategy to reduce duplication and leverage resources, and suggested that the plan be coordinated with federal agencies performing research as well as with state and local authorities. 
The development of such a plan would help to ensure that research gaps are filled, unproductive duplication is minimized, and individual agency plans are consistent with the overall goals, and would provide a basis for assessing the success of the research and development efforts. Also, while the legislation calls for the establishment of guidelines for state and local governments to implement countermeasures for chemical, biological, radiological, and nuclear terrorism threats, it is not clear to us what these guidelines are to entail. In this regard, we believe it will be important to develop standards for the performance and interoperability of new technologies, something that the legislation does not specifically address. For example, we had discussions with officials from the Utah State Department of Health who prepared for the 2002 Winter Olympic Games. These officials said that local police and fire departments had been approached by numerous vendors offering a variety of chemical and biological detection technology for use during the Olympics. However, these state and local officials were unsure of the best technology to purchase and could find no federal agency that would provide guidance on the technologies. They told us that if the science backing up the technology is poor or the data the technology produces are faulty, the technology can do more harm than good. Further, the legislation would allow the new department to direct, fund, and conduct research related to chemical, biological, radiological, nuclear, and other emerging threats on its own. This raises the potential for duplication of efforts, lack of efficiency, and an increased need for coordination with other departments that would continue to carry out relevant research. We are concerned that the proposal could result in a duplication of capacity that already exists in the current federal laboratories. 
Under Title III of the proposed legislation, a number of DOE programs and activities would be transferred to the new department. Some of these transfers seem appropriate. However, in other cases we are concerned about the transfers because of the potential impact on programs and activities that currently support missions beyond homeland security. Finally, in some cases, transfers proposed by the legislation are not laid out in enough detail to permit an assessment. We discuss each of these groups of transfers below. Title III proposes to transfer to the Department of Homeland Security certain DOE activities that seem appropriate. Specifically, Title III proposes to transfer the nuclear threat assessment program and activities of the assessment, detection, and cooperation program in DOE’s international Materials Protection, Control, and Accounting (MPC&A) program. The threat assessment program and activities, among other things, assess the credibility of communicated nuclear threats, analyze reports of illicit nuclear material trafficking, and provide technical support to law enforcement agencies regarding nuclear materials and weapons. We would agree with officials of the Office of Nuclear Threat Assessment and Detection who view the potential transfer to the Department of Homeland Security positively. We base our agreement on the fact that, according to DOE officials, the transfer would not have a negative impact on the rest of the MPC&A program because the functions are separate and distinct. Further, the transfer could tie the office in more closely with the other agencies it works with, such as Customs. Another program that we believe could be appropriately transferred to the new department is the Environmental Measurements Laboratory (EML), located in New York City. This government-operated laboratory operates under the Office of Science and Technology in the Office of Environmental Management at DOE. 
EML provides program management, technical assistance, and data quality assurance for measurements of radiation and radioactivity relating to environmental restoration, global nuclear nonproliferation, and other priority issues for DOE, as well as for other government, national, and international organizations. According to the laboratory director, the laboratory is completely in favor of the transfer to the proposed Department of Homeland Security and would fit in very well with it. We believe the transfer is appropriate because, unlike some other transfers proposed under Title III, the entire laboratory would be transferred. While it is a multiprogram laboratory serving several elements of DOE as well as other organizations, serving multiple clients could continue under a “work for others” contracting arrangement whether the laboratory was housed within DOE or the Department of Homeland Security. Title III proposes transferring the parts of DOE’s nonproliferation and verification research and development program that conduct research on systems to improve the nation’s capability to prepare for and respond to chemical and biological attacks. The legislation also proposes transferring a portion of the program’s proliferation detection research. This includes work on developing sensors to help the Coast Guard monitor container shipping at ports of entry. These proposed transfers raise concerns because much of the program’s research supports both homeland security and international nonproliferation programs. These programs have broad missions that are not easily separated into homeland security research and research for other purposes, and the proposed legislation is not clear about how these missions would continue to be accomplished. Furthermore, we are concerned that the legislation does not clearly indicate whether only the programmatic management and funding would move or also the scientists carrying out the research. Moving the scientists may not be prudent. 
This is because the research is currently conducted by multiprogram laboratories that employ scientists skilled in many disciplines, who serve many different missions and whose research benefits from their interactions with colleagues within the laboratory. In addition, we believe transferring control of some scientists within the DOE national laboratories to the Department of Homeland Security could complicate an already dysfunctional DOE organizational structure by blurring lines of authority and accountability. DOE carries out its diverse missions through a network of multilayered field offices that oversee activities at the national laboratories and other DOE facilities widely dispersed throughout the country. The structure inherited by DOE, and the different program cultures and management styles within that structure, have confounded DOE's efforts to develop a more effective organization. Transferring control of scientists within the national laboratories could complicate the accomplishment of both homeland security missions and DOE's other missions by adding further lines of authority and accountability between the laboratory scientists, DOE, and the Department of Homeland Security. One alternative would be for the new department to contract with DOE's national laboratories to conduct the research under "work for others" contracts. This would allow direct contact between the Department of Homeland Security and the laboratories conducting the research without creating a new bureaucracy. Many federal agencies, such as the Department of Defense and the intelligence agencies, currently use this contracting arrangement with the national laboratories. 
We have similar concerns about transferring two other activities to the new department. First, the advanced scientific computing research program and activities at Lawrence Livermore National Laboratory are developing supercomputer hardware and software infrastructure aimed at enabling laboratory and university researchers to solve the most challenging scientific problems at a level of accuracy and detail never before achieved. Research conducted under the program includes designing materials atom by atom, revealing the functions of proteins, understanding and controlling plasma turbulence, designing new particle accelerators, and modeling global climate change. The program is an integral part of DOE's efforts to ensure that the nuclear weapons stockpile is safe and secure, and it may be difficult to separate into homeland security research and research for other purposes. Second, the Life Sciences Division within the DOE Office of Science's Biological and Environmental Research Program manages a diverse portfolio of research to develop fundamental biological information and to advance technology in support of DOE's missions in biology, medicine, and the environment. For example, it is determining the whole-genome sequences of a variety of infectious bacteria, including anthrax strains, a first step toward developing tests that can be used to rapidly identify their presence in the environment. In both of these instances, the programs serve multiple missions. These dual-purpose programs have important synergies that we believe should be maintained. We are concerned that transferring control over these programs to the new department has the potential to disrupt programs that are critical to other DOE missions, such as ensuring the reliability of our nuclear weapons. We do not believe that the proposed legislation is sufficiently clear on how both the homeland security missions and these other missions would be accomplished. 
The details of two other transfers proposed in the legislation are unclear. First, Title III would transfer the intelligence program activities at Lawrence Livermore National Laboratory. These intelligence activities are related to the overall program carried out by DOE's Office of Intelligence, which gathers information related to DOE's missions: energy, nuclear weapons, nuclear proliferation, basic science, radiological research, and environmental cleanup. To support this overall intelligence program, Lawrence Livermore National Laboratory, like the other weapons laboratories, conducts intelligence activities. At Lawrence Livermore, the "Z" division conducts these activities and has special intelligence expertise related to tracking the nuclear capabilities of countries other than Russia and China. Importantly, the "Z" division receives funding from other DOE programs and offices as well as from other federal agencies, including the Department of Defense, the Federal Bureau of Investigation, and the Central Intelligence Agency. According to officials at DOE headquarters and Lawrence Livermore National Laboratory, only about $5 million of the division's $30 million to $50 million budget comes from DOE's Office of Intelligence. These officials said the transfer would most likely affect only the $5 million that DOE's Office of Intelligence directly provides to the laboratory, but this is not clear in the proposed legislation. As with other DOE programs discussed in this testimony, the staff who carry out these activities are contractor employees, and it is not clear how they would be transferred to the Department of Homeland Security. Moreover, DOE headquarters and other laboratories also have roles in intelligence, and the legislation does not propose to transfer any of their intelligence functions. 
Another area of Title III where the details are unclear is the transfer of "energy security and assurance program activities." These activities are carried out by the Office of Energy Assurance, which was created in November 2001 to work with state and local governments and industry to strengthen the security of the United States through the application of science and technology to improve the reliability and security of the national energy infrastructure. The national energy infrastructure includes (1) the physical and cyber assets of the nation's electric power, oil, and natural gas infrastructures; (2) the interdependencies among physical and cyber energy infrastructure assets; and (3) the national energy infrastructure's interdependencies with all other critical national infrastructures. At the time this testimony was being prepared, DOE and the Office of Homeland Security were trying to define the scope of the proposed transfer. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. Contact and Acknowledgments: For further information about this testimony, please contact Gary Jones at (202) 512-3841. Gene Aloise, Seto J. Bagdoyen, Ryan T. Coles, Darryl W. Dutton, Kathleen H. Ebert, Laurie E. Ekstrand, Cynthia Norris, and Keith Rhodes also made key contributions to this testimony. 
Title III of the proposed Department of Homeland Security legislation would task the new department with developing national policy and coordinating the federal government's research and development efforts for responding to chemical, biological, radiological, and nuclear threats. It would also transfer to the new department responsibility for certain research and development programs and other activities, including those of the Department of Energy (DOE). If properly implemented, this proposed legislation could lead to a more efficient, effective, and coordinated research effort that would provide technology to protect our people, borders, and critical infrastructure. 
However, the proposed legislation does not specify that a critical role of the new department will be to establish collaborative relationships with programs at all levels of government and to develop a strategic plan for research and development to implement the national policy it is charged with developing. In addition, the proposed legislation is not clear on the role of the new department in setting standards for the performance and interoperability of new technologies, so that users can be confident that the technologies they are purchasing will perform as intended. Some of the proposed transfers of activities from DOE to the new department are appropriate, such as DOE's nuclear threat assessment program and the Environmental Measurements Laboratory. However, the transfer of some DOE research and development activities may complicate research now being done to accomplish multiple purposes.
Pursuant to a series of laws passed since 1989, DOD is authorized to undertake actions intended to enhance the effectiveness of domestic law enforcement agencies through direct or material support. The Defense Logistics Agency's (DLA) Disposition Services administers the Law Enforcement Support Office (LESO) program, managing the transfer of DOD's excess controlled and non-controlled property to federal, state, local, and tribal law enforcement agencies. According to DLA policy, to participate in the program, a law enforcement agency must be a federal, state, or local government agency whose primary function is the enforcement of applicable federal, state, and local laws and whose sworn, compensated law enforcement officers have powers of arrest and apprehension. According to LESO program data, as of August 2016, there were over 8,600 federal, state, and local law enforcement agencies participating in the program. Of these, approximately 96 percent were state and local law enforcement agencies and approximately 4 percent were federal law enforcement agencies. LESO program data also show that during calendar years 2013 through 2015, approximately two-thirds of DOD excess controlled property items were transferred to state and local agencies and one-third to federal agencies, as shown in table 1. The data also show that, as of August 2016, the majority of the law enforcement agencies participating in the program (76 percent) had 50 or fewer full-time sworn officers, as shown in figure 1, and that approximately 30 percent of state and local law enforcement agencies active in the program had 10 or fewer sworn officers. State and local law enforcement agencies work first through their governor-appointed state coordinator to obtain excess property through the LESO program from DLA. 
As specified by DLA policy, state coordinators must sign a memorandum of agreement (MOA) with DLA, which outlines, for example, the general terms and conditions under which state coordinators and state and local law enforcement agencies provide accountability and oversight of the LESO program. Further, LESO program guidance requires state coordinators to develop a State Plan of Operation outlining how the program will be managed in their state, and each participating state or local law enforcement agency must sign the plan, attesting to the terms and conditions of the program. According to LESO officials, unlike state and local agencies, federal law enforcement agencies work directly with the LESO program office. In December 2016, the LESO program office finalized a memorandum of understanding (MOU) that it plans to sign with participating federal agencies. According to the MOU, it will establish DLA's authority as the owner of the program and, among other things, set out general terms and conditions. Figure 2 provides additional details on LESO program stages and processes for federal and for state and local law enforcement agencies. Law enforcement agencies submit property requests electronically after viewing available items online or in person at a Disposition Services site. According to LESO officials, they manually review all property requests, both those forwarded from state coordinators for state and local law enforcement agencies and those submitted directly by federal agencies, for final approval or denial. LESO officials told us that when reviewing requests for approval, they look for detailed justifications, including who will use the property and how. For certain items, such as aircraft, vehicles, and weapons, law enforcement agencies are required to answer additional questions and provide additional documentation, such as their training plan(s) and how the items will be secured. According to LESO officials, they follow statutory direction in U.S. 
Code, title 10, which authorizes preference to be given to property requests indicating that the property will be used for counterdrug, counterterrorism, and border-security activities. LESO officials also stated that a request for excess property may be denied for a variety of reasons, including if the request was not detailed enough or if the law enforcement agency has met its allowed allocation for certain property. For example, according to program documentation, for small arms, only one is allocated for each qualified full-time or part-time officer; for HMMWVs, only one vehicle is allocated for every three officers; and for MRAPs, only one vehicle is allocated per law enforcement agency. However, according to LESO officials, most denials occur because a requested item has already been awarded. When DOD declares items excess to its needs, the property is turned in to a DLA Disposition Services site and can be made available for transfer to DOD components and other eligible recipients, including approved LESO participants. According to program documentation, when an application to participate in the LESO program is approved, an Authorization Letter for Property Screening is generated and forwarded to the state coordinator or federal agency. If the approved participant is a state law enforcement agency, the state coordinator will provide the participant with the letter of authorization. The letter of authorization, which includes, for example, the full name of the law enforcement agency, its DOD activity address code, telephone number, address, and digital signatures, must be on the centralized file maintained by DLA before the person picking up the property arrives, and must be dated within one year of the current date. The screening authorization lists the individuals eligible to search, view, and request property on behalf of their participating law enforcement agency, including through physical on-site screening. 
The DLA Disposition Services site uses the information on this letter to contact an agency, if needed, to coordinate the direct pickup of property. Direct pickup of allocated property may be made by an individual with valid identification and the appropriate DOD authorization form, signed by an authorized individual listed in the screener letter. The President issued Executive Order 13688, Federal Support for Local Law Enforcement Equipment Acquisition, on January 16, 2015, to better coordinate federal support for the acquisition of certain federal equipment by state, local, and tribal law enforcement agencies. The Executive Order also established a Federal Interagency Law Enforcement Equipment Working Group (hereafter the Working Group). In May 2015, the Working Group issued a report that included a list of prohibited equipment not eligible for acquisition by law enforcement agencies and a list of controlled equipment, identified by category, that may be acquired by law enforcement agencies after they submit additional information, such as a detailed justification for each requested item of controlled equipment. The Working Group also developed 13 programmatic and policy recommendations to improve federal equipment acquisition programs, including a recommendation that its members form a permanent Federal Interagency Law Enforcement Equipment Working Group that would meet regularly to support oversight and policy development functions for controlled equipment programs. DLA has taken some actions, and plans additional actions, to address identified weaknesses in its excess controlled property program. DLA has revised its policy and procedures, is developing additional training, and is establishing MOUs for the LESO program with participating federal law enforcement agencies. 
However, DLA confirmed, and our independent testing of the LESO program's internal controls identified, deficiencies in the processes for verifying and approving federal law enforcement applications and for transferring controlled property. DLA has taken some steps to address identified weaknesses in its processes for transferring and monitoring its excess controlled property through revisions to its policy and procedures on the management, oversight, and accountability of the LESO program. These revisions were made, in part, in response to recommendations by the DOD and DLA Offices of Inspector General, which conducted four audits of the LESO program between 2003 and 2013 and made more than a dozen recommendations, such as to: develop and implement written standard operating procedures that include, for example, criteria for approving and disapproving law enforcement agency property requests; strengthen policy and procedures on debarring law enforcement agencies and state coordinators that do not comply with LESO program conditions; improve oversight and accountability of property; use the automated processing system for requisitioning, approving, and issuing items; and further develop procedures for the issuance, transfer, turn-in, and disposal of LESO property. 
We found the department had taken the following actions to enhance its transfer process through revisions to policy and procedures: transitioned full management responsibility for the LESO program to DLA Disposition Services in 2009; developed LESO Program Standard Operating Procedures in 2012 and updated them in 2013; transitioned to a new data system, the Federal Excess Property Management Information System, in 2013, after identifying that the old system was not capable of post-issue tracking; revised the DLA instruction that provides policy, responsibilities, and procedures for DLA's management of the LESO program in 2014 and 2016; and revised LESO program processes in 2016 to incorporate recommendations made by the Federal Interagency Law Enforcement Equipment Working Group, such as defining executive order-controlled property and prohibiting K-12 schools from participating in the program. Additionally, according to LESO officials, they conduct Program Compliance Reviews every two years for all states and territories with state and local law enforcement agencies enrolled in the LESO program. LESO officials, in consultation with the state coordinator, select a sample of state and local law enforcement agencies for site visits to physically verify, by serial number, all controlled property in their possession. We observed the LESO Program Compliance Review at numerous locations in one state. Moreover, in 2017, LESO program officials revised their program application to create two applications: one for federal law enforcement agencies and the other for state and local law enforcement agencies. The application for federal agencies requires additional information from prospective federal applicants, such as certification that their agency meets the LESO definition of a government law enforcement agency and an attestation that the agency they represent is a legitimate law enforcement agency. 
Likewise, the application for state and local agencies was revised to include a similar certification of eligibility and attestation. During our review, officials at participating federal, state, and local law enforcement agencies reported the need for more training on LESO program policies and procedures, and DLA is in the process of developing this additional training through an online training tool. Our analysis of the responses to our surveys and our case study interviews with federal, state, and local law enforcement agencies showed (1) that not all participating agency officials had received training on all aspects of the LESO program, including its policies and procedures, and (2) that officials wanted more training to better understand LESO program processes, such as the turn-in or transfer of controlled property. Our analysis of the responses to our survey of selected federal law enforcement agencies showed that training had not been regularly provided. For example, 10 of the 13 respondents to the federal survey stated either that they had not received training from LESO on the LESO program or that they did not know whether their agency had received any. LESO officials told us that they have not regularly provided training to federal law enforcement agencies in the past; training has mainly been provided to the state coordinators participating in the LESO program. Survey results also showed that federal law enforcement officials generally stated that training on the LESO program would be beneficial. For example, 9 of the 13 respondents to the federal survey stated that refresher training provided by LESO would be beneficial to their agency. Table 2 shows the types of refresher training that most federal law enforcement agencies in our survey stated would be beneficial to their agency. 
Furthermore, officials we interviewed from state and local law enforcement agencies reported differing experiences with the availability and accessibility of training from their state coordinators on LESO program policies and procedures, and stated that they would benefit from additional training on those policies and procedures, such as on returning property to DLA. For example, an official from one law enforcement agency told us that it took 8 months to receive training from his state coordinator upon joining the program. Our analysis of the results from our survey of state coordinators showed that nearly three-fourths of the state coordinators reported that they do not provide mandatory training on LESO program policies and procedures to state and local law enforcement agencies within their state. We also found that state coordinators varied in the types of training they provided on LESO program policies and procedures. For example, our analysis found that 40 percent (18 of 45) of responding state coordinators reported that they do not provide in-person refresher or annual training, and 15 percent (7 of the 46 responding to the question) reported that they do not provide training aids or reference aids (e.g., in PowerPoint format). The majority of state coordinators (94 percent) reported that they would find additional LESO training modules helpful. Table 3 shows our analysis of the survey responses on the topics for which state coordinators indicated additional training would be useful. See appendix IV for additional details on the training-related results from our survey. LESO provides some training aids on program policies and procedures on its program website for federal, state, and local agencies. LESO also provides training to state coordinators at an annual training seminar; according to program guidance, state coordinators are then to train the state and local law enforcement agencies in their states. 
However, over the course of our review, DLA officials stated that they recognize the need to enhance aspects of training and are in the process of developing an online training tool, which is expected to be established in late 2017. Specifically, LESO program officials stated that they are enhancing training by working to establish an online training tool that will assist in providing specific information and training modules on LESO program policies and procedures to federal law enforcement agencies and that state coordinators can provide to state and local law enforcement agencies in their states. Some training modules have been completed and published on LESO’s website, such as a quick-start guide. Other training modules that are planned include, for example, a guide for returning controlled property for proper disposal, among other program policies and procedures. We acknowledge that DLA to date has taken action on the issue by recognizing the need for additional training, assigning a lead, and developing a quick-start tool. However, it is too early to evaluate whether the actions taken and the developed and planned training will address the issues our survey and case studies identified. DLA is establishing MOUs with federal law enforcement agencies. Until 2016, DLA lacked a mechanism to establish the general terms and conditions of participation for federal law enforcement agencies, such as restrictions on further transfer or sale of controlled property. According to DLA and LESO officials, LESO had in past years discussed taking steps to develop a MOU for federal law enforcement agencies, similar to those used for state and local agencies. 
DLA expedited and completed the development of a MOU in December 2016, in part, because federal law enforcement agencies began contacting the LESO program office regarding gaining visibility over items transferred to their respective agencies, including subordinate agencies and their field offices, and had questions regarding who was authorized to screen and request property for their agency. These inquiries were in part a result of our effort to confirm which federal agencies, including their subordinate agencies, had received excess controlled property, and some of the agencies did not know that their subordinate agencies had obtained excess controlled property through the LESO program. In our survey of 15 federal law enforcement agencies, completed in October 2016, we found that some federal law enforcement officials were unaware of the extent to which their agencies request and receive DOD-controlled property through the program. For example, 5 of the 13 federal survey respondents reported they either did not internally track or did not know if their agency internally tracked DOD-controlled property obtained by their field offices through the LESO program. As of April 2017, DLA and LESO officials had sent the MOU to all participating federal law enforcement agencies, and 7 agencies had signed it. LESO program officials told us they have assigned a LESO official to lead the federal agency aspect of the LESO program, including assisting DLA Disposition Services in finalizing the MOUs and establishing designated points of contact at all participating federal agencies’ headquarters. For example, according to LESO officials, LESO is working with designated points of contact at the federal agencies to establish a more centralized approval process to increase federal agencies’ visibility over property requests submitted by federal agency field offices, prior to the requests being approved by LESO officials. 
DLA officials estimated that MOUs will be established with all participating federal agencies by mid-2017. Given that the MOUs have either been recently established or are in the process of being finalized with some federal agencies, it is too early to evaluate the effect of the MOUs in improving the management of the LESO program. See appendix V for additional details on the results of the survey of federal law enforcement agencies. Our independent testing of the LESO program’s internal controls identified deficiencies in the processes for verification and approval of federal law enforcement agency applications. Specifically, our investigators, posing as authorized officials of a fictitious federal law enforcement agency, applied to the LESO program and were granted access in early 2017. In late 2016, we emailed our completed application to the LESO program office. Our application contained fictitious information including agency name, number of employees, point of contact, and physical location. We also created mailing and email addresses and a website for our fictitious law enforcement agency using publicly available resources. All correspondence, including follow-up questions regarding our application, was conducted by email with LESO officials. For example, after reviewing our initial application, LESO officials informed us that we needed to revise specific information on the application and resubmit it, indicating that when we did so we would be approved to participate in the program. In early 2017, we resubmitted our application and soon thereafter we were notified that our fictitious law enforcement agency was approved to participate in the LESO program. LESO officials also emailed us to request confirmation of our agency’s authorizing statute; in response, our investigators provided a fictitious authorizing provision presented as a provision of the U.S. Code. 
At no point during the application process did LESO officials verbally contact officials at the agency we created—either the main point of contact listed on the application or the designated point of contact at a headquarters’ level—to verify the legitimacy of our application or to discuss establishing a MOU with our agency. According to DLA policy, DLA is responsible for ensuring the successful implementation of the LESO program and for issuing program policy, procedures, and guidance in agency instructions and manuals. However, DLA’s internal controls for verifying and approving federal agency applications and enrollment in the LESO program were not adequate to prevent the approval of a fraudulent application to obtain excess controlled property. LESO’s reliance on electronic communications, without independent verification, does not allow it to properly vet applications for potentially fraudulent activity. For example, DLA did not require supervisory approval for all federal agency applications, or require confirmation of the application with designated points of contact at the headquarters of participating federal agencies. Additionally, at the time we submitted our application, DLA officials did not visit the location of the applying federal law enforcement agency to help verify the legitimacy of the application. However, after our briefing of DLA officials in March 2017 on the results of our investigative work, DLA officials stated that they took immediate action and, as of April 2017, had visited 13 participating federal law enforcement agencies. Further, DLA has not reviewed and revised the policy or procedures for verifying and approving federal agency applications and enrollment in the LESO program. 
Without reviewing and revising the internal controls in policy or procedures for verifying and approving federal agency applications and enrollment in the LESO program, DLA and LESO management will lack reasonable assurance of the legitimacy of applicants before transferring valuable, and in some cases potentially lethal, controlled property. Our independent testing of DLA’s internal controls also identified deficiencies in the transfer of controlled property, such as DLA personnel not routinely requesting and verifying identification of individuals picking up controlled property or verifying the quantity of approved items prior to transfer. Our investigators, after being approved to participate in the LESO program, obtained access to the department’s online systems to view and request controlled property. We subsequently submitted requests to obtain controlled property, including non-lethal items and items that are potentially lethal if modified with commercially available items. In less than a week after submitting the requests, our fictitious agency was approved for the transfer of over 100 controlled property items with a total estimated value of about $1.2 million. The estimated value of each item ranged from $277 to over $600,000, including items such as night-vision goggles, reflex (also known as reflector) sights, infrared illuminators, simulated pipe bombs, and simulated rifles. Our investigator scheduled appointments, visited three Disposition Services sites, and obtained the controlled property items, as shown in Figure 3. Using fictitious identification and law enforcement credentials, along with the LESO-approved documentation, our investigator was able to pass security checks and enter the Disposition Services warehouse sites. Personnel at two of the three sites did not request or check for valid identification of our investigator picking up the property. 
According to DLA guidance, direct pickup of allocated property may be made by an individual with a valid identification and the appropriate DOD authorization form that is signed by the authorized individual listed in the letter. DLA has not ensured that on-site officials comply with this guidance by routinely requesting and verifying valid identification of the individual(s) authorized to pick up allocated property from the LESO program. DLA officials acknowledged, however, that they could take additional steps to ensure compliance with the requirements in the handbook. If DLA does not ensure that Disposition Services on-site officials routinely request and verify valid identification, then DLA will lack reasonable assurance that controlled property is transferred to authorized individuals. Furthermore, although we were approved to receive over 100 items and the transfer documentation reflected this amount, we were provided more items than we were approved to receive. The discrepancy involved one type of item—infrared illuminators. We requested 48 infrared illuminators, but on-site officials at one Disposition Services site provided us with 51 infrared illuminators in 52 pouches, of which one was empty. Additionally, we found that one Disposition Services site had a checklist as part of its transfer documentation for its personnel to complete. The checklist required manual completion of a number of items, including quantity, date, and who fulfilled the order. The other two Disposition Services sites, including the site that transferred the wrong quantity, did not include this checklist with the transfer documentation we received. DLA guidance states that accountability records are to be maintained in auditable condition to allow property to be traced from receipt to final disposition. 
Also, the Standards for Internal Control in the Federal Government state that management may design a variety of transaction control activities for operational processes, which may include verifications, reconciliations, authorizations and approvals, physical control activities, and supervisory control activities. Additionally, DLA has guidance that describes procedures for managing and handling, among other things, sensitive and pilferable controlled inventory items but does not specifically address all items that are transferred to law enforcement agencies. Without guidance that specifically requires DLA Disposition Services’ on-site officials to verify the type and quantity of approved items against the actual items being transferred prior to removal from the sites, DLA will lack reasonable assurance that the approved items transferred are appropriately reflected in its inventory records. While DLA has taken some steps, mostly in early 2017, to address identified deficiencies in the LESO program, DLA lacks a comprehensive framework for instituting fraud prevention and mitigation measures. During the course of our review, DLA revised the LESO program applications by requiring applicants to sign an attestation that the agency that they represent is a legitimate law enforcement agency. Further, DLA officials stated they are more carefully reviewing the legitimacy of some information on the application, such as email addresses, and physically visiting federal agencies that enter into MOUs with the LESO program. However, as previously discussed, we identified internal control weaknesses in the policy or procedures for verifying and approving federal agency applications and enrollment, as well as weaknesses throughout the process from approval to the actual transfer of the items to the agencies, which indicates that DLA has not examined potential risks for all stages of the process. 
Standards for Internal Control in the Federal Government note that management should remediate identified internal control deficiencies on a timely basis, assess fraud risk by considering the potential for fraud when identifying, analyzing, and responding to risks, and analyze and respond to identified fraud risks so that they are effectively mitigated. Additionally, according to GAO’s Fraud Risk Framework, effective fraud risk managers collect and analyze data on identified fraud schemes, use these lessons learned to improve fraud risk management activities, and plan and conduct fraud risk assessments that are tailored to their programs. The framework states there is no universally accepted approach for conducting fraud risk assessments since circumstances among programs vary. However, per leading practices, assessing fraud risks generally involves five actions: (1) identifying inherent fraud risks affecting the program, (2) assessing the likelihood and effect of those fraud risks, (3) determining fraud risk tolerance, (4) examining the suitability of existing fraud controls and prioritizing residual fraud risks, and (5) documenting the program’s fraud risk profile. In conducting the fraud risk assessment, the framework states that managers should develop and document an antifraud strategy that describes, among other things, existing fraud control activities as well as any new control activities a program may adopt to address residual fraud risks. The DLA Office of Inspector General has an ongoing investigation, and its officials told us that a number of internal control weaknesses were identified and several recommendations were made to DLA that, if implemented, could help mitigate future potential fraud risks. As such, DLA has begun to examine some risks associated with the LESO program. 
During our March 2017 meeting with DLA officials, they acknowledged that they had not conducted a fraud risk assessment of the LESO program, including the application process, and, as such, had not designed or implemented a strategy with specific control activities to mitigate risks to the program. Conducting such an assessment could yield program-wide improvements, including strengthening the controls to verify the legitimacy of state and local law enforcement agencies. If DLA conducted a fraud risk assessment of the LESO program, including the application process, and designed and implemented a strategy with specific internal control activities to mitigate assessed fraud risks, DLA would be more effective in preventing, detecting, and responding to potential fraud and security risks. The National Defense Authorization Act for Fiscal Year 2016 included a provision for DOD to create and maintain a publicly available Internet site that provides information on the controlled property transferred and the recipients of such property. DOD was required to include all publicly accessible unclassified information pertaining to the request, transfer, denial, and repossession of controlled property, among other items, on the website. DLA maintains information on the controlled and non-controlled items on the LESO program homepage and has links to Excel documents about its property transfers. The property transfer lists, which date back to 1991, are updated quarterly according to LESO officials, and include information about the transfer of all excess property transferred to federal, state, and local law enforcement agencies. 
In September 2016, in response to the statutory requirement, LESO officials added the following information to the LESO program homepage and plan to include this information for future property transfers: pending transfer requests for property reclassified as controlled property by the Law Enforcement Equipment Working Group, pursuant to Executive Order 13688; shipments (transfers) of non-controlled and controlled property, including justification language submitted by the law enforcement agencies; and cancellations, including reasons for denial, broken out by three categories (state coordinator, LESO headquarters, or system denial). During the course of our audit work, we determined that the information on DLA’s Internet site did not distinguish between controlled and non-controlled items. Specifically, the information on DLA’s Internet site did not distinguish for the general public which items were considered controlled versus non-controlled property because the information was not displayed in a transparent format that is clearly understandable by the general public. DLA provided the demilitarization codes, which are used to identify controlled and non-controlled items, but the general public would need to have an understanding of demilitarization codes to identify which items were controlled based on those codes. Furthermore, as of March 2017, DLA’s Internet site did not provide a definition to explain that property with demilitarization code B, for example, is considered controlled, whereas property with demilitarization code A is considered non-controlled. However, after we briefed DLA officials in April 2017 on the results of our audit, DLA officials took immediate action and added a definition of controlled property to their Internet site to distinguish for the general public what items are considered controlled. 
DLA transfers excess controlled property to thousands of federal, state, and local law enforcement agencies that request approval to participate in the LESO program. DLA has taken some actions and plans additional actions to address identified weaknesses in its excess controlled property program, including changes in program policy and providing training. However, our investigators tested the LESO program’s internal controls by creating a fictitious agency, which allowed us to gain access to the program and obtain over 100 controlled property items valued at about $1.2 million. DLA’s internal controls were not adequate to prevent the approval of a fraudulent application, and DLA has not reviewed and revised the policy or procedures for verifying and approving federal agency applications and enrollment in the LESO program. Without reviewing and revising the internal controls in policy or procedures for verifying and approving federal agency applications and enrollment in the LESO program, DLA and LESO management will lack reasonable assurance of the legitimacy of applicants before transferring valuable, and in some cases potentially lethal, controlled property. Moreover, our investigative work found that DLA has not ensured that officials at DLA Disposition Services’ sites routinely request and verify valid identification of the individual(s) authorized to pick up allocated property from the LESO program. Without improving internal controls, DLA will lack reasonable assurance that its Disposition Services on-site officials are transferring controlled property to authorized individuals. Controlled items in the wrong hands—items such as simulated rifles and pipe bomb trainers—could result in criminal activities, including terrorism or the illegal sale or transfer of items. Additionally, we found that on-site officials did not verify the quantity of approved items prior to transfer. 
If DLA does not issue guidance that requires DLA Disposition Services on-site officials to verify the type and quantity of approved items against the actual items being transferred prior to removal from the sites, then DLA will lack reasonable assurance that the approved items transferred are appropriately reflected in its inventory records. According to DLA guidance, correct accounting for all excess property by DLA Disposition Services’ sites is critical, as noncompliance can result in property being misappropriated, with potentially severe consequences. Finally, we found that DLA lacks a comprehensive framework for instituting fraud prevention and mitigation measures that would allow it to examine potential risks for all stages of the process from application to transfer of excess controlled property to legitimate law enforcement agencies. If DLA conducted a fraud risk assessment for all stages of the process, DLA would be more effective in preventing, detecting, and responding to potential fraud and security risks. We are making four recommendations to enhance the department’s transfer of its excess controlled property. To strengthen LESO program internal controls for the application and enrollment of federal agencies, we recommend the Under Secretary of Defense for Acquisition, Technology, and Logistics direct the Director of DLA to review and revise policy or procedures for verifying and approving federal agency applications and enrollment. For example, such steps could include LESO supervisory approval for all federal agency applications; confirmation of the application with designated points of contact at the headquarters of participating federal agencies; or visiting the location of the applying federal law enforcement agency. 
To help ensure controlled property is picked up by authorized individuals, we recommend that the Under Secretary of Defense for Acquisition, Technology, and Logistics direct the Director of DLA to ensure that on-site officials responsible for the transfer of items at Disposition Services’ sites request and verify valid identification of the individual(s) authorized to pick up allocated property from the LESO program. To help ensure the accurate quantity of approved items is transferred, we recommend that the Under Secretary of Defense for Acquisition, Technology, and Logistics direct the Director of DLA to issue guidance that requires DLA Disposition Services on-site officials to verify the type and quantity of approved items against the actual items being transferred prior to removal from the sites. To strengthen LESO program internal controls, we recommend that the Under Secretary of Defense for Acquisition, Technology, and Logistics direct the Director of DLA to conduct a fraud risk assessment to design and implement a strategy with specific internal control activities to mitigate assessed fraud risks for all stages relating to LESO’s transfer of excess controlled property to law enforcement agencies, consistent with leading practices provided in GAO’s Fraud Risk Framework. We provided a draft of this report to the Department of Defense (DOD) for review and comment, and written comments are reproduced in appendix IX. DOD concurred with all four recommendations and highlighted the actions it was taking to address each recommendation. Regarding the first recommendation, DOD stated that DLA had reviewed and revised the procedures for verifying and approving federal agency applications and now requires federal agency headquarters to assign a point of contact and sign a memorandum of understanding (MOU). In addition, DOD noted DLA is updating policy to reflect the revised procedural changes. 
Regarding the second and third recommendations, while DLA has policies requiring on-site officials to request and verify identification from all customers and to verify the type and quantity of approved items being transferred prior to removal from sites, DOD stated that DLA will provide additional training on these processes to all DLA Disposition Services field sites by October 1, 2017. Regarding our fourth recommendation, DOD noted that DLA will conduct a fraud risk assessment and implement a strategy to mitigate assessed fraud risks by April 1, 2018. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Director, Defense Logistics Agency; the Secretaries of the Army, the Navy, and the Air Force; and the Commandant of the Marine Corps. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Zina Merritt at (202) 512-5257 or merrittz@gao.gov or Wayne McElrath at (202) 512-2905 or mcelrathw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix X. Federal, state, and local law enforcement agencies reported various uses and benefits from the receipt of DOD’s excess controlled property through the Law Enforcement Support Office (LESO) program. Federal law enforcement agencies and state coordinators in our survey—as well as officials we interviewed from federal, state, and local law enforcement agencies—reported various uses of DOD excess controlled property for law enforcement activities. The reported uses included enhancing counterdrug, counterterrorism, and border-security activities. 
Also, law enforcement agencies reported using DOD’s excess controlled property for other law enforcement activities, such as search and rescue, natural disaster response, surveillance, reaching barricaded suspects, police training, and the serving of warrants. For example, the Bureau of Indian Affairs reported that it has used vehicles to support its Office of Justice Services’ drug unit during marijuana eradication and border operations by providing transport to agents over inhospitable terrain in mountainous and desert environments. Also, Texas law enforcement officials reported that the San Marcos and Hays County police departments used their issued Mine Resistant Ambush Protected (MRAP) vehicles to rescue more than 600 stranded people from floodwaters in October 2015. In another example, the Los Angeles County Sheriff’s Department reported that it used a robot to remove a rifle from an attempted murder suspect who had barricaded himself. Table 4 includes additional examples reported to us on the use of excess controlled property. Federal, state, and local law enforcement agencies and state coordinators also reported various benefits from receiving DOD excess controlled property through the LESO program. The benefits were reported in survey results and identified through our case studies. Table 5 provides examples of the reported benefits. This report addresses the extent to which the Defense Logistics Agency (DLA) has: (1) taken actions to enhance processes, including internal controls, relating to its transfers of excess controlled property; and (2) addressed the statutory requirement to maintain a public Internet site that provides transparency about controlled property transfers and about the recipients of such property. 
We also include, in appendix I, survey and case study information collected between April 2016 and October 2016 on how federal, state, and local law enforcement agencies reported using and benefiting from excess controlled property transferred to them through DLA’s Law Enforcement Support Office (LESO) program in accordance with the purposes of the program, including enhancement of counterdrug, counterterrorism, and border-security activities. For this report, we relied on the Department of Defense (DOD) definition of controlled property as outlined in DLA and LESO program policy and guidance. We also confirmed the definition with LESO program officials. For objective one, we reviewed DLA and LESO program policy and guidance on LESO program processes for transferring controlled property, including DLA instructions, LESO program standard operating procedures, and memorandums of agreement between LESO and participating states, which set forth the terms and conditions of transfer, monitoring, training, accountability, and disposal of controlled property obtained through the LESO program. In addition, we reviewed Executive Order 13688, Federal Support for Local Law Enforcement Equipment Acquisition (Jan. 16, 2015), and interviewed members from the permanent Federal Interagency Law Enforcement Equipment Working Group regarding additional federal requirements for participating law enforcement agencies to obtain specific types of controlled property. We compared the additional federal requirements in the Executive Order to DLA policy, guidance, and processes to gain an understanding of how DLA has incorporated and implemented such requirements. We also reviewed DOD policy, including DLA Instruction 4140.11, Department of Defense 1033 Program (December 22, 2016), and prior issuances, to gain an understanding of policy, responsibility, and procedures regarding the administration, management, oversight, and implementation of the department’s LESO program. 
We reviewed the LESO program standard operating procedures, which outline legislative, policy, and procedural guidance; program eligibility criteria; requisitioning procedures; property accountability; property transfers and the return process; program compliance reviews; annual inventories; and training; as well as guidance specific to aircraft, watercraft, tactical vehicles, and weapons. Further, we reviewed the memorandums of agreement between LESO and participating state coordinators, which outline the general terms and conditions that each participating state agrees to regarding the management, oversight, and implementation of the LESO program for participating law enforcement agencies within the state. We analyzed DLA Electronic Freedom of Information Act Library data from calendar years 2013, 2014, and 2015 to gain an understanding of the controlled property that was transferred to federal, state, and local law enforcement agencies. Additionally, we requested and analyzed data from DLA’s automated information system on controlled property transferred to federal, state, and local law enforcement agencies. To assess the data, we interviewed relevant DLA and other agency officials who have direct knowledge of the LESO program about the steps taken to ensure the quality and accuracy of data. We determined that the data were sufficiently reliable for the purposes of our methodology, as well as for background and context. We tested the department’s internal controls and control activities related to LESO program enrollment and application after identifying a case of an unauthorized or ineligible agency gaining access to the LESO program and being awarded controlled property early in our review. Our investigators posed as a federal law enforcement agency and, using publicly available resources, created a fictitious website describing that agency’s activities. 
We completed the application paperwork, submitted it to LESO officials, and corresponded by email to answer follow-up questions. We provided a fictitious statute as a means to legitimize our agency, were approved to participate in the program, and were given access to the LESO program systems. We reviewed available controlled property and submitted requests for a variety of items located at four Disposition Services sites. After our requests for controlled property were approved, we corresponded with officials at the Disposition Services sites to arrange for pickup of the property. Our investigators visited three eastern U.S. Disposition Services sites, presented the appropriate paperwork, and obtained possession of the controlled property items. We also compared DLA and LESO practices to those identified in GAO’s A Framework for Managing Fraud Risks in Federal Programs (hereafter cited as the Fraud Risk Framework). The Fraud Risk Framework has the following components: commit to combating fraud, assess fraud risk, design and implement a strategy for mitigating risk, and evaluate outcomes. We selected leading practices from the assess fraud risk component because the use of these practices could be objectively verified. Issued in July 2015, GAO’s Fraud Risk Framework is a comprehensive set of leading practices that serves as a guide for program managers to use when developing efforts to combat fraud in a strategic, risk-based manner. 
The framework describes leading practices for (1) establishing an organizational structure and culture that are conducive to fraud risk management; (2) assessing the likelihood and effect of fraud risks; (3) developing, documenting, and communicating an antifraud strategy, focusing on preventive control activities; and (4) collecting and analyzing data from reporting mechanisms and instances of detected fraud for real-time monitoring of fraud trends, and using the results of monitoring, evaluations, and investigations to improve fraud prevention, detection, and response. Additionally, we conducted two surveys—one with federal law enforcement agencies that were major recipients of LESO controlled property and the other with state coordinators. First, for federal law enforcement agencies, we selected the top four federal departments whose law enforcement agencies had received controlled property from the LESO program during calendar years 2013, 2014, and 2015. These departments were the U.S. Department of Justice, the U.S. Department of Homeland Security, the U.S. Department of the Interior, and the U.S. Department of Agriculture. They accounted for approximately 99 percent of both the total initial acquisition value and the quantity of controlled property distributed to federal law enforcement agencies from calendar years 2013 through 2015. To gain an understanding of how federal law enforcement agency headquarters manage and oversee the LESO program, we developed and distributed a survey to the responsible officials at the headquarters level of all 15 law enforcement agencies within the four selected departments that received DOD-controlled property from calendar years 2013 through 2015. The selected agencies were: Bureau of Alcohol, Tobacco, Firearms and Explosives, U.S. Department of Justice; U.S. Drug Enforcement Administration, U.S. Department of Justice; Federal Bureau of Investigation, U.S. Department of Justice; Federal Bureau of Prisons, U.S. Department of Justice; Federal Protective Service, U.S. Department of Homeland Security; Transportation Security Administration, U.S. Department of Homeland Security; Bureau of Indian Affairs, U.S. Department of the Interior; Bureau of Land Management, U.S. Department of the Interior; U.S. Customs and Border Protection, U.S. Department of Homeland Security; U.S. Fish and Wildlife Service, U.S. Department of the Interior; U.S. Forest Service, U.S. Department of Agriculture; U.S. Immigration and Customs Enforcement, U.S. Department of Homeland Security; U.S. Marshals Service, U.S. Department of Justice; U.S. National Park Service, U.S. Department of the Interior; and U.S. Secret Service, U.S. Department of Homeland Security. The survey asked about the excess controlled property program’s accountability, policy and guidance, and the requests and justifications made for excess property. We worked with a survey specialist, a communications analyst, and subject matter experts from LESO to develop this survey. To ensure that the questions were clear, comprehensible, and technically correct, we conducted one expert review of our draft survey with LESO officials and one pre-test of our draft survey with federal headquarters staff from the Bureau of Alcohol, Tobacco, Firearms and Explosives, U.S. Department of Justice. During the pre-test, which was conducted in person, we read the instructions and each survey question aloud and asked the Bureau of Alcohol, Tobacco, Firearms and Explosives officials to tell us how they interpreted the question. We then discussed the instructions and questions with officials to identify any problems and potential solutions by determining whether (1) the instructions and questions were clear and unambiguous, (2) the terms we used were accurate, (3) the survey was unbiased, and (4) the survey did not place an undue burden on the officials completing it. We noted any potential problems and modified the survey based on feedback from the expert reviewers and pre-tests, as appropriate.
We sent an email to selected federal agency headquarters beginning on September 1, 2016, notifying them about the topics of our survey and when we expected to send it. We sent the self-administered Microsoft Word form and a cover email to 15 federal agency headquarters on September 6, 2016, and asked them to complete the survey and email it back to us within 2 weeks. We closed the survey on October 31, 2016. We received 13 completed responses, for an overall response rate of 87 percent. To gain an understanding of how state coordinators manage the LESO program within their states, we developed and distributed a survey to the 53 state coordinators participating in the program—those of the 49 participating states and the territories of Guam, the Northern Mariana Islands, Puerto Rico, and the U.S. Virgin Islands. Our survey questions focused on basic background information, LESO policies and training, the process for and accountability of the property received, and the ways in which controlled property was used by law enforcement agencies. We worked with a survey specialist and a communications analyst to develop the survey. To ensure that the questions were clear, comprehensible, and technically correct, we conducted four pre-tests of our draft survey with state coordinators and state points of contact from four states. During the pre-tests, which were conducted by teleconference, we read the instructions and each survey question aloud and asked the state coordinators and state points of contact to tell us how they interpreted the question. We then discussed the instructions and questions with officials to identify any problems and potential solutions by determining whether (1) the instructions and questions were clear and unambiguous, (2) the terms we used were accurate, (3) the survey was unbiased, and (4) the survey did not place an undue burden on the officials completing it. We noted any potential problems and modified the survey as appropriate.
We sent the self-administered Microsoft Word form and a cover email to the state coordinators on September 20, 2016, and asked them or their designated state points of contact to complete the survey and email it back to us within 2 weeks. We closed the survey on October 31, 2016. We received 50 completed responses, for an overall response rate of 94 percent. The practical difficulties of conducting any survey may introduce errors, commonly referred to as non-sampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, how the responses were processed and analyzed, or the types of people who do not respond can influence the accuracy of the survey results. We took steps in the development of the survey, the data collection, and the data analysis to minimize these non-sampling errors and help ensure the accuracy of the answers that were obtained. For example, a survey specialist designed the survey in collaboration with our staff who have subject matter expertise. Then, as noted earlier, the draft surveys were pre-tested to ensure that questions were relevant, clearly stated, and easy to comprehend, and in the case of the federal agency survey we conducted an expert review. Data were manually extracted from the Microsoft Word forms into an Excel spreadsheet, and the accuracy of the data entry was verified. We examined the survey results and performed analyses to identify inconsistencies and other indications of error, and addressed such issues as necessary. Quantitative data analyses and a review of open-ended responses were conducted by our staff who have subject matter expertise. Results of selected survey questions can be found in appendixes I, IV, and V. Further, we conducted non-generalizable case studies of five states: Arizona, Georgia, Maryland, Michigan, and Texas.
We selected these states based on the quantity, type, and initial acquisition value of controlled property received during calendar years 2013, 2014, and 2015, as well as on geographic dispersion. We selected these calendar years because they were the last three complete years prior to our audit work. First, for each state, we met with and interviewed the state coordinator and, when applicable, each state’s point(s) of contact, to discuss their roles and responsibilities in managing and overseeing the LESO program within the state. Second, we selected at least one federal, state, local, and university law enforcement agency within each case study state. To help ensure that we obtained the input of a broad range of law enforcement agencies, we selected specific agencies for our case studies based on the size, type, and location of the agency; how much controlled property was received, by quantity and initial acquisition value; and the specific types of controlled property received during calendar years 2013, 2014, and 2015. The selected law enforcement agencies accounted for both large and small percentages, as well as different types, of the controlled property received within each state. For example, we selected law enforcement agencies that received weapons, tactical vehicles, and aircraft, as well as night-vision equipment and other miscellaneous items. We met with law enforcement officials from the selected federal, state, local, and university law enforcement agencies to discuss the LESO program and to gain an understanding of the transfer process, including how they screen for, obtain, and dispose of DOD excess controlled property. Further, we reviewed LESO’s program policy to gain an understanding of how LESO ensures accountability of controlled property through an annual inventory and certification process, and of the program compliance review process in which LESO officials visit select law enforcement agencies within each state to verify all controlled property.
We accompanied a LESO compliance review team as its members conducted the annual program compliance review in Georgia. We attended the LESO-led in-brief and out-brief with the Georgia state coordinator and his team, and accompanied them to seven law enforcement agencies in Georgia to physically verify the serial numbers of controlled property. Additionally, we analyzed survey responses, as previously discussed, from federal law enforcement agencies and state coordinators regarding DLA’s processes for transferring controlled property and training on LESO program policies and processes. We interviewed officials from DLA Disposition Services, who have authority over the LESO program, as well as officials from LESO headquarters, who manage the program, to gain an understanding of LESO program policies and processes for transferring excess controlled property to law enforcement agencies, including past and planned program enhancements. We also interviewed these officials to gain an understanding of how law enforcement agencies are trained on LESO program policies and procedures. We also met with officials from selected law enforcement agencies, as previously discussed, to gain an understanding of LESO program processes, including how they screen for, obtain, and dispose of DLA excess controlled property; enhancements made to the program; and how they are trained on LESO program policies and processes. We selected these law enforcement agencies based on a number of factors, including the range of quantity of items, total acquisition value, and item type. We reviewed training materials provided by LESO and attended the 15th annual training seminar provided to state coordinators. Finally, we visited two Disposition Service sites in the United States to observe their processes for disposing of excess property received from the military services.
We selected the two Disposition Service sites based on geographic location and personnel availability. For our second objective, we reviewed the statute requiring DOD to develop and maintain an Internet site that provides information on the controlled property transferred, to gain an understanding of the statutory requirements regarding the contents of the website, which include all publicly accessible unclassified information pertaining to the request, transfer, denial, and repossession of controlled property, among other items. Additionally, we analyzed the capabilities of the DLA website, including the fields it contains and the searches that can be performed using it. We compared the information in and capabilities of the website with the statutory requirements to provide publicly available information on controlled property transferred and on the recipients of such property in a transparent manner. We also interviewed officials from LESO headquarters to obtain updates on the status of DOD’s implementation of the Internet site. Also, appendix I of this report includes survey and case study information, collected between April 2016 and October 2016, on how federal, state, and local law enforcement agencies reported using and benefiting from excess controlled property transferred to them through DLA’s LESO program in accordance with the purposes of the program, including enhancement of counterdrug, counterterrorism, and border-security activities. Additionally, we analyzed survey responses pertaining to the reported use of controlled property. For each case study, we interviewed law enforcement officials from federal, state, and local law enforcement agencies to discuss the transfer process and how controlled property transferred to them through the LESO program is used by their law enforcement agencies, including whether it had enhanced their counterdrug, counterterrorism, and/or border-security operations.
The manner in which law enforcement agencies used controlled property items was self-reported, and we have made no assessment of the agencies’ reported use. Table 6 lists the offices that we visited or contacted during our review. We conducted this performance audit from January 2016 to July 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We conducted our related investigative work in accordance with investigative standards prescribed by the Council of the Inspectors General on Integrity and Efficiency.

Since 1989, the Department of Defense (DOD) has been authorized to undertake actions intended to enhance the effectiveness of domestic law enforcement agencies through direct or material support. Table 7 includes legislative actions and key dates in the history of the LESO program.

Survey results from selected federal law enforcement agencies and LESO program state coordinators, as well as interviews with law enforcement agencies in our case studies between April 2016 and October 2016, identified that not all participating agency personnel have received training on all aspects of the LESO program, including its policies. Our survey of selected federal law enforcement agencies indicated that training had not been regularly provided to participating federal law enforcement agencies. For example, 3 of the 13 respondents to the federal survey reported that their agency had received training from LESO; the remaining 10 respondents stated that either they did not receive training from LESO or they did not know if their agency had received any training from LESO regarding the LESO program.
LESO officials told us that they have not regularly provided training to federal law enforcement agencies in the past; training has mainly been provided to the state coordinators participating in the LESO program. Survey results also showed that federal law enforcement officials generally considered training on the LESO program to be beneficial. For example, 9 of the 13 respondents to the federal survey stated that refresher training provided by LESO would be beneficial to their agency. In addition, officials from federal agencies’ field offices in our case studies generally stated that training provided by LESO would be beneficial to their participation in the LESO program and that they wanted more training to better understand, for example, LESO program processes such as the turn-in or transfer of controlled property. Officials from federal field offices in our case studies also generally stated that they were mostly self-taught on the LESO program. According to LESO officials, LESO funds and provides an annual training seminar that includes training on LESO policies and procedures for state coordinators. LESO officials stated that as a part of this annual training they direct state coordinators to train participating law enforcement agencies, and that state coordinators have discretion to establish their own training. However, our survey results showed that nearly three-fourths of the state coordinators reported that they do not provide mandatory training on LESO program policies and procedures to law enforcement agencies. Also, we found that state coordinators varied in the types of training on LESO program policies and procedures they provided to law enforcement agencies in their states, as shown in table 8.
For example, our survey found that 40 percent (18 of 45) of responding state coordinators reported that they do not provide in-person refresher or annual training, and 15 percent (7 of 46 responding to the question) reported that they do not provide training aids or reference aids (i.e., in PowerPoint format). The majority of state coordinators reported that they found LESO training “helpful,” as shown in table 9. However, the majority of state coordinators also reported that they would find LESO training modules helpful, as shown in table 10. Moreover, officials from state and local law enforcement agencies in our case studies reported different experiences regarding the availability and accessibility of training on LESO program policies and procedures from their state coordinators, and stated that they would benefit from additional training on policies and procedures, such as on returning property to DLA. For example, an official from one law enforcement agency in our case study told us that it took 8 months to receive training from his state coordinator upon joining the program. In another example, an official stated that he received little formal training from his state coordinator or from LESO officials; rather, he was trained by his predecessor when he was assigned to manage the LESO program for his law enforcement agency. In contrast, an official from another law enforcement agency stated that he attended mandatory training with his state coordinator upon joining the LESO program to learn how to set up an account and screen for items, and that his state coordinator is responsive when questions arise. As noted in this report, DOD is enhancing its processes for the transfer of excess property by developing additional training for participating law enforcement agencies on LESO program policies and procedures through an online training tool.
According to DLA officials, the online training tool will assist in providing specific information and training modules on LESO program policies and procedures to federal law enforcement agencies, and state coordinators can provide the training to law enforcement agencies in their states. DLA officials estimated the training tool would be completed in mid-2017. In our survey of 15 federal law enforcement agencies, completed in October 2016, we found that the majority (11 of 13) stated that their agency either had no memorandum of understanding (MOU) with the Department of Defense (DOD) regarding the LESO program or did not know if their agency had such an MOU. Also, the majority (11 of 13) reported that the LESO program office had not provided, or that they did not know if the LESO program office had provided, any policy or guidance to their agency on program roles and responsibilities regarding the LESO program, as shown in table 11. Moreover, the majority (7 of 13) reported that their agency did not have any standard operating procedures or standard practices outlined in policy or guidance that apply to DOD LESO-controlled property, as shown in table 11. The majority of the federal survey respondents stated that their agency had not provided any policy or guidance, or training, on topics related to the LESO program to their field locations that use the program. Table 12 shows whether or not the federal survey respondents’ agencies provided policy or guidance, or training, on the listed topics. Additionally, tables 13, 14, and 15 provide survey results regarding federal law enforcement agency interactions with LESO, whether their agency had a process for requesting and obtaining controlled property, and their familiarity with the LESO program’s processes for transferring controlled items. Figure 4 shows the application form on LESO’s website for federal law enforcement agencies.
Figure 5 shows the application on LESO’s website for state and local law enforcement agencies. Figure 6 shows the 2016 version of the application on LESO’s website for law enforcement agencies. In addition to the contacts named above, Marilyn Wasleski, Gary Bianchi, and Helena Wong (Assistant Directors), Laura Czohara (Analyst-in-Charge), Martin de Alteriis, Robert Graves, Pamela Harris, Jason Kelly, Amie Lesser, Barbara Lewis, Felicia Lopez, Maria McMullen, George Ogilvie, Richard Powelson, Ray Rodriguez, Martin Wilson, and Samuel Woo made key contributions to this report.

GAO, Excess Personal Property: DOD Should Reassess the Priorities of Its Disposal Process. GAO-16-44. Washington, D.C.: January 29, 2016.
DOD Excess Property: Control Breakdowns Present Significant Security Risk and Continuing Waste and Inefficiency. GAO-06-943. Washington, D.C.: July 25, 2006.
DOD Excess Property: Control Breakdowns Present Significant Security Risk and Continuing Waste and Inefficiency. GAO-06-981T. Washington, D.C.: July 25, 2006.
DOD Excess Property: Management Control Breakdowns Result in Substantial Waste and Inefficiency. GAO-05-729T. Washington, D.C.: June 7, 2005.
DOD Excess Property: Management Control Breakdowns Result in Substantial Waste and Inefficiency. GAO-05-277. Washington, D.C.: May 13, 2005.
Defense Inventory: Control Weaknesses Leave Restricted and Hazardous Excess Property Vulnerable to Improper Use, Loss, and Theft. GAO-02-75. Washington, D.C.: January 25, 2002.

Since 1991, DOD has reported transferring more than $6 billion worth of its excess controlled and non-controlled personal property to more than 8,600 federal, state, and local law enforcement agencies through the LESO program, which is managed by DLA. According to DOD, about 4 to 7 percent of the total excess property transferred is controlled property, which typically involves sensitive equipment and items that cannot be released to the public.
The National Defense Authorization Act for Fiscal Year 2016 included a provision that GAO conduct an assessment of DOD’s excess property program. This report addresses the extent to which (1) DLA has taken actions to enhance processes, including internal controls, related to its transfers of excess controlled property; and (2) DLA has addressed the statutory requirement to maintain a public Internet site that provides transparency about controlled property transfers and about the recipients of such property. GAO reviewed DOD policies and procedures, interviewed cognizant officials, and conducted independent testing of LESO’s application and DLA’s transfer process. The Defense Logistics Agency (DLA) has taken some actions and is planning additional actions to address identified weaknesses in its excess controlled property program. However, internal control deficiencies exist for, among other things, ensuring that only eligible applicants are approved to participate in the Law Enforcement Support Office (LESO) program and receive transfers of excess controlled property. DLA is establishing memorandums of understanding with participating federal agencies intended to, among other things, establish general terms and conditions for participation; is revising its program application to require additional information from prospective participants; and plans to provide additional online training for participating agencies, expected to begin in late 2017. However, GAO created a fictitious federal agency to conduct independent testing of the LESO program’s internal controls and DLA’s transfer of controlled property to law enforcement agencies. Through the testing, GAO gained access to the LESO program and obtained over 100 controlled items with an estimated value of $1.2 million, including night-vision goggles, simulated rifles, and simulated pipe bombs, which could be potentially lethal items if modified with commercially available items (see photos).
GAO’s testing identified that DLA has deficiencies in its processes for verifying and approving federal law enforcement agency applications and for transferring controlled property; for example, DLA personnel did not routinely request and verify the identification of individuals picking up controlled property or verify the quantity of approved items prior to transfer. Further, GAO found that DLA has not conducted a fraud risk assessment of the LESO program, including the application process. Without strengthening DLA and LESO program internal controls over the approval and transfer of controlled property to law enforcement agencies, such as by reviewing and revising policies or procedures for verifying and approving federal agency applications and enrollment, DLA lacks reasonable assurance that it has the ability to prevent, detect, and respond to potential fraud and minimize associated security risks.

(Photos: Examples of Controlled Property Items Obtained.)

DLA maintains a public Internet site to address statutory requirements to provide information on all property transfers to law enforcement agencies. DLA’s public Internet site shows all transferred property and, as of April 2017, in response to GAO’s findings, includes a definition of controlled property to distinguish for the general public which items are considered controlled. GAO is making four recommendations to DLA, including strengthening internal controls over the approval and transfer of DOD excess controlled property to law enforcement agencies and conducting a fraud risk assessment to institute comprehensive fraud prevention and mitigation measures. DOD concurred with all four recommendations and highlighted actions to address each one.
In 1941, President Roosevelt ordered all federal agencies to include in their wartime contracts a provision prohibiting contractors from discriminating against any worker because of race, color, creed, or national origin. President Johnson expanded this principle in 1965 when he issued Executive Order 11246, which required federal contractors and subcontractors, and federally assisted construction contractors, to refrain from discrimination and to take affirmative action to provide equal employment opportunity to all employees and job applicants, regardless of race, color, religion, sex, or national origin. In the early 1970s, equal employment responsibilities were expanded by statute to persons with disabilities and certain disabled and Vietnam era veterans. (See app. I for more information on the legal authorities for OFCCP.) Established in 1966, OFCCP has seen its role evolve over time. Initially, OFCCP served as a policy-making body; using a small nationwide staff, it concentrated primarily on coordinating and monitoring enforcement, while the actual day-to-day enforcement responsibilities were scattered among other federal agencies. In 1978, enforcement responsibilities were transferred from the various federal agencies to OFCCP in order to consolidate activities and improve the efficiency and effectiveness of the investigations. Since then, OFCCP has been primarily responsible for ensuring the compliance of federal contractors, subcontractors, and federally assisted construction contractors with their affirmative action and equal opportunity responsibilities. Today, OFCCP operates with a budget of about $59 million and is authorized for 825 full-time-equivalent (FTE) staff positions. OFCCP’s national office in Washington, D.C., directs the nationwide enforcement of equal employment opportunity laws and regulations among federal contractors. Field staff in OFCCP’s 10 regional offices and 57 district and area offices conduct the actual enforcement activities. 
These include reviewing federal contractors’ compliance with the applicable laws and regulations, conducting investigations of individual complaints, and providing technical support to federal contractors. While OFCCP monitors the employment practices of federal contractors, OFCCP is actually one of several federal agencies responsible for enforcing equal opportunity laws and regulations. The Equal Employment Opportunity Commission (EEOC), under title VII of the Civil Rights Act of 1964, as amended, investigates charges of employment discrimination because of race, color, religion, sex, or national origin. EEOC also is responsible for investigating discrimination charges in employment based on age, unequal pay, and physical and mental disabilities. There is some overlap in activities of these agencies, and EEOC and OFCCP operate under a memorandum of understanding (MOU) and coordination regulations to minimize any duplication of effort. For example, under the MOU, individual complaints to OFCCP alleging discrimination under title VII are referred to EEOC. Under the coordination regulations, OFCCP acts as EEOC’s agent in investigating charges of discrimination brought by certain persons with disabilities. In carrying out its mission and responsibilities, OFCCP focuses most of its resources on compliance reviews (see fig. 1). Through this mechanism, which includes a desk audit and a site visit in most cases, OFCCP analyzes a contractor’s hiring and employment practices. OFCCP seeks to determine if these practices comply with laws that it enforces. In most of its reviews OFCCP identifies violations, many of which are considered major. Regardless of the exact nature of the violation, OFCCP’s policy is to work with the contractor to resolve the case rather than to impose sanctions, such as canceling the federal contract. 
In addition to compliance reviews, OFCCP conducts complaint investigations and provides compliance support, such as technical assistance to help federal contractors understand the regulatory requirements and review process. A compliance review, which often takes between 3 and 6 months to complete, usually consists of two phases: a desk audit and a site visit. The desk audit is a systematic review of documents and materials that the contractor under review provides, explaining its efforts to ensure equal employment opportunities. As part of the desk audit, compliance officers compare the representation of women and individual minority groups in the contractor’s workforce with that of the workforces of similar federal contractors in the area, and examine the contractor’s affirmative action plan. Next, OFCCP usually conducts an on-site review at the contractor’s establishment. During this phase, compliance officers investigate potential violations identified in the desk audit, verify the contractor’s activities to implement its affirmative action program, and obtain information needed to work with the contractor to resolve any violations. Activities include inspecting the contractor’s facilities and reviewing its personnel files (see fig. 2). Compliance reviews tend to uncover violations in the vast majority of cases. OFCCP identified violations in 74 percent of its completed compliance reviews in fiscal year 1994 (see table 1), and OFCCP classified these violations as either major or minor. In 73 percent of the reviews in which violations were identified, OFCCP resolved them with conciliation agreements. Conciliation agreements are used for major violations. Many conciliation agreements address violations such as a contractor’s failure to complete a workforce utilization analysis or to correct for problems with its past performances. 
Some agreements do address outright discrimination, such as one case in which a compliance review uncovered a pattern of discrimination against African American applicants who had been denied jobs at a facility. In addition to the actual conciliation agreement, OFCCP may require the contractor to provide financial compensation to the individual victims of discrimination. For example, in fiscal year 1994, OFCCP reached 553 financial agreements valued at $39.6 million, and, in the case of the discrimination previously cited, the company agreed to pay over $630,000 in back wages to the 32 qualified applicants who had been denied jobs. OFCCP resolved the remaining compliance reviews with letters of commitment, which are used for minor violations such as the need to make technical corrections to a contractor’s affirmative action plan. While OFCCP emphasizes bringing contractors into compliance with the employment laws rather than penalizing them for not complying, OFCCP may recommend enforcement proceedings—that is, legal actions—if a contractor fails to resolve discrimination or affirmative action violations. Seventy-five cases were referred for enforcement in fiscal year 1994, and in one such case OFCCP found that a contractor discriminated in compensating a class of minorities and women. The contractor refused to conciliate, and OFCCP then recommended the case for enforcement. After an administrative hearing, the Secretary of Labor may order that a contract be suspended or cancelled, and the contractor may be debarred from doing business with the federal government. Debarments, however, are rare, with five contractors debarred in fiscal year 1994. In two of these cases, the contractors did not honor their conciliation agreements by failing to recruit and hire women, and by filing false reports. Enforcement resources not devoted to compliance reviews are used for complaint investigations and other support activities. 
OFCCP dedicates about 11 percent of its enforcement hours to investigating specific complaints of employment discrimination. OFCCP investigates cases involving groups of people or patterns of discrimination, as well as individual or group complaints filed under the disability and veterans’ laws. In fiscal year 1994, OFCCP completed 802 complaint investigations and found violations in 19 percent of the cases. OFCCP devoted the remainder of its enforcement resources—about 10 percent—to various support activities. Staff give technical assistance, such as advising contractors on how to meet their equal employment opportunity obligations. OFCCP provides this assistance by answering individual questions and sponsoring seminars on OFCCP policies and regulations. OFCCP staff also spend time supporting litigation efforts and completing other activities, such as (1) linking contractors to specific community recruitment and training resources that can help fill workforce deficiencies and (2) reviewing periodic progress reports required by agreements reached during compliance reviews. In fiscal year 1989, OFCCP’s staff size was larger than it had been since the early 1980s, and the agency completed a record number of compliance reviews. By fiscal year 1994, OFCCP’s budget had decreased by 9 percent in real dollars (see table 2). As OFCCP’s budget decreased in real terms, so did the size of its staff. From fiscal year 1989 to fiscal year 1994, OFCCP’s total FTE staff decreased by 15 percent, from 970 to 820 (see fig. 3). Moreover, the actual number of compliance officers working at OFCCP decreased by 33 percent and has been below the authorized level since fiscal year 1990, primarily because of attrition and hiring freezes. During this time the number of completed compliance reviews decreased by 33 percent, from 6,232 to 4,179. 
OFCCP officials explained that part of this decline was due to the decrease in OFCCP’s funding and staff levels, as well as a shift in emphasis from reviewing a single establishment to undertaking more labor-intensive, lengthy reviews such as corporate management reviews and construction mega-project reviews. The number of complaint investigations, in which OFCCP reacts to specific complaints filed by a person or persons, also decreased by 39 percent during this period. This drop, from 1,321 to 802 (see fig. 4), was due in large part to a reduction in the number of complaints actually received by OFCCP, according to OFCCP officials. One of the procedures OFCCP uses to initially identify contractors for compliance reviews may not lead to appropriate targeting of contractors. Because OFCCP aggregates data pertaining to all minority groups in a company during its initial selection stages, rather than focusing on data pertaining to each minority group separately, it could overlook companies that discriminate against one or more particular minority groups. Contractors are required to report on the race, ethnicity, and sex of their workforce in each of nine occupational categories. OFCCP then uses these data as part of its process to determine which contractors should be targeted for compliance reviews. This includes comparing the percentage of all minorities and the percentage of women in a contractor’s workforce to that of all other federal contractors in similar industries and geographic areas. In completing these comparisons, OFCCP combines the data pertaining to all minorities because, according to OFCCP officials, the aggregated data provide a large enough number of observations for a statistically valid analysis. Aggregated data may conceal possible discrimination against specific minority groups. 
For example, if 30 percent of a contractor’s workforce is composed of minorities, and this percentage mirrors the average minority employment for all similar federal contractors in the area, then the contractor is not as likely to be targeted for review. However, assume that all 30 percent of the contractor’s minority workforce are Hispanic when the workforces of similar federal contractors in the area are 15 percent Hispanic and 15 percent African American. While this imbalance in the racial composition of the contractor’s workforce indicates that the contractor may be discriminating against African Americans, under OFCCP’s current practice of aggregating the data, the contractor may not be identified for a compliance review. OFCCP officials acknowledge that this type of discrimination could occur and that some areas have large enough minority populations for statistically valid analyses. In commenting on a draft of this report, a DOL official stated that OFCCP will test the feasibility of using disaggregated data in identifying contractors for compliance reviews. Compliance reviews—the cornerstone of OFCCP’s enforcement strategy—have been successful in identifying violations in nearly three-quarters of the cases. However, the number of such reviews has decreased, as have the agency’s resources. At the same time, OFCCP has continued its practice of aggregating data when initially selecting contractors for compliance reviews, which may be inappropriate. Although firms report data by individual racial groups, OFCCP aggregates the data before making its selections, thereby losing an opportunity to target firms that may discriminate against particular racial groups. 
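The worked example above reduces to a few lines of arithmetic. The sketch below is illustrative only: the group shares come from the text's example, but applying an 80-percent threshold at this step is an assumption borrowed from OFCCP's utilization rankings, not a documented part of the aggregation procedure.

```python
# Hypothetical illustration of how aggregating minority-group data can
# mask an imbalance that disaggregated comparisons would reveal.
contractor = {"Hispanic": 0.30, "African American": 0.00}
area_avg   = {"Hispanic": 0.15, "African American": 0.15}

# Aggregated comparison: total minority share vs. the area total.
flagged_aggregated = sum(contractor.values()) < 0.8 * sum(area_avg.values())

# Disaggregated comparison: each group checked separately (assumed
# 80-percent threshold, for illustration).
flagged_by_group = {
    group: contractor[group] < 0.8 * area_avg[group] for group in area_avg
}

print(flagged_aggregated)  # False -- the aggregate mirrors the area average
print(flagged_by_group)    # only the African American share triggers a flag
```

In the aggregate the contractor looks unremarkable, while the disaggregated check immediately isolates the group that is absent from its workforce.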
In order to reduce the likelihood of overlooking contractors that may discriminate against particular racial groups, we recommend that, in targeting contractors for review, OFCCP use existing data on individual minority groups in geographic areas where the minority populations are large enough so that statistically valid analyses can be completed. In reviewing a draft of this report, DOL and OFCCP officials concurred with our recommendation and said they planned to test its feasibility as part of OFCCP’s fiscal year 1996 efforts to revise its selection procedures. A copy of DOL’s written comments on this report is in appendix IV. OFCCP also provided oral suggestions to clarify certain technical issues, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Labor and the Director of OFCCP, and will make copies available to others on request. Please contact Wayne B. Upshaw, Assistant Director, or me on (202) 512-7014 if you or your staff have any questions about this report. Major contributors to this report are listed in appendix V. Executive Order 11246: This order, issued in 1965, prohibits discrimination in hiring or employment opportunities on the basis of race, color, religion, sex, and national origin. It applies to all contractors and subcontractors holding any federal contracts, or federally assisted contracts exceeding $10,000 annually. In addition, the rules implementing the executive order require contractors and subcontractors with federal contracts of $50,000 or more and 50 or more employees to develop a written affirmative action program that identifies any problem areas in minority employment and provides in detail for specific steps to guarantee equal employment opportunity keyed to the problems. Section 503 of the Rehabilitation Act of 1973: This statute requires government contractors to take affirmative action to employ and advance in employment qualified persons with disabilities. 
It applies to firms with federal contracts of $10,000 or more annually. Vietnam Era Veterans’ Readjustment Assistance Act of 1974 (38 U.S.C. 4212): The affirmative action provision of this statute requires federal contractors and subcontractors to undertake affirmative action for qualified special disabled veterans and Vietnam era veterans. It applies to all federal contracts of $10,000 or more annually. Equal Employment Opportunity in Apprenticeship and Training (29 C.F.R. Part 30): This federal regulation requires equal employment opportunity and affirmative action in apprenticeship programs. It applies to all apprenticeship programs registered with the Department of Labor or with recognized state apprenticeship organizations. In addition, during the course of compliance reviews and complaint investigations, OFCCP checks for compliance with certain aspects of the Immigration Reform and Control Act of 1986 (IRCA) and the Family and Medical Leave Act of 1993 (FMLA). IRCA requires all employers to maintain a verification form pertaining to the citizenship and/or immigration status of new employees. OFCCP examines these records and reports its findings to the Immigration and Naturalization Service. FMLA requires employers to permit employees to take unpaid leave for certain family and medical reasons. Generally, any employee who takes this leave is entitled, upon return, to be restored to the same or an equivalent position without loss of benefits. OFCCP checks for compliance with this act and reports any apparent violations to the Wage and Hour Division of the Department of Labor. OFCCP divides federal contractors into two types: supply and service contractors, and contractors working on federally funded or federally assisted construction projects. Because of the differing nature of the businesses and the amount of time people are employed, OFCCP uses different data and selection criteria when selecting contractors for reviews. 
Once the contractor is selected, the compliance review procedures are similar, although, on average, supply and service contractor reviews require almost 3 times as many hours to complete as construction contractor reviews and cover almost 10 times as many workers (see table III.1). OFCCP’s Equal Employment Data System (EEDS) serves as the basis for selecting supply and service contractors for review. EEDS is developed from the information submitted to a joint reporting committee, which is composed of OFCCP and Equal Employment Opportunity Commission representatives, via the Employer Information Report (EEO-1). This report includes information on the race, ethnicity, and sex of employees in each of nine job categories and is to be filed annually by September 30. Federal regulations require most contractors and subcontractors with 50 or more employees and a federal contract worth more than $50,000 to file the EEO-1 form. Any establishment that serves as a depository of government funds or is a financial institution that is an issuing and paying agent for U.S. savings bonds is required to file. Also, all private sector employers with 100 or more employees are required to file regardless of whether they hold federal contracts. OFCCP policy directs that approximately 84 percent of the establishments selected for review be from a rank listing of “flagged” contractors. Using the information in EEDS, OFCCP ranks contractor establishments on the basis of each contractor’s “average utilization value” of minorities and women, with separate values calculated for each. The average utilization rate is derived by first comparing the contractor’s percentage of minorities (or women) employed in each of the nine occupational categories to the average employment of minorities (or women) for all federal contractors in the specified industry and geographic area. These nine values are then averaged to arrive at one number that is used as the average utilization value. 
Contractors are then ranked on the basis of their minority or female utilization value, whichever is lower. In addition, the EEDS produces a “concentration index” that is used to examine how minorities and women are distributed throughout a contractor’s workforce. In developing this index, more weight is given to those occupational categories that receive higher wages. Using these two calculations, establishments are then flagged by EEDS as appropriate candidates for further OFCCP review. A contractor is flagged when the establishment’s utilization rate is less than 80 percent of the industry average of either minorities or women, and there is a relatively high concentration of either minorities or women in lower wage occupations. Each district office then receives a listing of flagged establishments in its jurisdiction, which are then examined to determine if they are eligible to be reviewed. Contractors are eligible for review if they have not been reviewed in the past 2 years, are not under a court order resulting from equal employment opportunity legislation, and hold a current federal contract. OFCCP policy also directs that about 15 percent of the contractors reviewed are to be chosen at the discretion of OFCCP district directors. In making these discretionary selections, directors are to consider complaints and community concerns about employers, awards of large federal contracts that may increase employment opportunities, establishments that do not file required reports, expansion of employment in an industry or at specific locations, and significant reductions in employment that impact minorities or women. OFCCP does not compile statistics on the specific reasons for selecting contractors for review under the district directors’ discretion. The remaining 1 percent of compliance reviews target a randomly selected sample from EEDS. 
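The ranking arithmetic just described can be sketched in a few lines. Everything below is a simplified, hypothetical illustration: real EEDS records cover nine occupational categories per establishment, and the concentration index is reduced here to a single boolean input.

```python
# Sketch of the "average utilization value" and flagging logic described
# in the text; all input figures are hypothetical.

def average_utilization(contractor_pct, industry_pct):
    """Average, across occupational categories, of the contractor's
    minority (or female) share relative to the industry/area average."""
    ratios = [c / i for c, i in zip(contractor_pct, industry_pct) if i > 0]
    return sum(ratios) / len(ratios)

def is_flagged(minority_util, female_util, concentrated_in_low_wage):
    # Flagged when the lower utilization value is below 80 percent of
    # the industry average (a ratio under 0.8) and minorities or women
    # are relatively concentrated in lower-wage occupations.
    return min(minority_util, female_util) < 0.8 and concentrated_in_low_wage

# Hypothetical minority shares in nine categories vs. area averages.
contractor = [0.04, 0.06, 0.10, 0.10, 0.15, 0.20, 0.22, 0.28, 0.30]
industry   = [0.10, 0.12, 0.15, 0.15, 0.22, 0.25, 0.28, 0.33, 0.38]

util = average_utilization(contractor, industry)
print(round(util, 2))                # 0.68
print(is_flagged(util, 0.95, True))  # True -- candidate for further review
```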
District offices are required to review the randomly selected contractor establishments with more than 100 employees unless the establishment is under a court order resulting from equal employment opportunity litigation; has been reviewed in the last 2 years; or cannot be reviewed for some reason, such as it is no longer in business. Because of the fluctuating and temporary nature of the construction industry, the Department of Labor has historically treated construction contractors separately from supply and service contractors. While those construction contractors that meet the EEO-1 filing requirements should file reports that would be contained in EEDS, the EEDS information is not used in selecting construction contractors for compliance reviews. Instead, OFCCP relies on other sources for information. OFCCP’s national office purchases listings of active construction projects in each district and area office’s jurisdiction. These listings summarize information on publicly funded construction projects compiled by F.W. Dodge, a private company that publishes construction industry information. They include the contract value and the type of construction project but not the name of the contractor or any information concerning the contractor’s employees. The district directors select “likely candidates” from this list. A district director then orders a profile sheet for each project, and this sheet includes owner and general contractor information. OFCCP staff then contact the prime contractor to obtain the names, addresses, and size of the major subcontractors, including the number of personnel and the value of the subcontracts. In selecting construction contractors for review, a district director gives first priority to contractors that have not been reviewed for the longest time, have received substantial federal or federally assisted contracts resulting in large workforces, and employ fewer minorities or women than would reasonably be expected. 
In addition to those named, the following individuals contributed to this report: Larry Horinko and Robert Sampson did the initial audit work; Nancy Kintner-Meyer assisted with augmenting the audit work and analysis; and Timothy Silva and James Spaulding reviewed and commented on early drafts of the report.

Pursuant to a congressional request, GAO provided information on the Department of Labor's Office of Federal Contract Compliance Programs' (OFCCP) oversight of federal contractors' equal employment opportunity (EEO) practices, focusing on: (1) OFCCP fulfillment of its mission and responsibilities; (2) changes in OFCCP resources in recent years; and (3) whether the OFCCP selection procedure for contractor reviews could mask discrimination against specific minority groups. 
GAO found that: (1) OFCCP uses compliance reviews which compare the racial and gender composition of the contractor's workforce with those of similar federal contractors to ensure that federal contractors use nondiscriminatory employment practices; (2) when OFCCP identifies EEO violations during its compliance reviews, it resolves the violations by working with the contractors rather than imposing sanctions on the contractors; (3) OFCCP recommends enforcement proceedings only if the contractor does not correct its EEO violation; (4) OFCCP uses 11 percent and 10 percent, respectively, of its enforcement resources for complaint investigations and compliance support; (5) from 1989 to 1994, OFCCP financial and staff resources decreased 9 percent and 15 percent, respectively, and the number of compliance reviews completed decreased by 33 percent; (6) although OFCCP aggregates data on all minority employees in a given contractor's workforce during the initial selection stage of compliance reviews, it may overlook a contractor's discriminatory practices against one or more particular minority groups; and (7) OFCCP uses aggregate data to identify contractors for compliance reviews because the data produce a large enough number of observations for a statistically valid analysis.
To respond to food aid emergencies, USAID can provide emergency food aid commodities through four general delivery processes: standard shipping, overseas prepositioning, domestic prepositioning, or diversion (see fig. 1). USAID determines the most appropriate of these four processes by analyzing the request for food aid, the nature of the emergency, and the availability of commodities and resources. USAID aims to maintain a combined total of up to 100,000 metric tons of food in its prepositioning supply chain at any given time. USAID currently maintains prepositioning warehouses in six overseas locations (Colombo, Sri Lanka; Djibouti, Djibouti; Dubai, United Arab Emirates; Durban, South Africa; Las Palmas, Spain; and Mombasa, Kenya) and two domestic locations (Jacinto Port, Texas, and Miami, Florida). In 2007, we reported that U.S. food aid delivery is generally too time consuming to be sufficiently responsive in emergencies, requiring 4 to 6 months on average, including time required for procurement and transportation of the commodities. We recommended, among other things, that USAID conduct a cost-benefit analysis of prepositioning to improve its food aid logistical planning. In 2008, USAID commissioned a cost-benefit analysis of the U.S. government’s food prepositioning activities. The commissioned analysis did not compare the timeliness of domestic and overseas prepositioning; however, it recommended that USAID consider increasing the amount of food aid prepositioned domestically, to improve its response times to critical emergency program needs. In 2013, a report by USAID’s Inspector General found that the agency had not determined whether the benefits of overseas prepositioning in the Horn of Africa outweighed the costs or whether overseas prepositioning saved time in comparison with domestic prepositioning. Prepositioning of emergency food aid reduces the average delivery time frame for USAID’s food aid shipments for WFP and other cooperating sponsors. 
For WFP, prepositioning food aid in overseas and domestic warehouses shortened delivery time frames by an average of almost a month. For nine of USAID’s other cooperating sponsors, prepositioning food aid in domestic and overseas warehouses shortened delivery time frames by an average of more than 2 months. In addition, diversion of emergency food aid from the prepositioning process shortened delivery time frames by, on average, about 2 months for WFP and the other sponsors. We estimated, using statistical modeling to control for various factors, that prepositioning food in overseas and domestic warehouses shortened the average delivery time frame for shipments for WFP by 28 days in fiscal years 2009 through 2012 compared with USAID’s standard shipping process. As table 1 shows, during this period, the 472 prepositioned shipments for WFP had an average delivery time frame of about 102 days. In contrast, the 1,665 standard shipments for WFP had an average delivery time frame of 135 days. According to WFP officials, USAID’s prepositioning of food in warehouses has helped WFP provide food aid more quickly. WFP officials also stated that because prepositioned food is available for immediate collection from warehouses, prepositioning helps ensure a sufficient food aid supply to meet spikes in demands due to unforeseen emergencies. Although both overseas and domestic prepositioning shortened delivery time frames for WFP, the estimated time savings were larger for overseas prepositioning. As table 1 shows, we estimated, using statistical modeling to control for various factors, that prepositioning food in overseas warehouses saved an average of 41 days compared with USAID’s standard shipping process. We found that prepositioning food in domestic warehouses saved fewer days—an estimated average of 16. 
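The "statistical modeling to control for various factors" is, at bottom, a regression: delivery time is modeled as a function of a prepositioning indicator plus shipment characteristics, and the indicator's coefficient estimates the days saved with those characteristics held constant. Below is a minimal ordinary-least-squares sketch on fabricated data with a single control variable; it illustrates the general technique, not GAO's actual model or data.

```python
# Minimal OLS via the normal equations (X'X) b = (X'y), solved with
# Gauss-Jordan elimination -- enough for this three-parameter sketch.

def solve(A, b):
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Fabricated shipments: delivery days, prepositioned indicator, one control.
days = [100.0, 95.0, 110.0, 105.0, 140.0, 130.0, 150.0, 135.0]
prep = [1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0]
dist = [2.0, 1.0, 3.0, 2.0, 2.0, 1.0, 3.0, 2.0]

X = [[1.0, p, d] for p, d in zip(prep, dist)]
XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
Xty = [sum(r[i] * y for r, y in zip(X, days)) for i in range(3)]
coef = solve(XtX, Xty)

# coef[1]: change in delivery days associated with prepositioning,
# holding the control constant; negative means days saved.
print(coef[1])
```

On this toy data the coefficient comes out to roughly -36 days, the same kind of quantity as the 28-day estimate reported for WFP.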
We estimated, using statistical modeling to control for various factors, that prepositioning food aid commodities in overseas and domestic warehouses in fiscal years 2007 through 2012 saved an average of about 67 days for nine of USAID’s cooperating sponsors, compared with USAID’s standard shipping process. As table 2 shows, during this period, the average delivery time frame for the 141 prepositioned shipments for these cooperating sponsors was about 87 days. In contrast, the average delivery time frame for the 869 standard shipments was 161 days. Both overseas and domestic prepositioning resulted in time savings, with larger savings from overseas prepositioning. As table 2 shows, after controlling for various factors, we estimated that prepositioning food for these cooperating sponsors in overseas warehouses saved an average of 73 days relative to USAID’s standard shipping process. We also estimated that, relative to the standard shipping process, prepositioning food in domestic warehouses saved USAID an average of 62 days. While prepositioning of food aid shortens delivery time frames, it has some disadvantages, according to one cooperating sponsor’s representative whom we interviewed. For example, the representative noted that although overseas prepositioning makes commodities available for immediate collection and thus saves time, handling of prepositioned commodities by multiple parties can lead to losses due to damage during the transit from prepositioning warehouse to discharge port. In addition, prepositioned commodities may be commingled in the warehouse, making it difficult to identify infestation problems in a particular shipment of commodities. The cooperating sponsor’s representative further noted that prepositioned commodities may have to be fumigated several times if they remain in the warehouse for months; if the commodities are overexposed to the fumigation chemicals, they can no longer be used for food aid. 
We estimated, using statistical modeling to control for various factors, that diversion of emergency food aid from the prepositioning process shortened the average delivery time frame by about 64 days in fiscal years 2007 through 2012 compared with USAID’s standard shipping process (see table 3). During this period, the average delivery time frame for the 568 diverted shipments was about 76 days. In contrast, the average delivery time frame for the 938 standard shipments was 156 days. According to USAID officials, prepositioning of emergency food aid allows for greater flexibility to divert food assistance when necessary to meet immediate needs. The officials further stated that, because the U.S. government retains ownership of prepositioned food aid, USAID does not have to replace diverted commodities. In contrast, when food aid is shipped under the standard process, non-governmental cooperating sponsors usually take ownership of commodities before they are shipped, and depending on the availability of commodities and resources, USAID may decide to replace diverted shipments. USAID has diverted commodities before and after they left the U.S. port and at all points in the procurement, transportation, or shipping process. Our analysis shows that in fiscal years 2007 through 2012, an average of 80 days elapsed between USAID’s ordering commodities for prepositioning and diverting the commodities. The smallest number of days between USAID’s ordering and diverting commodities was 6 days, and the largest number of days was 163. Our analysis also shows that a larger number of days between USAID’s ordering and diverting commodities is associated with shorter delivery time frames. USAID pays additional costs for prepositioning of emergency food aid, compared with standard shipping. For both overseas and domestic prepositioning, USAID incurs additional warehouse costs that it does not incur for standard shipments. 
USAID also incurs additional shipping costs due to a second leg of ocean shipping from overseas prepositioning warehouses to foreign discharge ports. Furthermore, USAID often pays higher weighted annual average prices for domestically prepositioned commodities than it does for standard shipment commodities. USAID incurs various additional costs to store prepositioned food in overseas and domestic prepositioning warehouses, depending on the quantity of food stored and the duration of the storage. In contrast, for standard shipments of emergency food aid, USAID transports the food directly from U.S. ports to recipient countries. The cost per day for food storage at USAID’s six overseas prepositioning warehouses averaged $0.25 per ton in fiscal years 2011 and 2012; the longer food is stored at the prepositioning warehouses, the higher the storage costs. In fiscal years 2011 and 2012, USAID expended approximately half of the annual $10 million authorized by the fiscal year 2008 Farm Bill for overseas warehouse costs. Warehouse costs include, in addition to food storage, storage of nonfood items (e.g., bags, pallets, and cartons) and payment of warehouse operators for handling commodities (e.g., unloading trailers and loading bulk commodities). To help ensure that its total costs for overseas prepositioning warehouses do not exceed the ceiling authorized by Congress, USAID sets annual cost targets for each warehouse that total $8.5 million. In fiscal year 2011, USAID expended approximately $5.4 million—about 54 percent of its annual ceiling—for food storage and other costs for its overseas prepositioning warehouses. In fiscal year 2012, USAID expended about $4.6 million—about 46 percent of its annual ceiling. Table 4 shows USAID’s annual targets and total expenditures for its overseas prepositioning warehouses in fiscal years 2011 and 2012. 
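Since the storage component of warehouse cost is linear in tonnage and dwell time, it can be estimated directly from the $0.25-per-ton daily average cited above; the tonnage and dwell time below are illustrative, not actual USAID figures.

```python
DAILY_RATE = 0.25  # dollars per metric ton per day (FY2011-2012 average)

def storage_cost(tons, days, rate=DAILY_RATE):
    """Storage cost for prepositioned food: linear in quantity and dwell time."""
    return tons * days * rate

# Illustrative: 20,000 metric tons held for 90 days.
print(storage_cost(20_000, 90))  # 450000.0 dollars
```

This linearity is why, as the text notes, longer storage at the prepositioning warehouses directly raises storage costs.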
In addition, USAID expended $2.0 million in fiscal year 2011 and $3.1 million in fiscal year 2012 for its domestic prepositioning warehouse in Jacinto Port. However, these expenditures did not count against the $10 million ceiling set by the 2008 Farm Bill. USAID incurs additional shipping costs when overseas prepositioning requires a second leg of ocean shipping, from the overseas prepositioning warehouses to the final discharge ports. Although the first leg of ocean shipping—from a U.S. port to the overseas warehouse—is comparable to ocean transport for standard shipping, the second leg does not exist in the standard shipping process. For example, a prepositioned shipment might undergo a first leg of ocean shipping to the Djibouti warehouse and a second leg of ocean shipping from Djibouti to Beira, Mozambique; in contrast, a standard shipment would travel directly from a U.S. port to Beira. As table 5 shows, in fiscal year 2012, 75 percent of shipments from overseas prepositioning warehouses—including 100 percent of shipments from Colombo, Las Palmas, and Lomé—traveled via ocean freight and thus involved costs for the second leg of ocean shipping. In fiscal year 2012, USAID paid a total of $13 million, averaging $143 per metric ton, for the second leg of ocean shipping for overseas prepositioned food aid. In fiscal years 2007 to 2012, USAID generally paid higher weighted annual average prices for commodities that it purchased for domestic prepositioning than for similar commodities that it purchased for standard shipping. Six commodities—corn-soy blend, cornmeal, pinto beans, vegetable oil, yellow split peas, and sorghum—accounted for 76 percent of USAID’s domestic prepositioning purchases in those years. Figure 2 shows the percentage differences between the weighted annual average prices that USAID paid per ton for domestically prepositioned and standard shipment commodities in fiscal years 2007 through 2012. 
In figure 2, the bars above the zero line show that USAID paid a higher weighted annual average price for domestically prepositioned commodities in 24 instances, and the bars below the zero line show that USAID paid a lower weighted annual average price in 8 instances. USAID generally did not pay higher weighted annual average prices for overseas prepositioning of these commodities during this period. (See app. IV for more information.) Many factors may have contributed to the higher weighted annual average prices paid for domestically prepositioned commodities. For example, limited commodity supply and limited numbers of suppliers for domestic prepositioning purchases are two possible factors, according to U.S. officials and commodity vendors whom we interviewed. First, if USAID purchases commodities for domestic prepositioning after commodity vendors have fulfilled their monthly orders for standard shipment and overseas prepositioning, vendors may have a reduced supply of commodities available for domestic prepositioning and may therefore charge higher prices. Second, because domestically prepositioned commodities—unlike standard shipment and overseas prepositioned commodities—are delivered to only two preselected U.S. ports, vendors for some commodities may be unwilling to compete with vendors who are geographically closer to the port and who thus can deliver the commodities to the port more cheaply. USAID has taken some steps to evaluate timeliness and costs associated with prepositioning. However, the agency does not collect and assess data needed to systematically monitor delivery time frames for prepositioned commodities to maximize time savings. Further, USAID does not systematically monitor the costs of prepositioning to maximize the program’s cost-effectiveness. 
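The weighted annual average prices compared in figure 2 weight each purchase's price by the tonnage bought. A short sketch with made-up purchase records shows the computation:

```python
def weighted_avg_price(purchases):
    """Annual average price per ton, weighted by tons purchased.
    `purchases` is a list of (price_per_ton, tons) pairs."""
    total_cost = sum(price * tons for price, tons in purchases)
    total_tons = sum(tons for _, tons in purchases)
    return total_cost / total_tons

# Hypothetical one-year purchases of a single commodity.
prepositioned = [(520.0, 1_000), (560.0, 3_000)]
standard      = [(500.0, 5_000), (510.0, 5_000)]

wp = weighted_avg_price(prepositioned)   # 550.0
ws = weighted_avg_price(standard)        # 505.0
pct_diff = 100 * (wp - ws) / ws          # positive: prepositioning cost more

print(round(pct_diff, 1))  # 8.9
```

A positive difference like this one corresponds to a bar above the zero line in figure 2; a negative difference, to a bar below it.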
According to USAID guidance, the agency should monitor programs through various activities, including collecting and analyzing data to make necessary program adjustments and to guide higher-level decision making and resource allocation. Additionally, federal internal control standards indicate that monitoring should assess the quality of performance over time and ensure that the findings of these assessments are promptly resolved. USAID has taken some steps to evaluate prepositioning of emergency food aid by examining timeliness and costs. For example, in 2008, USAID commissioned a cost-benefit analysis of prepositioning. The analysis found that prepositioning food aid reduced delivery time frames of emergency shipments from an average of 177 days for standard shipments of food aid to an average of 26 days. The analysis also found that the average cost for domestically prepositioned food aid amounted to an additional $23 per metric ton, and the average cost for overseas prepositioned food aid amounted to an additional $164 per metric ton. However, the methodology for the 2008 analysis had several limitations. First, the analysis did not include data on delivery time frames for WFP, USAID’s largest cooperating sponsor for emergency food aid. Second, the analysis included data on delivery time frames up to the date when the shipments were discharged at the foreign ports but did not include any data up to the date when the shipments arrived in the recipient countries. Third, the evaluation did not compare shipments of prepositioned food with standard shipments that had similar characteristics, such as the recipient country. Because differences in such characteristics can contribute to differences in delivery time frames, the evaluation may not have accurately isolated the effects of prepositioning. In 2009, USAID also developed a framework for prepositioning that outlines the purpose of the program. 
However, the framework is not up to date and does not guide current prepositioning practices, according to USAID officials. Additionally, the framework does not provide guidelines for evaluating the timeliness or costs of prepositioning. In 2013, USAID’s Inspector General found that the agency had not determined whether the benefits of overseas prepositioning in the Horn of Africa outweighed the costs and whether overseas prepositioning saved time compared with domestic prepositioning. The report recommended, among other things, that USAID conduct another independent evaluation of the cost and timeliness of prepositioning. In November 2013, USAID released a solicitation for a third-party vendor to conduct an independent evaluation of prepositioning; the contract was awarded in January 2014. According to USAID, the evaluation will be finalized in March 2014 and will include an analysis of the cost, benefits, and effectiveness of prepositioning. Although USAID’s goal for prepositioning is to improve the timeliness of emergency food aid, USAID does not collect or assess data needed to systematically monitor delivery time frames for prepositioned emergency food aid shipments. According to guidance in USAID’s Automated Directives System, the agency should monitor programs through various activities, including collecting and analyzing data to make necessary program adjustments and to guide higher-level decision making. Moreover, USAID’s evaluation policy states that programs can best manage for results by collecting and analyzing information to track progress toward planned results and by ensuring that implementing partners collect relevant monitoring data. The policy also states that monitoring can reveal whether desired results are occurring and whether implementation is on track. USAID officials told us that the agency would be able to calculate delivery time frames of prepositioned food only if its cooperating sponsors provided data on all emergency food aid shipments. 
According to the officials, these sponsors are responsible for tracking food shipments once the shipments leave U.S. loading ports. However, according to USAID, the terms of its agreements with its cooperating sponsors do not require these sponsors to collect or provide comprehensive data to USAID. As a result, USAID lacks the ability to determine whether prepositioning is achieving its primary goal of shortening response times or to identify possible time savings relative to standard emergency food aid shipments. In addition, some emergency food aid data that are currently available from WFP and USAID’s other cooperating sponsors have limitations that constrain their usefulness for monitoring delivery time frames. We collected data on emergency food aid shipments, including prepositioned shipments, from 19 cooperating sponsors; however, because of the following limitations, we did not include some of these data in our analysis of delivery time frames. Limitations in WFP data. We did not use data provided by WFP for our analysis of diversion’s effect on delivery time frames, because WFP’s data on emergency food aid shipments do not distinguish diversions from other standard or prepositioned shipments. WFP also was unable to provide data for years before 2009 because of changes in its data management systems. As a result, we used data provided by a freight forwarder for our analysis of WFP diversions. Limitations in some data from several other cooperating sponsors. Some data from five cooperating sponsors that received emergency food aid in fiscal years 2007 through 2012 do not distinguish emergency food aid shipments from other types of food aid shipments. In addition, some data for these five cooperating sponsors and three others were incomplete, missing some shipments and providing only partial information for others. As a result, we were unable to use some of the data from these eight sponsors. 
Although USAID collects data on the cost of its international food aid programs, it does not use these data to systematically monitor the total cost of prepositioning in order to maximize the program’s cost-effectiveness. According to USAID’s Automated Directives System, the agency should collect and analyze data to make necessary program adjustments and to guide higher-level decision making and resource allocation. Further, federal internal controls indicate that monitoring should assess the quality of performance over time and ensure that the findings of these assessments are promptly resolved. USAID collects data on the total costs of its overseas prepositioning warehouses to ensure that the costs do not exceed the limit established by Congress. However, USAID does not analyze this information to determine whether the locations of these warehouses are cost-effective. Further, the agency does not systematically or routinely monitor and analyze prepositioning commodity and shipping costs to determine the cost-effectiveness of prepositioning. USAID collects data on prepositioning commodity and shipping costs, according to agency officials. However, USAID does not store these data in a single system or spreadsheet, where they would be easily accessible for monitoring purposes. Instead, according to USAID, various offices collect these data and store them in a number of different systems or spreadsheets. In addition, agency officials told us that because prepositioning is not a separate program but is part of USAID’s emergency food aid program, the agency does not monitor prepositioning costs separately. These officials also stated that the agency does not monitor prepositioning’s total costs, because the primary goal of prepositioning is rapid response rather than financial savings. As a result of the lack of monitoring of prepositioning costs, USAID does not conduct analyses that could help it manage its resources and improve the program’s cost-effectiveness. 
For example, the agency does not know the causes of the commodity price differences that we identified and therefore is limited in its ability to ensure that the procurement of prepositioned commodities is conducted in a cost-effective manner. The emergency food aid that USAID provides helps to save lives in countries where ongoing or unanticipated crises have severely disrupted food supplies. Our analysis of prepositioning showed that this approach can have a significant effect on the United States’ and its cooperating sponsors’ ability to deliver food quickly in response to humanitarian emergencies. However, without reliable data on the delivery time frames of prepositioned shipments, USAID lacks management tools that could help it assess and improve prepositioning’s effect on emergency response times. Such data could also help USAID assess the tradeoffs between prepositioning’s timeliness and its additional warehouse, shipping, and commodity costs. Furthermore, although cost savings are not USAID’s primary goal for the prepositioning program, without systematic monitoring of the total cost of prepositioning the agency has limited ability to maximize the resources available for addressing emergency food crises. To strengthen USAID’s ability to help ensure that its food aid prepositioning program meets the goal of reducing delivery time frames in a cost-effective manner, we recommend that the USAID Administrator take the following three steps to systematically
1. collect, and ensure the reliability and validity of, data on delivery time frames for all emergency food aid shipments, including prepositioned food aid shipments;
2. monitor and assess data on delivery time frames for prepositioned food aid shipments; and
3. monitor and assess costs associated with commodity procurement, shipping, and storage for prepositioned food aid shipments.
We provided a draft of this report for comment to USAID. 
USAID provided written comments on a draft of this report, which we reprinted in appendix VI. In its comments, USAID concurred with our recommendations. In addition, USAID stated that it is working to identify actions needed to ensure the collection of reliable data on delivery time frames for all emergency food aid. USAID also stated that it is revising its prepositioning program strategy to provide a framework for monitoring and assessing data on the timeliness and cost-effectiveness of the prepositioning program. In addition to providing copies of this report to your offices, we will send copies to interested congressional committees. We will make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-9601 or MelitoT@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. We examined (1) the effects of USAID’s prepositioning on delivery time frames for emergency food aid shipments, (2) the effects of prepositioning on the costs of emergency food aid, and (3) the extent to which the agency monitors prepositioning to manage program resources effectively. To examine the effects of prepositioning on delivery time frames, we collected shipment-level data from the World Food Program (WFP) and six freight forwarders for 19 other cooperating sponsors. We assessed the reliability of these data by asking WFP and the six freight forwarders how the data were collected, what quality checks were performed, and what other internal controls were in place. 
We determined that the data were sufficiently reliable for tracking the delivery time frames of emergency food aid for WFP and three of the freight forwarders that manage the food aid supply chain for nine cooperating sponsors. For the remaining three freight forwarders, the data were not sufficiently reliable for tracking delivery time frames because of a number of limitations, including incomplete or missing data and, in one freight forwarder’s data, a lack of distinction between emergency and nonemergency shipments. The data that we included in our analysis of the effects of prepositioning on delivery time frames include the date when the cooperating sponsor requested the food (or the closest approximation) and the date when the food arrived at the discharge port or in the recipient country. In addition, the data indicate whether the shipment was from prepositioning. The data also include other shipment characteristics—for example, the loading port, the discharge port, and the commodity shipped—and indicate whether the shipment was an emergency or nonemergency shipment and whether it was bulk or packaged. We separately discuss our analysis of data provided by WFP and the other cooperating sponsors’ three freight forwarders because WFP’s and these freight forwarders’ data differ in the time periods covered and their treatment of food aid diversions. Time periods covered. Data provided by WFP cover shipments in fiscal years 2009 through 2012, while data provided by the three freight forwarders cover shipments in fiscal years 2007 through 2012. Treatment of food aid diversions. Data provided by WFP do not distinguish shipments that USAID diverted from the prepositioning process from shipments that it prepositioned, while data provided by the freight forwarders do identify diversions. Despite this difference, our analysis suggests that using WFP’s data to estimate the number of days saved from prepositioning would not have affected our estimates significantly. 
To analyze delivery time frames, we calculated the number of days between a cooperating sponsor’s request for the commodities and the commodities’ arrival at the discharge port or in the recipient country. To compare delivery time frames for overseas and domestic prepositioning with those for standard shipping for WFP and for other cooperating sponsors, we estimated ordinary least squares regression models to control for characteristics of prepositioning and standard shipments that would allow us to isolate the effect of prepositioning. To examine the effects of USAID’s diversions of emergency food aid shipments on delivery time frames, we used data primarily from the freight forwarder responsible for tracking all diverted shipments for WFP and for eight other cooperating sponsors. This freight forwarder’s data include all diverted food aid shipments from fiscal years 2007 through 2012. For the comparison group of standard shipments, we used data from this freight forwarder in addition to data from two other freight forwarders. We determined that these data were sufficiently reliable for tracking delivery time frames. Like the data that we included in our analysis for prepositioning, the diversions data are at the shipment level and include the date when WFP and other cooperating sponsors requested the food (or the closest approximation) and the date when the food arrived at the discharge port or in the recipient country. In addition, the data indicate whether the shipment was diverted. The data also include other shipment characteristics—for example, the loading port, the discharge port, and the commodity shipped—and indicate whether the shipment was bulk or packaged and was an emergency or nonemergency shipment. GAO analysts collected the dates USAID authorized food aid diversions by reviewing one freight forwarder’s shipment documents. 
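The delivery time frame calculation described above, the number of days between a sponsor’s request (or the closest approximation) and the food’s arrival, can be sketched in Python. The shipment records below are hypothetical, not data from WFP or the freight forwarders.

```python
from datetime import date

# Hypothetical shipment records: one prepositioned, one standard.
shipments = [
    {"id": "A1", "type": "prepositioned",
     "requested": date(2012, 3, 1), "arrived": date(2012, 4, 5)},
    {"id": "B2", "type": "standard",
     "requested": date(2012, 3, 1), "arrived": date(2012, 7, 20)},
]

# Delivery time frame: days from the request date to arrival at the
# discharge port or in the recipient country.
for s in shipments:
    s["delivery_days"] = (s["arrived"] - s["requested"]).days
    print(f'{s["id"]} ({s["type"]}): {s["delivery_days"]} days')
```

The same subtraction applies whichever end date a data source records (discharge port or recipient country), which is why the analysis controls for the port of arrival.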
To analyze additional costs associated with prepositioning, we looked at three types of cost that are unique to prepositioned food: costs for prepositioning warehouses, costs for ocean shipping from overseas prepositioning warehouses to discharge ports, and costs for procurement for prepositioning commodities. Warehouse costs. To analyze prepositioning warehouse costs, we obtained warehouse contracts from USAID. Each contract identifies the annual expenditure target that USAID had established for the warehouse as well as the daily storage charge per metric ton and outlines other charges related to handling the commodities. Ocean shipping costs. We estimated the additional cost due to prepositioning using the actual cost of the second leg of ocean shipping, because the data required to reconstruct the hypothetical cost of direct shipping for every prepositioned shipment were not available to us. Although the first leg of ocean shipping—from a U.S. port to the overseas warehouse—is comparable to ocean transport for standard shipping, the second leg does not exist in the standard shipping process. We obtained data from USAID that track the mode of transportation of shipments from each of the overseas warehouses, the metric tonnage, and the freight cost. (See app. III for examples of the additional ocean transportation costs.) Commodity procurement costs. To analyze prepositioning commodity procurement costs, we focused on six commodities that accounted for 76 percent of total prepositioning purchases. We compared the weighted annual average prices of commodities purchased for domestic prepositioning, overseas prepositioning, and standard shipping. To understand the possible reasons for the observed differences in prices for prepositioned commodities, we interviewed U.S. Department of Agriculture (USDA) officials as well as commodity vendors. (See app. IV for the results of the price comparison.) 
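The per-ton figure for the second leg of ocean shipping is a tonnage-weighted average of freight costs across the overseas warehouses. A minimal sketch, using hypothetical warehouse-level figures (not USAID data) chosen so the totals match the fiscal year 2012 aggregates reported earlier:

```python
# Hypothetical warehouse-level figures for the second leg of ocean
# shipping, from overseas warehouses to discharge ports.
second_leg = [
    {"warehouse": "Djibouti", "metric_tons": 40_000, "freight_cost": 5_200_000},
    {"warehouse": "Colombo",  "metric_tons": 25_000, "freight_cost": 4_100_000},
    {"warehouse": "Lome",     "metric_tons": 26_000, "freight_cost": 3_700_000},
]

total_cost = sum(s["freight_cost"] for s in second_leg)
total_tons = sum(s["metric_tons"] for s in second_leg)
avg_cost_per_ton = total_cost / total_tons  # weighted by tonnage

print(f"total: ${total_cost:,}; average: ${avg_cost_per_ton:,.0f} per metric ton")
```

With these invented figures the total is $13 million and the average is about $143 per metric ton; the split across warehouses is purely illustrative.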
To examine the extent to which USAID monitors prepositioning to maximize time savings and cost effectiveness, we reviewed USAID documentation related to prior, current, and future efforts to monitor its prepositioning program, specifically efforts to monitor the delivery time frames and program costs. We also interviewed USAID officials in Washington, D.C., to discuss the extent to which the agency has taken steps to monitor the program. We also interviewed WFP, one other cooperating sponsor, and freight forwarders in Washington, D.C. We reviewed evaluations and reports on USAID’s prepositioning, such as a 2008 evaluation of the program and a 2013 USAID Inspector General report on prepositioning. Using USAID criteria, we evaluated USAID’s efforts to monitor its prepositioning program. We also analyzed and assessed the reliability of data on emergency food aid shipments that we collected from cooperating sponsors’ freight forwarders to determine whether these data could be used to monitor delivery time frames of prepositioned food. We found that the data provided by three freight forwarders were not sufficiently reliable for tracking delivery time frames for emergency food aid shipments owing to a number of limitations, including incomplete or missing data and, in one freight forwarder’s data, lack of distinction between emergency and nonemergency shipments. We also reviewed relevant legislation to determine whether there are any statutory requirements to monitor or evaluate USAID’s prepositioning program. We conducted this performance audit from July 2013 to March 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
A variety of factors besides prepositioning and food aid diversions may affect delivery time frames; therefore, we developed a statistical model—a linear regression model—to control for factors that are likely to be associated with delivery time frames, helping to isolate the effect of prepositioning and food aid diversions. We collected shipment data from WFP and six freight forwarders. The data indicate whether the shipment was from prepositioning or was standard and include the date (or the closest approximation) when the cooperating sponsor requested the food, the date of the food’s arrival at the discharge port or in the recipient country, and other shipment characteristics such as the commodity shipped. We assessed the reliability of these data by asking WFP and the six freight forwarders how the data were collected, what quality checks were performed, and whether other internal controls were in place. We determined that the data from WFP and three of the freight forwarders—74 percent of the data we collected—were sufficiently reliable for tracking delivery time frames and comparing standard, prepositioning, and diverted shipments. Table 6 shows the numbers of prepositioned, standard, and diverted shipments represented in the data that we deemed sufficiently reliable for our analysis. We excluded shipments from our analysis that were outside our scope, were duplicated in two freight forwarders’ data, or did not have a discernible delivery time frame. In addition, we found data from the remaining three freight forwarders to be insufficiently reliable for tracking delivery time frames of emergency food aid owing to a number of limitations, including incomplete or missing data and, in one freight forwarder’s data, a lack of distinction between emergency and nonemergency shipments. Table 7 shows the number of shipments that we excluded from our analysis and the reasons for their exclusion. 
We separately analyzed data provided by WFP and the three freight forwarders that we included in our analysis, because the two groups of data differ in the time periods covered and their treatment of food aid diversions. WFP’s data cover shipments in fiscal years 2009 through 2012, while the three freight forwarders’ data cover shipments in fiscal years 2007 through 2012. In addition, WFP’s data do not distinguish between diversions of food aid and other shipments, preventing our estimation of the number of days saved from diversions of food aid using WFP’s data. We also separately analyzed the effect of food aid diversions on delivery time frames. All food aid diversions, including diversions to WFP’s programs, are managed by one freight forwarder. For a comparison group, we used standard shipments included in the three freight forwarders’ data. We did not include WFP’s data in the comparison group because of the differences between WFP’s and the three freight forwarders’ data. GAO analysts collected the dates USAID authorized food aid diversions by reviewing one freight forwarder’s shipment documents. Because of differences between the data sources, we used slightly different start and end dates to estimate delivery time frames. In addition, our results from the linear regression model do not cover 2007 and 2008 for WFP’s shipments or other freight forwarders’ shipments not included in our analysis. Start dates. Although we defined delivery time frame as the number of days between a cooperating sponsor’s request for food and arrival at a discharge port or recipient country, the start dates we collected were the best available approximations of the date of request and differed slightly in data provided by WFP and the freight forwarders. However, USAID and WFP officials stated that the number of days between the start dates and WFP’s food request dates does not differ for prepositioning and standard shipments. 
USAID and freight forwarder officials also stated that the number of days between the freight forwarder start dates and the dates they request the food does not differ for prepositioning and standard shipments. For comparisons of prepositioning and standard shipments, the difference between the start dates we collected and the request dates should not affect our results. However, these differences may affect the average delivery time frames we calculated, which should be considered context for our regression analysis rather than exact estimates. For diversions, we also used the best available approximations of the date of request—usually the date when USAID authorized the diversion. According to USAID officials, the authorization date is generally a few days after the request date. Therefore, for comparisons of diversions and standard shipments, our estimates should be accurate to within a few days. End dates. Data provided by WFP include only the date of the food’s arrival at the discharge port. Data provided by the freight forwarders include, for some freight forwarders, only the date of the food’s arrival at the discharge port for some shipments and, for other freight forwarders, only the date of the food’s arrival in the recipient country when inland transportation is required. Although this affects the average delivery time frames we calculated, those averages should be considered context for our regression analysis rather than exact estimates. Moreover, we found that combining these shipments did not affect our regression estimates of the number of days saved from prepositioning; inland transport from the discharge port to the recipient country is similar for prepositioned and standard shipments. In addition, we controlled for the port of arrival (either a discharge port or in the recipient country) in our analysis. See table 8 for average delivery time frames for shipments with dates of arrival at the discharge port and in the recipient country. 
Treatment of diversions. Data provided by WFP do not distinguish diversions from standard or prepositioning shipments. However, we estimate that this limitation may affect our estimates of number of days saved by prepositioning by only a few days. Nongeneralizable results. Our estimates are not generalizable beyond the time periods covered by the data (fiscal years 2009 through 2012 for data provided by WFP and fiscal years 2007 through 2012 for the data provided by the freight forwarders) or beyond WFP and the three freight forwarders’ data that we included in our analysis. Table 8 shows the average delivery time frames for two groups of emergency food aid shipments. The first group comprises shipments shown in WFP’s data, with delivery time frame defined as the number of days between the date when WFP and USAID signed a grant agreement and the date when the shipment arrived at the discharge port. The second group comprises shipments shown in the freight forwarders’ data, with the delivery time frame defined as the number of days between the date when the cooperating sponsor requested the food and the date when the shipment arrived at the discharge port or recipient country. To isolate the effect of prepositioning on delivery time frames for emergency food aid, we estimated ordinary least squares regression models that control for characteristics of prepositioned and standard shipments for WFP and for the nine other cooperating sponsors represented in the three freight forwarders’ data. Tables 9 and 10, respectively, show the results of our regression analysis of the data collected from WFP and the three freight forwarders, demonstrating the average number of days saved by prepositioning relative to standard shipping delivery time frames. 
Specifically, we estimated Y_i = α + β·Prepo_i + δX_i + ε_i for the second column in each table, where Y_i is the delivery time frame of shipment i, Prepo_i is a dummy for whether shipment i is from prepositioning, and X_i are the control variables listed in column 1. We estimated Y_i = α + β1·DomesticPrepo_i + β2·OverseasPrepo_i + δX_i + ε_i for the third column, where Y_i is the delivery time frame of shipment i, DomesticPrepo_i is a dummy for when shipment i is from domestic prepositioning, OverseasPrepo_i is a dummy for when shipment i is from overseas prepositioning, and X_i are the control variables listed in column 1. We clustered the standard errors by the discharge port or recipient country. For WFP, we included the discharge port, the year of the commodity request, the month of the commodity request, and the commodity type as control variables. For the nine cooperating sponsors, we included the freight forwarder, the cooperating sponsor, the discharge port or recipient countries, the year of the commodity request, the month of the commodity request, and the commodity type as control variables. These variables are known to USAID when cooperating sponsors submit requests for commodities. Controlling for these variables allowed us to compare shipments that fulfilled requests with similar characteristics. We did not control for the shipment’s tonnage, the port of loading, and other variables that USAID determines when it decides whether to fulfill a sponsor’s request with commodities sourced from the standard shipping process or from prepositioned warehouses. To isolate the effect of diverted shipments on the delivery time frame of emergency food aid, we estimated ordinary least squares regression models that control for characteristics of diverted shipments, prepositioning shipments, and standard shipments. Table 11 shows the results of our regression analysis, demonstrating the average number of days saved by diversions relative to standard shipping time frames. 
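As a toy illustration of the first specification above: with a single 0/1 regressor and no controls, the ordinary least squares estimates reduce to group means, so the coefficient on the prepositioning dummy is simply the difference in mean delivery time between the two groups. The pure-Python sketch below uses hypothetical delivery times, not our estimation results:

```python
# (prepo dummy, delivery days); hypothetical observations.
observations = [
    (1, 30), (1, 25), (1, 35),      # prepositioned shipments
    (0, 170), (0, 180), (0, 190),   # standard shipments
]

prepo_days = [d for p, d in observations if p == 1]
standard_days = [d for p, d in observations if p == 0]

# OLS on Y_i = alpha + beta * Prepo_i + e_i with a lone dummy regressor:
alpha = sum(standard_days) / len(standard_days)   # intercept = mean for standard
beta = sum(prepo_days) / len(prepo_days) - alpha  # slope = difference in means

print(f"alpha = {alpha:.0f} days, beta = {beta:.0f} days (days saved if negative)")
```

The models we actually estimated add the control variables and clustered standard errors described above, which require a full regression routine; this sketch conveys only how the dummy coefficient is interpreted.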
Specifically, we estimated Y_i = α + β1·Divert_i + β2·DomesticPrepo_i + β3·OverseasPrepo_i + δX_i + ε_i for the second column, where Y_i is the delivery time frame of shipment i; Divert_i is a dummy for when shipment i is diverted; DomesticPrepo_i is a dummy for when shipment i is from domestic prepositioning; OverseasPrepo_i is a dummy for when shipment i is from overseas prepositioning; and X_i are the control variables listed in column 1. We clustered the standard errors by the discharge port or recipient country. We included the freight forwarder, the cooperating sponsor, the discharge port or recipient countries, the year of the commodity request, the month of the commodity request, and the commodity type. These variables are known to USAID when cooperating sponsors submit requests for commodities. Controlling for these variables allowed us to compare shipments fulfilling requests with the same characteristics. We did not control for the shipment’s tonnage, the port of loading, and other variables that USAID determines when deciding whether to fulfill a sponsor’s request with commodities sourced from the standard shipping process or with food aid diversions. To derive examples of the additional cost of the second leg of ocean shipping for prepositioning, we first estimated the costs of shipping from the United States to the prepositioning warehouse in Djibouti and from the Djibouti warehouse to three discharge ports—Mombasa, Kenya; Dar es Salaam, Tanzania; and Beira, Mozambique—in fiscal year 2012. We then compared those estimates with the estimated cost of shipping from U.S. ports directly to the three ports. (See table 12.) To identify additional commodity costs associated with USAID’s domestic and overseas prepositioning of emergency food aid, we compared the weighted average annual prices paid for six key prepositioned commodities with the weighted average annual prices paid for each of those commodities for standard shipments of emergency food aid in fiscal years 2007 through 2012. 
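A weighted annual average price comparison of the kind described above can be sketched as follows; the purchase records and prices are hypothetical, not the procurement data we analyzed:

```python
def weighted_avg_price(purchases):
    """Average price per metric ton, weighted by tonnage.

    purchases: list of (metric_tons, price_per_ton) for one commodity-year.
    """
    total_tons = sum(tons for tons, _ in purchases)
    return sum(tons * price for tons, price in purchases) / total_tons

# Hypothetical purchases of one commodity in one fiscal year.
prepositioned = [(500, 620.0), (300, 650.0)]
standard = [(2_000, 560.0), (1_500, 590.0)]

wp = weighted_avg_price(prepositioned)
ws = weighted_avg_price(standard)
pct_diff = (wp - ws) / ws * 100  # positive: prepositioning cost more per ton

print(f"prepositioned ${wp:,.2f}/ton vs standard ${ws:,.2f}/ton ({pct_diff:+.1f}%)")
```

Weighting by tonnage keeps a few small, high-priced purchases from dominating the annual average, which matters because prepositioning purchases were far less frequent than standard-shipment purchases.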
We found that the weighted average annual prices were higher relative to standard shipments for domestically prepositioned commodities more often than for overseas prepositioned commodities. As figure 3 shows, for domestically prepositioned commodities, the weighted average annual prices were higher relative to standard-shipment commodities in 24 instances and lower in 8 instances. For overseas prepositioned commodities, the prices relative to standard-shipment commodities were higher in 13 instances and lower in 9 instances. Additionally, the size of the percentage differences between prices for prepositioned commodities relative to standard-shipment commodities was generally larger for domestic prepositioning than for overseas prepositioning. For domestically prepositioned commodities, the difference exceeded 15 percent in 7 instances; for overseas prepositioned commodities, the difference reached 15 percent in only one instance. Weighted annual average prices were consistently higher for two domestically prepositioned commodities, corn-soy blend (2007-2011) and vegetable oil (2007-2012). To further examine price differentials between prepositioned and standard-shipment commodities, we analyzed the number of purchases per year in fiscal years 2007 through 2012. For commodities that the U.S. government purchased infrequently, the differences in the average prices might be a result of monthly commodity price fluctuations. For example, the U.S. government made only one purchase of pinto beans in 2008 and two purchases in 2009 for domestic prepositioning. In comparison, there were 81 purchases of pinto beans in 2008 and 75 in 2009 for standard shipments. Table 13 lists the numbers of purchases for domestic prepositioning and for standard shipment in fiscal years 2007 to 2012. These products can be downloaded from the links shown or can be found on the GAO website at http://www.gao.gov. 
Global Food Security: USAID Is Improving Coordination but Needs to Require Systematic Assessments of Country-Level Risks. GAO-13-809. Washington, D.C.: September 17, 2013. E-supplement GAO-13-815SP.
International Food Assistance: U.S. Nonemergency Food Aid Programs Have Similar Objectives but Some Planning Helps Limit Overlap. GAO-13-141R. Washington, D.C.: December 12, 2012.
International Food Assistance: Improved Targeting Would Help Enable USAID to Reach Vulnerable Groups. GAO-12-862. Washington, D.C.: September 24, 2012.
World Food Program: Stronger Controls Needed in High-Risk Areas. GAO-12-790. Washington, D.C.: September 13, 2012.
Farm Bill: Issues to Consider for Reauthorization. GAO-12-338SP. Washington, D.C.: April 24, 2012.
International Food Assistance: Funding Development Projects through the Purchase, Shipment, and Sale of U.S. Commodities Is Inefficient and Can Cause Adverse Market Impacts. GAO-11-636. Washington, D.C.: June 23, 2011.
International School Feeding: USDA’s Oversight of the McGovern-Dole Food for Education Program Needs Improvement. GAO-11-544. Washington, D.C.: May 19, 2011.
International Food Assistance: Better Nutrition and Quality Control Can Further Improve U.S. Food Aid. GAO-11-491. Washington, D.C.: May 12, 2011.
International Food Assistance: A U.S. Governmentwide Strategy Could Accelerate Progress toward Global Food Security. GAO-10-212T. Washington, D.C.: October 29, 2009.
International Food Assistance: Key Issues for Congressional Oversight. GAO-09-977SP. Washington, D.C.: September 30, 2009.
International Food Assistance: Local and Regional Procurement Can Enhance the Efficiency of U.S. Food Aid, but Challenges May Constrain Its Implementation. GAO-09-570. Washington, D.C.: May 29, 2009.
International Food Security: Insufficient Efforts by Host Governments and Donors Threaten Progress to Halve Hunger in Sub-Saharan Africa by 2015. GAO-08-680. Washington, D.C.: May 29, 2008.
Foreign Assistance: Various Challenges Impede the Efficiency and Effectiveness of U.S. Food Aid. GAO-07-560. Washington, D.C.: April 13, 2007.

In addition to the contact named above, Valérie Nowak (Assistant Director), Farahnaaz Khakoo-Mausel, Teresa Abruzzo Heger, Ming Chen, Fang He, Rhonda Horried, Carol Bray, Martin De Alteriis, Justin Fisher, Mark Dowling, Reid Lowe, Todd Anderson, Sushmita SriKanth, John O’Trakoun, Barbara Shields, Patrick Hickey, Gergana Danailova-Trainor, Gezahegne Bekele, and Etana Finkler made key contributions to this report.

Through Title II of the Food for Peace Act, the United States provides U.S. agricultural commodities to meet emergency food needs in foreign countries. In fiscal years 2007 to 2012, USAID delivered $9.2 billion in emergency food aid to recipient countries through cooperating sponsors. In 2000, Congress authorized USAID to order, transport, and store food for prepositioning in both overseas and domestic locations. Through prepositioning, the agency orders food before it is requested and stores it in warehouses in or near regions with historically high needs. GAO was asked to examine U.S. international food aid procurement. This report examines (1) the effects of prepositioning on emergency food aid delivery time frames, (2) the effects of prepositioning on the costs of the food aid, and (3) the extent to which the agency monitors prepositioning to maximize time savings and cost effectiveness. GAO analyzed data on delivery time frames and costs; reviewed agency documents; and interviewed agency officials and representatives from WFP, other cooperating sponsors, and ocean freight contractors. The U.S. Agency for International Development (USAID) reduces the average delivery time frame for emergency food aid by prepositioning food domestically—that is, in warehouses in the United States—and overseas.
GAO estimates that compared with USAID's standard shipping process, which can take several months, prepositioning food aid shortened delivery time frames by an average of almost a month for shipments to the World Food Program (WFP). GAO also estimates that prepositioning shortened delivery time frames by an average of more than 2 months for other organizations—“cooperating sponsors”—that receive USAID grants. In addition, USAID reduces delivery time frames when it diverts shipments en route to overseas prepositioning warehouses to areas with immediate needs. For all cooperating sponsors, GAO estimates that diversions saved, on average, about 2 months. Prepositioning food can increase the cost of emergency food aid because of additional warehouse, shipping, and commodity costs. For example, in fiscal year 2012, USAID paid approximately $8 million for its overseas and domestic prepositioning warehouses. USAID also paid $13 million to ship food by ocean freight from overseas prepositioning warehouses to recipient countries, in addition to the cost of shipping from the United States to the warehouses. Further, USAID generally paid higher weighted annual average prices for domestically prepositioned commodities than for standard shipment commodities. U.S. officials and vendors noted that factors such as limited commodity supplies and few participating suppliers may have contributed to higher prices. USAID has taken some steps to evaluate prepositioning, but the agency does not collect and analyze data needed to systematically monitor delivery time frames for prepositioned commodities. In addition, some available data are unreliable. Further, USAID does not systematically monitor the total cost of prepositioning. According to USAID policy and federal internal control standards, the agency should monitor its programs by collecting and analyzing data to guide higher-level decision making and allocate resources. 
Without such monitoring, USAID is limited in its ability to assess prepositioning's impact on delivery time frames and costs and to maximize emergency food aid's timeliness and cost effectiveness. GAO recommends that USAID systematically collect, and ensure the reliability of, data for prepositioned food aid and systematically monitor and assess the effectiveness of food aid's delivery time frames and costs. USAID concurred with the recommendations and is working to improve both the collection of reliable data and its monitoring of prepositioning.
As the government’s financial manager, the Department of the Treasury’s Financial Management Service (FMS) establishes and implements collections policies, regulations, standards, and procedures for the federal government. Through its collections program, FMS also provides services to federal agencies to collect, deposit, and account for federal collections. Its collections program provides a means for individuals and organizations, including businesses, state and local governments, and nonprofit organizations, to remit funds such as taxes, duties, fees, sales, leases, and loan repayments to the government. FMS offers a range of fully electronic, partly electronic, and nonelectronic collection methods (see table 1 and fig. 1). The agencies we examined for our case studies use a variety of these collection methods (see table 2). FMS manages the services through which all federal collections are deposited in the Treasury and strives to minimize the time between when funds are collected and when they are deposited in the Treasury. Treasury regulations state that when it is cost effective, practicable, and consistent with current statutory authority, electronic transfers of funds are the optimal method for federal collections, especially when fees are recurring or of large dollar amounts. Accordingly, FMS may require an agency wishing to use a collection method other than electronic transfer to provide a cost-benefit analysis to justify this selection. In past work, we highlighted the benefits of electronic collection processing. Specifically, we reported that electronic collections provide better accuracy, lower mailing and processing costs, and fewer delinquencies and defaults. When the Federal Reserve moved to electronic conversions of paper checks, work hours spent on check processing decreased by almost half and transportation costs associated with check processing decreased by about 11 percent. In recent years FMS has made it a priority to increase the use of electronic collection methods and reduce collection costs.
FMS has begun an initiative, called the Holistic Approach, to further these goals. Selection of the best collection mechanism is a joint responsibility of agencies and FMS. Agencies have responsibility for working with FMS to conduct cash-management reviews, gathering volume and dollar data relative to the operation of the systems, and funding any implementation and operational costs above those normally funded by Treasury. FMS provides guidance to agencies in its Treasury Financial Manual regarding the selection and cost-effective use of collection mechanisms. Agencies must provide FMS with a recommended mechanism for any new or modified cash flows. FMS reviews the recommendations, approves a mechanism, and assists with implementation. FMS’s oversight of federal agencies’ cash-management activities includes review of collections. FMS uses findings from such reviews to develop initiatives to improve an agency’s collections and set and monitor dates by which the agency must implement changes. Moreover, FMS may charge agencies that do not meet the implementation deadlines for the amount of interest savings that would have been realized by timely implementation. According to Treasury regulations, when funds are not collected electronically, agencies generally must deposit funds into the Treasury or a designated depositary on the day of receipt. FMS’s collections program is funded in part by permanent and indefinite appropriations for all financial agent (banking) services required or directed by Treasury. In some cases federal agencies reimburse FMS for a part or all of these costs. For fiscal year 2009, FMS obligated $568 million for banking services, an 8 percent increase over the $528 million obligated in fiscal year 2008. Fiscal year 2009 reimbursements for these banking services were $92 million. Such reimbursements are deposited in the Treasury’s general fund. 
Since 2005, agency use of collection methods has reflected FMS’s increased focus on electronic payments. Fully electronic payments accounted for more than 80 percent of dollars collected by agencies other than IRS for fiscal years 2005 through 2009, with $441 billion of the almost $509 billion collected using fully electronic methods in fiscal year 2009. While the percentage of funds collected through nonelectronic methods in 2009 was low, those collections still totaled over $36 billion, and a low share of dollars does not imply a similarly low share of transactions. For example, in fiscal year 2008, MMS’s Minerals Revenue Management program collected over $23 billion in federal rents and royalties from the public; while payments by check represented only 2 percent of the dollars collected, they represented 77 percent of the total number of its transactions. In fiscal year 2009, of the almost $509 billion in non-IRS collections, about $68 billion was collected as cash or checks and processed using either partly electronic or nonelectronic collection methods. As shown in figure 2, there was a significant shift from nonelectronic to partly electronic methods from 2005 to 2009. In 2005, partly electronic collection methods accounted for just under 2 percent of cash and check collections, but by 2009 this share had increased to over 46 percent. Given the significant process and cost differences between fully electronic and partly electronic collection methods, we distinguish between them in this analysis. The growth in partly electronic payments largely represents a change in agency processes rather than in payer behavior: the shift is largely the result of a growth in electronic check processing capacity both at agencies and through lockbox banks. In fiscal year 2008 almost $24 billion in collections settled as paper checks at lockbox banks, but in fiscal year 2009 less than $2.4 billion was settled this way.
Conversely, collections through electronic check processing were about $27 billion in fiscal year 2008, but were over $31 billion in fiscal year 2009. FMS attributes this growth to a large marketing effort and new electronic check processing locations (according to FMS officials, 209 new locations—185 agency PCC and 24 lockbox ECP locations—were added in fiscal year 2009). FMS plans to discontinue some current electronic collection methods and shift programs that use those methods to Pay.gov—a Web-based portal that processes collections made through ACH transfer and credit card—which will consolidate and simplify the number of collection methods available through FMS. FMS officials told us that they expect ACH preauthorized debits will be phased out and replaced by Pay.gov by the end of calendar year 2009. In addition, FMS plans to shift most of the existing 32 lockbox ACH accounts to Pay.gov or other electronic collection methods, leaving only 2 such accounts in existence at the end of calendar year 2010. There are also plans to shift certain collections from nonelectronic methods to Pay.gov; for example, MMS is currently shifting its rent collections from paper checks to Pay.gov and, according to MMS officials, will continue to expand its use of Pay.gov in 2010. FMS, case-study agencies, and the payer groups we interviewed have identified a variety of cost savings stemming from the use of electronic collection methods. These savings stem from more efficient agency processing, expected lower future costs for system changes, the acceleration of deposits to the Treasury, and a lower administrative burden for payer groups. The shift to electronic methods has also mitigated some security and accuracy risks for agencies. Nevertheless, some organizations identified agency-specific circumstances that make full adoption of electronic collection methods less beneficial.
These circumstances were determined by the characteristics of an agency’s payer base, other agency considerations, and the initial system or equipment costs related to the transition. Officials from all five case-study agencies cited a decrease in current or estimated future costs resulting from the use of electronic collection methods, but none of the agencies could fully quantify these savings. In some cases, agencies increased the efficiency of internal processing operations. For example, officials at USGS said their 2008 move from nonelectronic check processing to partly electronic paper check conversion increased process efficiency. Specifically, according to officials in one USGS office that underwent the transition, the change reduced the time it takes staff to process checks, thus freeing up staff to undertake additional tasks. This office disseminates USGS products such as maps and books and processes checks, credit card payments, and small amounts of cash from its sales. Prior to the transition, the nonelectronic check process required manual preparation of the deposit and a staff member to take the deposit to the bank each day. The new partly electronic process permits a Web-based deposit process and reduces the need to physically go to the bank. In another case, a decrease in agency processing costs has the potential to reduce the size of cost-recovery fees. A NOAA regional permit official said that an upcoming shift from using a lockbox bank to Pay.gov for its fee collections is expected to increase staff efficiency and reduce payment processing costs such as the office’s mailing costs. If these cost savings are realized, the cost reductions may be passed along to the payers of the fee. In addition, fully electronic collection methods may incorporate electronic submission of remittance data, enabling automatic transfers of payment and remittance data to the agency’s accounting system.
For example, to account for ACH transfers after funds are deposited in the Treasury, MMS prints out detailed payment information from FMS, matches payment information to the correct payer, and manually enters the detailed data into MMS’s internal accounting system. MMS officials told us that they are planning to update their accounting software to make this process automatic. They also said they expect that the implementation of Pay.gov will be designed to automate this process and thus reduce costs. In May 2008 USPTO shifted its credit card processing function to Pay.gov both to decrease current costs and avoid future system costs for the agency. Adopting Pay.gov enabled USPTO to discontinue the lease of a line to a credit card authorization provider, a monthly savings of about $1,000. Agency officials also said they expect that costs associated with future agency system changes stemming from a change in the FMS credit card processor will be eliminated because Pay.gov will manage the transition. Potential exists for similar savings at other agencies; according to officials at NPS and USGS, they also incurred costs for system and equipment changes that were required after FMS changed its credit card processor. Some payer groups also cited benefits of a shift towards electronic collection methods. Three of the four payer groups we spoke with reported that the increased use of electronic methods has improved efficiency and saved money for their organizations or members by reducing administrative time, costs, or both. For example, state government representatives agreed that for many organizations, paying by ACH transfer is preferred because paying by check is expensive in terms of the cost and time of printing, mailing, and reconciling payments. More specifically, a representative of the state of Mississippi said that the state requires all vendors, besides federal agencies, to accept payments from the state electronically. 
The specific challenges faced by these payers when using federal agency collection methods are discussed later in this report. As we have previously reported, the acceleration of deposits to the Treasury can reduce the amount Treasury needs to borrow each day to pay government obligations. According to FMS, moving to electronic check processing reduces processing time by 1 day on average, whether done at the agency or at a lockbox bank. Total collection times range from 1 day to 6 days depending on the collection method selected and whether mail time is included (see table 3). In the case of paper check conversion, this improvement brings check processing on par with some fully electronic collection methods such as ACH transfers and credit cards. Although the shift to electronic collection methods has increased efficiencies and decreased costs, agencies are generally unable to consider the full range of the federal government’s expenses—specifically FMS’s total collection costs—when analyzing program costs and setting fee rates. This is because FMS generally does not provide these cost data to agencies, although according to officials, it could. FMS officials stated that FMS does not track cost of collection information by agency, but instead by collection method and bank. FMS has this cost information and could provide it, but based on past practices, it generally has not. As we have previously reported, reliable information on the costs of federal programs and activities is crucial for effective management of government operations. Currently, agencies make decisions about collection methods without the benefit of this information. Moreover, agencies could determine whether such costs should be considered in the design and level of full-cost recovery fees.
To the extent such cost data are provided, agencies that are authorized to charge full-cost recovery fees—for example, fees charged under the Independent Offices Appropriation Act of 1952 (IOAA)—could, in some cases, include FMS’s cost of collections in their fee rates and deposit these funds into the Treasury. Officials from all five case-study agencies and FMS stated that use of electronic collection methods has reduced the risk of either security problems or processing errors. Four of the five case-study agencies stated that electronic collection methods alleviated security concerns for staff members, reduced the risk of theft, or both. As noted above, prior to the move to paper check conversion, USGS staff drove deposits to a local bank, exposing staff to risk of theft or injury. USGS officials said that the move to paper check conversion reduced these safety concerns. Alternatively, some agencies reduced concerns about staff safety by making bank deposits by means of a courier service. However, using couriers poses other security risks and adds costs to the collections process. For example, MMS officials said that courier costs at its Denver office are approximately $80,000 a year for two daily mail deliveries and one daily bank deposit. NPS guidance also deals with security concerns pertaining to cash collections and the use of personnel to collect fees. For example, this guidance notes that parks’ implementation of a type of ACH transfer that is primarily used for commercial tour groups reduces the amount of cash handled by staff and therefore improves security. In addition, officials at the Rocky Mountain National Park said that permitting unattended entrance stations to accept only credit cards, rather than both cash and credit cards, reduced collection costs and made the machines less vulnerable to attempted theft.
Officials from three of our case-study agencies stated that the electronic collection or provision of payment data has lowered the risk of processing errors, reducing repeated work and lost time and effort. USGS staff said that using paper check conversion has reduced the errors in the process because the system confirms deposit totals before completing a deposit. Similarly, MMS officials reported a decrease in administrative errors stemming from the use of Pay.gov for a selection of its fees. USPTO’s adoption of an automated method for uploading remittance information for organizations making multiple payments also reduced errors. In some cases, USPTO may receive several thousand patent renewals in one submission from a single company that manages these renewals for multiple customers. In the past, these renewals would be sent to a lockbox bank and remittance information would be entered manually. This new system allows maintenance fee payment information to be submitted on a single compact disc and the data file on the disc to be uploaded directly into USPTO’s internal accounting system. An agency’s payer base characteristics and other issues can influence collection method selection, sometimes causing agencies to leave nonelectronic collection methods in place and incur costs associated with maintaining separate collection methods. In its work with agencies, FMS recognizes the importance of considering an agency’s payer base and whether those payers are likely to accept electronic collection methods. All five of our case-study agencies stated that payer characteristics and customer needs affected the selection of collection methods. In some cases, customer preferences as well as customer access to banks and online banking systems have influenced the different payment methods that agencies offer. MMS said that its customers, while typically large companies, also include smaller operations that may not be amenable to electronic collections. 
When MMS mandated electronic submission of royalty collections, it granted waivers for those that appealed the requirement—approximately 100 of the 2,100 entities for which it processes royalty revenues. Although a small percentage, such accommodation requires the agency to manage a waiver system and to maintain a nonelectronic or partly electronic collection method alongside the new fully electronic collection method. In the case of MMS, the agency maintained its nonelectronic check processing for collections other than royalties, so the nonelectronic processes were not solely maintained for those customers with waivers. A representative of a commercial tour operator industry group stated that smaller operators in his membership would prefer the maintenance of a more flexible method of payment for NPS entrance fees than that preferred by larger operators. NPS is considering adopting Pay.gov to allow customers to use a Web portal to make online purchases. The tour representative recommended that NPS continue to accept credit cards from commercial operators at its entrances in order to accommodate smaller operators as the agency shifts to Pay.gov. While credit cards are an electronic collection method, the desire to maintain this alternative payment option in order to better serve its customers imposes costs on the agency, such as the purchase of necessary equipment. Other agency considerations also influence the selection and adoption of collection methods. MMS officials told us that they decided to pursue Pay.gov for MMS’s rent collections rather than selecting a commercial lockbox because a lockbox service would not meet their internal process needs. Although the shift to Pay.gov is underway, officials acknowledge there will always be some paper check collections to manage. In another case, NPS officials told us that the services provided by those performing collection activities go beyond the collection itself.
They explained that they do not wish to remove all representatives from entrance fee collection stations—even if it were possible—because fee collectors are often “ambassadors” for the park and provide an important public service. Nonetheless, this approach requires NPS to incur some labor costs related to collections. For two case-study agencies, the setup costs or required system changes necessary to implement electronic collection methods affected the ability or the extent to which the agency or program office could adopt or maintain electronic collection methods. In 2002 and 2003, one NOAA regional permit office used Pay.gov as a payment method. The office discontinued the use of Pay.gov because of changes to the office’s database system and lack of the information technology support required to establish the new system’s connection to Pay.gov. Nonetheless, a current project aims to reestablish the link to Pay.gov. NPS reports that at some remote park locations the telephone and Internet access necessary to support electronic collection methods may be difficult to establish. Even in less remote areas, geographic dispersion may create challenges, such as the need to maintain and purchase equipment for multiple collection or processing points within one park. For example, Rocky Mountain National Park, which covers 265,800 acres and has four entrances, maintains two remittance offices to prepare collections for deposit because the road crossing the park also crosses the continental divide and is impassable during winter months. Therefore, collections cannot be transported from one side of the park to the other and must be prepared for deposit separately. Finally, meeting the varying system requirements for some electronic collection methods can also be a barrier for customer use.
Officials from three state governments that submit payments to USGS told us that, while they prefer electronic collection methods such as ACH transfers, the differing technical requirements and data formats at each federal agency can make such transfers a burdensome, manual process. Our prior work has found a similar lack of standardization across federal grant-making agencies. For example, the use of multiple payment systems has resulted in an excessive administrative burden for grantees. FMS officials stated that limited funding is often cited as a barrier to agency adoption of new collection methods. As the financial manager and principal fiscal agent for the federal government, FMS is responsible for planning and managing federal collections and has oversight responsibilities to ensure that agencies are making adequate progress in improving their cash-management practices. Consistent with these responsibilities, FMS developed a 5-year “Holistic Approach” plan, which establishes a framework for increasing the use of electronic collection mechanisms governmentwide; streamlining the collections process; offering collection mechanisms that are easy to use, convenient, and secure; managing depositary services that banks provide to federal agencies; and providing timely collection of federal receipts. To implement this plan, FMS ranked agencies for review based largely on their dollar volume of collections using nonelectronic methods. It plans to review the collections of and draft Strategic Cash Management Agreements with 112 agencies spread about evenly over fiscal years 2009 through 2013. During the development of the agreements, FMS reviews all agency collections, including those already using electronic methods, to see if a more efficient method could be adopted. According to FMS officials and as described above, the payer and type of collection are key factors in choosing a collection method. 
Each interagency Strategic Cash Management Agreement will outline the methods currently used by the agency for each of its collections; recommend electronic collection mechanisms; set conversion timelines, agencywide goals, and metrics; and estimate savings from conversion. The agreements commit both FMS and the agencies to implement improvements to agencies’ overall cash-management practices. As laid out in its Holistic Approach plan, FMS estimates savings of conversion from nonelectronic to electronic collection methods as part of each agency review. FMS estimates that average per-transaction collection costs are $1.679 for nonelectronic methods and $0.897 for electronic methods. Using these figures, FMS estimates that shifting to electronic collections will save an average of $0.78 per transaction. However, there are significant differences in cost per transaction between different electronic collection methods, which may be obscured by using only these two broad categories to estimate savings. For example, FMS estimates that, on average, ACH transfers processed through Pay.gov cost $0.64 per transaction, credit card transactions cost $1.30 per transaction, and lockbox ECP costs $0.58 per transaction. Although the cost of using different collection methods also can vary by the dollar amount of the transaction, FMS officials said they do not use volume threshold guidelines when working with agencies to select payment methods. Specifically, credit card merchant fees are, in part, a percentage of the value of the transaction. Though it may be less costly to process a small collection by credit card than by check, the reverse may be the case for large-dollar-value collections. We recently reported that the average merchant discount rate FMS paid in fiscal year 2007 was 1.43 percent. In a hypothetical example using this rate, the total fees for a $100 transaction would be $1.43 and the fees for a $10,000 transaction would be $143.
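The break-even arithmetic in the hypothetical example above can be sketched numerically. The 1.43 percent rate and the $1.679 nonelectronic cost are the figures cited in the text; the function names and the comparison logic below are an illustrative sketch, not FMS methodology.

```python
# Figures cited in the text: the average merchant discount rate FMS paid
# in fiscal year 2007, and FMS's estimated average cost of a nonelectronic
# (check) collection. The helper functions are illustrative only.
MERCHANT_DISCOUNT_RATE = 0.0143
NONELECTRONIC_CHECK_COST = 1.679

def credit_card_fee(amount: float) -> float:
    """Merchant fee, charged as a percentage of the transaction value."""
    return amount * MERCHANT_DISCOUNT_RATE

def card_cheaper_than_check(amount: float) -> bool:
    """Is a credit card collection cheaper than a nonelectronic check?"""
    return credit_card_fee(amount) < NONELECTRONIC_CHECK_COST

print(round(credit_card_fee(100), 2))    # 1.43
print(round(credit_card_fee(10_000), 2)) # 143.0
print(card_cheaper_than_check(100))      # True
print(card_cheaper_than_check(10_000))   # False
```

Because the card fee scales with the amount while the check cost is roughly flat, the break-even point under these figures is about $117; above that, checks are cheaper to process than cards.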
FMS limits individual credit card transactions to under $100,000 in order to limit merchant fees. FMS’s Holistic Approach plan does not provide for all available incentives to encourage agencies to increase the use of more cost-effective collection methods. By using FMS services, agencies can reduce their own costs of collection. Specifically, as noted earlier, agencies that collect credit card payments through Pay.gov would be able to avoid the costs of future system changes because with Pay.gov those changes would be handled by FMS. The Holistic Approach plan includes a provision for an inefficiency charge of $1.00 per transaction for any collection not converted by the deadline as outlined in the agreement. Although FMS has the authority to assess inefficiency charges against agencies regardless of an agreement, according to FMS officials, the charge is only assessed on agencies that voluntarily sign an agreement and then only if the agency misses agreed-upon and likely flexible deadlines. Furthermore, FMS officials said they are not using the Holistic Approach to review whether agencies are paying for certain collection services as required. FMS guidance—Treasury Bulletin 94-07—requires that agencies reimburse FMS for ancillary services and for standard lockbox collection services unless lockbox processing provides a net financial benefit to the Treasury’s general fund. In one case, we found that charges for a NOAA lockbox were initiated by FMS and then halted for an unknown reason. In the past, NOAA reimbursed FMS for services on one of its lockboxes. However, when the lockbox was moved to a different bank in 2005, the reimbursement charge was discontinued. FMS officials do not have any records of why they ceased to charge NOAA; NOAA officials were also not aware of the reason why the charges were discontinued.
By not reviewing agency responsibility to pay for collection services, FMS does not make use of an available incentive for agencies to move to more efficient collection methods. FMS officials stated that as part of the review, FMS negotiates the collections proposals with the agency, but that agencies may have other, higher priorities than making the investments needed for a change. If FMS also reviewed whether the agency should reimburse FMS for the collection services and charged the agency based on the findings of that review, the agency would have a financial incentive to adopt the more cost-effective method. The Holistic Approach plan does not include a strategy for communicating lessons-learned from earlier reviews to agencies. Without such information, agencies not scheduled for review until later years might not have an opportunity to correct common problems. As part of the reviews, FMS assesses whether the agencies use unauthorized or inefficient cash-management practices. In its reviews to date, FMS found that two agencies were holding funds outside of TGA banks and is aware of some inefficient collection practices that do not rise to the level of being unauthorized cash-management practices. In our case-study reviews, we also found examples of inefficient collection practices, including the following: On a few occasions, USPTO accepted credit cards as payment for multiple, individual sale transactions processed on the same day to the same credit card account number. This resulted in the sum of the payments to the same credit card account number being at or near the dollar limit for credit card transactions. Because credit card fees are based in part on a percentage of the dollar amount charged, the cost of processing large collections by credit card may be more than by other methods. NOAA maintains some lockbox accounts with volumes below what FMS officials told us was their rule of thumb for lockbox services. 
FMS does not have an official volume threshold for lockboxes, but officials stated that processing checks through a lockbox might be a good choice for a program collecting a minimum of 1,000 checks and $1 million per month. Of the eight NOAA lockboxes, all had calendar year 2008 collections volume that averaged less than 1,000 checks per month, and only one processed over $1 million per month on average; one lockbox processed 208 checks totaling less than $5,000 that year. Check volume affects the per-transaction cost of lockbox services because some lockbox bank charges are fixed (e.g., monthly account maintenance charges). As a standard practice for one NOAA regional permit office, payers mailed checks to the office, and NOAA then mailed the checks to the lockbox bank for processing on a weekly basis. This practice delays deposit of the collections. Agencies not scheduled for review until later years also may lack information on how to overcome barriers to the use of electronic collection methods or invest in more cost-effective collection methods. Such barriers include agency regulations that define the methods an agency uses to make collections and payers that are less amenable to electronic collection methods. With better information about the capabilities and benefits of the various collection methods, agencies could in turn communicate that information to their payer groups. According to payer groups we spoke with, their members would be more likely to adopt electronic collection methods if agencies encouraged them. According to FMS, existing agency collection contracts and systems can be a barrier to adoption of more efficient collection methods. Some FMS collections guidance is outdated, but the Holistic Approach plan does not include a strategy for updating guidance based on lessons-learned from the agency reviews. 
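The fixed-cost effect on lockbox economics described above can be sketched numerically. The monthly maintenance fee and per-check fee below are hypothetical (actual bank charges are not published in this report); the volumes echo the FMS rule of thumb and the smallest NOAA lockbox cited above.

```python
# Hypothetical fixed monthly lockbox charge (illustrative only; actual
# bank charges were not reported) and hypothetical variable per-check fee.
FIXED_MONTHLY_FEE = 500.00  # dollars per month
PER_CHECK_FEE = 0.25        # dollars per check

def per_check_cost(checks_per_month):
    """Effective per-transaction cost once fixed charges are spread
    across the month's check volume."""
    return PER_CHECK_FEE + FIXED_MONTHLY_FEE / checks_per_month

# At the rule-of-thumb volume of 1,000 checks per month, the fixed
# charge adds only $0.50 per check.
print(f"1,000 checks/month: ${per_check_cost(1_000):.2f} per check")
# At roughly 17 checks per month (208 checks per year, as with the
# smallest NOAA lockbox), the same fixed charge dominates the cost.
print(f"17 checks/month:    ${per_check_cost(17):.2f} per check")
```

Whatever the actual fee schedule, the shape of the result is the same: fixed charges make low-volume lockboxes far more expensive per transaction than high-volume ones.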
The primary guidance to agencies on the various options for collection methods—FMS's Cash Management Made Easy—was last updated in April 2002 and includes outdated descriptions of some methods. FMS's guidance for the situations in which agencies must reimburse FMS for costs of lockbox services—Treasury Bulletin 94-07—is, in some respects, also outdated. It bases the responsibility to pay for the costs of standard lockbox services on the net benefit to the Treasury's general account of accelerated deposits. However, since the time the bulletin was issued, electronic processing of checks has become an option for agencies. In some cases, this new option may be a more cost-effective choice. FMS has made important progress helping the federal government improve the efficiency and effectiveness of non-IRS collections; in fiscal year 2009, almost 87 percent of these funds were collected using fully electronic methods. As work continues to overcome barriers to electronic collection methods, several benefits continue to accrue: the cost of government borrowing is decreased as the time to process collections is reduced; agencies, customers, and FMS may enjoy lower costs of collection; and the security of collections and staff is improved. Reliable information on the costs of federal programs and activities is crucial for effective management of government operations. However, it can be difficult for agencies to effectively manage their programs or make informed choices among collection options because FMS generally does not provide agencies data or information on FMS's costs of collections for a given agency, program, or collection method. FMS has cost information by collection method but generally does not provide it to agencies. While FMS officials said that they could and have provided cost information upon request, we believe provision of such data should not be ad hoc. Rather, data should be distributed systematically to facilitate agency program management. 
The lack of information on FMS's costs of collections means that agencies do not have complete information for analysis of their fee structure and level. For some full-cost-recovery fees, this means that the federal government may be inappropriately forgoing revenues. FMS's ongoing initiative to analyze and review the collection activity in each agency through implementation of its Holistic Approach plan facilitates growth in the use of electronic collection methods. However, two aspects of the approach may lead to decisions that are not the most cost-effective. First, FMS groups all fully and partially electronic methods together when developing an estimate of the cost savings of shifts from nonelectronic to other collection methods. Second, FMS does not make use of all available financial incentives—including enforcing its own guidance by requiring agencies to reimburse it for certain collection services. Finally, some inefficient agency collection practices may persist longer than necessary because FMS's Holistic Approach plan does not include either a strategy to communicate key lessons-learned from early agency reviews to other agencies whose reviews are scheduled for future years or a way to use the information to update FMS guidance to agencies. Interim updates of collections guidance and regulations could allow agencies to benefit from key lessons-learned during FMS reviews. We are making five recommendations in this report. To strengthen oversight of the costs of collecting federal fees and other receipts, we recommend that the Secretary of the Treasury direct the FMS Commissioner to take the following four actions: (1) Provide each agency with information on FMS's annual costs of processing related to that agency's collections by, for example, providing information on the agency's total collections, by collection method, and FMS's costs by collection method. 
(2) Revise FMS's Holistic Approach plan to assure that the reviews of agency collections will consider the differences in costs of the various electronic collection methods. (3) Enforce FMS guidance by a. specifying criteria for determining whether FMS collection services are either ancillary or are lockbox services not providing a net benefit to the Treasury's general account and so should be reimbursed by the agency; and b. using these criteria during Holistic Approach plan reviews to determine whether each agency should reimburse FMS for services and document that decision. (4) Establish a process for updating collections guidance and regulations based on key lessons-learned from its reviews and communicating that information to all agencies so that agencies whose review is scheduled for later years can begin to implement changes. We recommend that the Secretaries of the Interior and Commerce include FMS's costs of collection, as available, in analyzing MMS, NPS, USGS, USPTO, and NOAA programs and, as appropriate, the design and level of user fees. We provided a draft of this report to the Secretaries of the Treasury, the Interior, and Commerce for review. We received written comments from the Department of the Treasury's Financial Management Service, the Department of the Interior, and the Department of Commerce, which are reprinted in appendixes VII, VIII, and IX, respectively. In addition, Treasury provided technical comments, which we incorporated as appropriate. We also provided portions of the report to nonfederal stakeholders for their review and made technical corrections as appropriate. FMS agreed with our recommendations and stated that it will develop an action plan to address each recommendation. Initially, the draft's third recommendation said that, while implementing the Holistic Approach reviews, FMS should specify criteria for determining whether its collection services are ancillary and should therefore be reimbursed by the agency. 
FMS commented that it is working on establishing such criteria in an initiative separate from the Holistic Approach. In describing this initiative to us, FMS officials explained that by separating the two initiatives they expect to be able to review and update the policy on reimbursement more quickly. They expect to complete the review of the reimbursement policy by April 2010. The officials also noted that enforcement of the policy may need to be phased in over time, as agencies will need to ensure that their appropriations allow reimbursement to FMS for certain collection services. In response to these comments, we revised the recommendation to clarify that the reimbursement criteria need not be developed as part of the Holistic Approach plan, only that, to consistently enforce FMS policies, the criteria should be applied as part of the reviews. The Department of the Interior concurred with our findings and recommendation and stated that the report accurately depicts its efforts to implement electronic collection methods. The Department of Commerce also agreed with our recommendation and expressed its commitment to implementing the recommendation and to working with Treasury to improve the efficiency and effectiveness of collection processes. It also provided technical comments specifically with regard to the USPTO, which we incorporated as appropriate. We are sending copies of this report to the Honorable Timothy F. Geithner, Secretary of the Treasury; the Honorable Ken Salazar, Secretary of the Interior; and the Honorable Gary Locke, Secretary of Commerce. This report is also available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have any questions about this report, please contact me on (202) 512-6806 or irvings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in Appendix X. 
To analyze opportunities to improve the efficiency of federal collections governmentwide, we examined (1) the extent to which agencies other than the Internal Revenue Service (IRS) use collection methods in the Financial Management Service's (FMS) collections program, (2) how FMS and these agencies can maximize the benefits of and overcome barriers to use of the various collection methods, and (3) issues FMS should consider as it implements its plans for improving the efficiency and security of these collections. To assess the extent to which agencies use various collection methods, we analyzed data on governmentwide collections by collection method from FMS's Total Collections Report for fiscal years 2005 through 2009. We worked with FMS to group collection methods consistently over time and to categorize them as fully electronic, partly electronic, and nonelectronic. In order to focus our analysis on nontax collections, we excluded collections FMS identified as IRS collections. However, according to FMS, some IRS collections may nonetheless be included in the remaining data. We also excluded collections of the Commodity Credit Corporation (CCC)—a federal corporation within the U.S. Department of Agriculture—because, according to FMS officials, CCC does not use the FMS collections program in the same way as other agencies; CCC uses its own network of banks to process collections. We reviewed FMS guidance and interviewed FMS officials to gather operational information on each collection method and understand why use of the various collection methods changed over time. 
To analyze ways FMS and selected agencies can maximize the benefits of and overcome barriers to the use of collection methods, we conducted case-study reviews of five agencies: the Minerals Management Service (MMS), the National Park Service (NPS), the United States Geological Survey (USGS), the United States Patent and Trademark Office (USPTO), and the National Oceanic and Atmospheric Administration (NOAA). We selected this set of case-study agencies to cover the use of a variety of collection methods, a variety of payer and payment characteristics, programs with significant collection totals, representation of at least two departments, potential for improved efficiency, and instances of a recent change in collection method. These case studies are not intended to be representative, and therefore the information gleaned from them cannot be generalized across the government. We used fiscal year 2006 OMB data on fee collections and fiscal year 2008 FMS data on collections by method to identify the case-study agencies. For each case-study agency, we analyzed collections data, interviewed agency officials, and reviewed relevant legislation, regulations, agency guidance, and audit reports. We performed site visits at NPS's Rocky Mountain National Park in Estes Park, Colorado, and at USGS's Central Region Geospatial Information Office and MMS's Minerals Revenue Management office, both in Denver, Colorado. At each site-visit location, we observed collection processes and interviewed agency officials. We also observed the lockbox process and interviewed bank officials at a U.S. Bank lockbox location in St. Louis, Missouri, that provides lockbox services to NOAA and USPTO. 
We also interviewed representatives or members from four payer organizations—the American Petroleum Institute, the National Tour Association, Computer Packages Inc., and the National Association of State Auditors, Comptrollers and Treasurers—to gain an understanding of the effects of the shift to electronic collections on payers. We selected these organizations from stakeholder or payer organizations suggested by each case-study agency. We also reviewed FMS regulations and guidance, analyzed FMS data related to the costs and benefits of the various collection methods, and interviewed FMS officials. To identify the issues FMS should consider in implementing its plans for improving the efficiency and security of collections, we reviewed relevant legislation and FMS plans and agency agreements, applied relevant findings from our case studies, and interviewed FMS and case-study agency officials. We assessed the reliability of the data we used for this review and determined that it was sufficiently reliable for our purposes. We conducted this performance audit from October 2008 through November 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Minerals Management Service (MMS), a bureau in the Department of the Interior, manages the nation’s natural gas, oil, and other mineral resources on the outer continental shelf. The agency also collects, accounts for, and disburses more than $8 billion per year in revenues from federal offshore mineral leases and from onshore mineral leases on federal and Indian lands. 
MMS’s Minerals Revenue Management collects rents, royalties, and proceeds from lease sales by means of automated clearing house (ACH) transfers, wire transfers, and Federal Reserve Bank deposits. Although most of the payments MMS receives are transmitted electronically, as of October 2008 MMS still received nearly 50,000 checks per year. In fiscal year 2008, rents and royalties totaled $13.4 billion and lease sales totaled $10.2 billion. Royalty payments are typically high-dollar payments, while onshore and offshore rent payments can range from a few hundred to several thousand dollars. MMS’s Offshore Energy and Minerals Management program also collects fees for cost-recovery services and payments, public-information service fees, and linear rental fees, totaling $11.2 million in fiscal year 2008. These collections are solely made through Pay.gov. MMS fee-paying customers are often large companies, but also include smaller organizations and individuals. The National Park Service (NPS) is a bureau of the Department of the Interior. The national park system is comprised of 391 areas covering more than 84 million acres. These areas include national parks, monuments, battlefields, military parks, historical parks, historic sites, lakeshores, seashores, recreation areas, scenic rivers and trails, and the White House. NPS collects funds from the public for recreation fees, special park use permits and fees, transportation fees, and other collections such as payments from concessionaires and commercial use fees and other forms of debt collection. Recreation fees include park entrance fees and special recreation permit fees. Recreation fees range from $5 to $25, or more for large group purchases. Special use fees are charged for the use of park lands or facilities for activities that occur in a park and provide benefit to an individual, group, or organization rather than the public at large. 
These administrative fees are intended to recover full costs and are calculated on a case-by-case basis, but range from $50 to $50,000. Transportation fees are collected to recover costs associated with an NPS-provided transportation system. In fiscal year 2008, NPS collected approximately $179 million in recreation fees through credit cards, automated clearing house (ACH) transfers, Federal Reserve Bank (FRB) deposits, Treasury General Account (TGA) deposits, and Pay.gov. Fiscal year 2008 special park use fees totaled $12,938,317 and were collected predominantly through TGA deposits, with some credit card collections. Transportation fees totaled $13,883,451 in fiscal year 2008. NPS fee-paying customers include individuals, tour operators, concessionaires, and commercial operators. The U.S. Geological Survey (USGS), a Department of the Interior agency, is the nation's largest water, earth, and biological science and civilian mapping agency. USGS collects, monitors, analyzes, and provides scientific understanding about natural resource conditions, issues, and problems. USGS collects funds for product sales (mostly relating to mapping products), cooperative water program agreements, and fully reimbursable programs. Under the cooperative water program, cooperators partner with USGS and reimburse USGS for a portion of the costs of specific USGS data-collection activities, investigations, or studies. Payments under these agreements vary, with 40 percent of agreements yielding $25,000 per year or less and the largest agreement falling between $2 million and $3 million. These funds make up the majority of USGS collections and are mostly collected by lockbox electronic check processing (ECP). Other collections, including those for product sales and reimbursable agreements, are made via agency paper check conversion (PCC), Treasury General Account (TGA) deposits, International Treasury General Account (ITGA) deposits, credit cards, and wire transfers. 
Fiscal year 2008 collections totaled $217 million, with the majority of this amount ($215 million) stemming from reimbursables. Reimbursables include reimbursements from nonfederal sources such as states, tribes, and municipalities for cooperative efforts and proceeds from the sale of photographs and record copies; reimbursements for permits and licenses of the Federal Energy Regulatory Commission; and reimbursements from foreign countries and international organizations for technical assistance. The U.S. Patent and Trademark Office (USPTO) is an agency of the Department of Commerce. The agency's main functions are the examination and issuance of patents and the examination and registration of trademarks. USPTO receives its funding through fees that are paid to obtain and renew patents and trademarks. Patent fees, for activities such as application filing, maintenance, and patent extensions, totaled $1.6 billion in fiscal year 2008. These fees can range from $3 for a patent copy to over $8,000 for other services such as inter partes reexamination. Patent fee payments are accepted through credit cards; Pay.gov, accepting credit cards or automated clearing house (ACH) transfers; lockbox electronic check processing (ECP); and Treasury General Account (TGA) deposits. Trademark fees, charged for services such as trademark processing, totaled $236 million in fiscal year 2008. Trademark fees range from $3 to $400 and are paid through credit cards; Pay.gov, accepting credit cards or ACH transfers; and TGA deposits. USPTO also accepts replenishments to deposit accounts through wire transfers, lockbox ECP, and TGA deposits. Payments for some fees from foreign sources are sent through wire transfers. Payers for patent and trademark fees are individuals, attorneys, law firms, small businesses, nonprofits, and large corporations. Patent fees are also paid by annuity companies. 
The National Oceanic and Atmospheric Administration (NOAA) is a science-based federal agency within the Department of Commerce with regulatory, operational, and information-service responsibilities. NOAA's mission is to understand and predict changes in the earth's environment and to conserve, protect, and manage coastal, marine, and Great Lakes resources to meet our nation's economic, social, and environmental needs. NOAA offices include the National Environmental Satellite, Data, and Information Service (NESDIS), the National Marine Fisheries Service (NMFS), the National Ocean Service, the National Weather Service, the Office of Marine and Aviation Operations, the Office of Oceanic and Atmospheric Research, and the Office of Program Planning and Integration. NOAA offices receive collections for various programs using a wide range of collection methods. In fiscal year 2008, NESDIS received $1.7 million in revenue for its sales of data. These collections were made through a combination of lockbox electronic check processing (ECP), Pay.gov, lockbox general, and Treasury General Account (TGA) deposits. Pay.gov processed the largest amount of these collections, at $1.2 million. NMFS receives fees for a variety of permits, penalties, and inspections, totaling over $48 million in fiscal year 2008. Seafood inspection fees, for example, averaged over $23,000 per fee collected and were collected by means of wire transfers, Federal Reserve Bank deposits, lockbox ECP, Pay.gov, lockbox general, and TGA deposits. Other NMFS collections averaged from $26 for tuna permits to over $6,000 for civil monetary penalties. Other NOAA collections include payments to the Damage Assessment and Restoration Revolving Fund, loan and buy back payments, and other reimbursables. Payer groups for NOAA fees vary, but include private and corporate customers for NESDIS data sales as well as individual fishermen and fishing companies for NMFS permit fees. 
In addition to the individual listed above, Carol Henn, Assistant Director; Mallory Bulman; Susan Etzel; Katherine Hamer; and Elizabeth Hosler made significant contributions to this report. Charles Fox, Cody Goebel, Christine Houle, Paul Kinney, Felicia Lopez, Julie Matta, Anu Mittal, Donna Miller, Robin Nazzaro, Jacqueline M. Nowicki, Melanie Papasian, Sheila Rajabiun, Thomas Short, and Jay Smale also made key contributions to this report.

The Department of the Treasury's Financial Management Service (FMS) collections program provides services to agencies to collect, deposit, and account for collections through a variety of methods. Electronic collection methods can reduce government borrowing costs and agency administrative costs, while improving compliance and security. The Government Accountability Office (GAO) was asked to identify (1) the extent to which agencies other than the Internal Revenue Service (IRS) use various collection methods, (2) ways to maximize the benefits of and overcome any barriers to agency use of the various collection methods, and (3) issues that FMS should consider in its plans to improve the efficiency and security of collections. GAO analyzed collections data, plans, and documents from FMS and five case-study agencies in the Departments of the Interior and Commerce that use a variety of collection methods, observed fee collection methods, and interviewed FMS and case-study agency officials. GAO also interviewed selected payer groups for case-study agencies. Over the past 5 years, more than 80 percent of funds collected by agencies other than the Internal Revenue Service (IRS) were collected using fully electronic methods, including wire transfers and credit cards. As shown in the figure below, from fiscal year 2005 through 2009 there was a significant shift from nonelectronic collection methods to partly electronic methods. This shift was largely a result of a growth in electronic check-processing capacity. 
Moving to electronic collection methods can reduce costs and mitigate risks, such as theft, but the specific circumstances of individual agencies and payers have affected agencies' ability to fully adopt these methods. Use of electronic methods can result in cost savings, increased processing speed and accuracy, and improved security of staff and deposits. Specifically, FMS reports that on average the government saves 78 cents for each electronic transaction. Additionally, case-study agencies and payer groups GAO spoke with reported reduced costs when using electronic collection methods. Despite the advantages, payer characteristics, other agency considerations, and set-up costs or required system changes have limited agencies' adoption of electronic collection methods. Also, agencies may not have enough information to make cost-effective decisions about their choice of collection method. FMS is implementing a plan to improve the efficiency and effectiveness of federal collections, but the plan excludes important cost considerations and does not use all available incentives. Specifically, the plan does not consider the cost differences among different electronic methods or ensure the consistent application of policies on reimbursement for certain services. The FMS plan also does not include a strategy for incorporating key lessons-learned from agency reviews into its guidance and communicating that information to agencies. With such information, agencies not scheduled for review until later years could begin to transition to more efficient methods. |
On October 25, 1995, Americans were reminded of the dangers that drivers and passengers often face when they travel over railroad crossings in the United States. On that day, in Fox River Grove, Illinois, seven high school students were killed when a commuter train hit a school bus. The potential for tragedies like the one at Fox River Grove is significant—the United States has over 168,000 public highway-railroad intersections. The types of warning for motorists at these crossings range from no visible devices to active devices, such as lights and gates. About 60 percent of all public crossings in the United States have only passive warning devices—typically, highway signs known as crossbucks. In 1994, this exposure resulted in motor vehicle accidents at crossings that killed 501 people and injured 1,764 others. Many of these deaths could have been avoided, since nearly one-half occurred at crossings where flashing lights and descended gates had warned motorists of the approaching danger. In August 1995, we issued a comprehensive report on safety at railroad crossings. We reported that the federal investment in improving railroad crossing safety had noticeably reduced the number of deaths and injuries. Since the Rail-Highway Crossing Program—also known as the section 130 program—was established in 1974, the federal government has distributed about $5.5 billion (in 1996 constant dollars) to the states for railroad crossing improvements. This two-decade investment, combined with a reduction in the total number of crossings since 1974, has significantly lowered the accident and fatality rates—by 61 percent and 34 percent, respectively. However, most of this progress occurred during the first decade, and since 1985, the number of deaths has fluctuated between 466 and 682 each year (see app. 1). Since 1977, federal funding for railroad crossing improvements has also declined in real terms. 
Consequently, the question for future railroad crossing safety initiatives will be how best to target available resources to the most cost-effective approaches. Our report discussed several strategies for targeting limited resources to address railroad crossing safety problems. The first strategy is to review DOT’s current method of apportioning section 130 funds to the states. Our analysis of the 1995 section 130 apportionments found anomalies among the states in terms of how much funding they received in proportion to three key risk factors: accidents, fatalities, and total crossings. For example, California received 6.9 percent of the section 130 funds in 1995, but it had only 4.8 percent of the nation’s railroad crossings, 5.3 percent of the fatalities, and 3.9 percent of the accidents. Senators Lugar and Coats have proposed legislation to change the formula for allocating section 130 funds by linking the amounts of funding directly to the numbers of railroad crossings, fatalities, and accidents. Currently, section 130 funds are apportioned to each state as a 10-percent set-aside of its Surface Transportation Program funds. The second means of targeting railroad crossing safety resources is to focus the available dollars on the strategies that have proved most effective in preventing accidents. These strategies include closing more crossings, using innovative technologies at dangerous crossings, and emphasizing education and enforcement. Clearly, the most effective way to improve railroad crossing safety is to close more crossings. The Secretary of Transportation has restated FRA’s goal of closing 25 percent of the nation’s railroad crossings, since many are unnecessary or redundant. For example, in 1994, the American Association of State Highway and Transportation Officials found that the nation had two railroad crossings for every mile of track and that in heavily congested areas, the average approached 10 crossings for every mile. 
However, local opposition and localities’ unwillingness to provide a required 10-percent match in funds have made it difficult for the states to close as many crossings as they would like. When closing is not possible, the next alternative is to install traditional lights and gates. However, lights and gates provide only a warning, not positive protection at a crossing. Hence, new technologies such as four-quadrant gates with vehicle detectors, although costing about $1 million per crossing, may be justified when accidents persist at signalled crossings. The Congress has funded research to develop innovative technologies for improving railroad crossing safety. Although installing lights and gates can help to prevent accidents and fatalities, it will not preclude motorists from disregarding warning signals and driving around descended gates. Many states, particularly those with many railroad crossings, face a dilemma. While 35 percent of the railroad crossings in the United States have active warning devices, 50 percent of all crossing fatalities occurred at these locations. To modify drivers’ behavior, DOT and the states are developing education and enforcement strategies. For example, Ohio—a state with an active education and enforcement program—cut the number of accidents at crossings with active warning devices from 377 in 1978 to 93 in 1993—a 75-percent reduction. Ohio has used mock train crashes as educational tools and has aggressively issued tickets to motorists going around descended crossing gates. In addition, DOT has inaugurated a safety campaign entitled “Always Expect a Train,” while Operation Lifesaver, Inc., provides support and referral services for state safety programs. DOT’s educational initiatives are part of a larger plan to improve railroad crossing safety. In June 1994, DOT issued a Grade Crossing Action Plan, and in October 1995, it established a Grade Crossing Safety Task Force. 
The action plan set a national goal of reducing the number of accidents and fatalities by 50 percent from 1994 to 2004. As we noted in our report, whether DOT attains the plan’s goal will depend, in large part, on how well it coordinates the efforts of the states and railroads, whose contributions to implementing many of the proposals are critical. DOT does not have the authority to direct the states to implement many of the plan’s proposals, regardless of how important they are to achieving DOT’s goal. Therefore, DOT must rely on either persuading the states that implementation is in their best interests or providing them with incentives for implementation. In addition, the success of five of the plan’s proposals depends on whether DOT can obtain the required congressional approval to use existing funds in ways that are not allowable under current law. The five proposals would (1) change the method used to apportion section 130 funds to the states, (2) use Surface Transportation Program funds to pay local governments a bonus to close crossings, (3) eliminate the requirement for localities to match a portion of the costs associated with closing crossings, (4) establish a $15 million program to encourage the states to improve rail corridors, and (5) use Surface Transportation Program funds to increase federal funding for Operation Lifesaver. Finally, the action plan’s proposals will cost more money. Secretary Peña has announced a long-term goal of eliminating 2,250 crossings where the National Highway System intersects Principal Rail Lines. Both systems are vital to the nation’s interstate commerce, and closing these crossings is generally not feasible. The alternative is to construct a grade separation—an overpass or underpass. This initiative alone could cost between $4.5 billion and $11.3 billion—a major infrastructure investment. 
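A back-of-the-envelope check on the grade separation estimate above: dividing the quoted cost range by the 2,250 targeted crossings implies a per-structure cost of roughly $2 million to $5 million. The short Python sketch below is illustrative only; the totals come from the text, while the per-crossing breakdown is our own derivation, not a figure stated in the report.

```python
# Derive the implied cost per grade separation from the figures in the text.
crossings = 2_250                       # NHS/Principal Rail Line crossings targeted
low_total, high_total = 4.5e9, 11.3e9   # quoted range for the whole initiative

low_per = low_total / crossings
high_per = high_total / crossings
# Roughly $2.0 million to $5.0 million per overpass or underpass
print(f"implied cost per separation: ${low_per / 1e6:.1f}M to ${high_per / 1e6:.1f}M")
```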
DOT established the Grade Crossing Safety Task Force in the aftermath of the Fox River Grove accident, intending to conduct a comprehensive national review of highway-railroad crossing design and construction measures. On March 1, 1996, the task force reported to the Secretary that “improved highway-rail grade crossing safety depends upon better cooperation, communication, and education among responsible parties if accidents and fatalities are to be reduced significantly.” The report provided 24 proposals for five problem areas it reviewed: (1) highway traffic signals that are supposed to be triggered by oncoming trains; (2) roadways where insufficient space is allotted for vehicles to stop between a road intersection and nearby railroad tracks; (3) junctions where railroad tracks are elevated above the surface of the roadway, exposing vehicles to the risk of getting hung up on the tracks; (4) light rail transit crossings without standards for their design, warning devices, or traffic control measures; and (5) intersections where slowly moving vehicles, such as farm equipment, frequently cross the tracks. Under the Federal Railroad Safety Act of 1970, as amended, FRA is responsible for regulating all aspects of railroad safety. FRA’s safety mission includes (1) establishing federal rail safety rules and standards; (2) inspecting railroads’ track, signals, equipment, and operating practices; and (3) enforcing federal safety rules and standards. The railroads are primarily responsible for inspecting their own equipment and facilities to ensure compliance with federal safety regulations, while FRA monitors the railroads’ actions. We have issued many reports identifying weaknesses in FRA’s railroad safety inspection and enforcement programs. 
For example, in July 1990, we reported on FRA’s progress in meeting the requirements, set forth in the Federal Railroad Safety Authorization Act of 1980, that FRA submit to the Congress a system safety plan to carry out railroad safety laws. The act directed FRA to (1) develop an inspection methodology that considered carriers’ safety records, the location of population centers, and the volume and type of traffic using the track and (2) give priority to inspections of track and equipment used to transport passengers and hazardous materials. The House report accompanying the 1980 act stated that FRA should target safety inspections to high-risk track—track with a high incidence of accidents and injuries, located in populous urban areas, carrying passengers, or transporting hazardous materials. In our 1990 report, we found that the inspection plan that FRA had developed did not include data on passenger and hazardous materials routes—two important risk factors. In an earlier report, issued in April 1989, we noted problems with another risk factor—accidents and injuries. We found that the railroads had substantially underreported and inaccurately reported the number of accidents and injuries and their associated costs. As a result, FRA could not integrate inspection, accident, and injury data in its inspection plan to target high-risk locations. In our 1994 report on FRA’s track safety inspection program, we found that FRA had improved its track inspection program and that its strategy for correcting the weaknesses we had previously identified was sound. However, we pointed out that FRA still faced challenges stemming from these weaknesses. First, it had not obtained and incorporated into its inspection plan site-specific data on two critical risk factors—the volume of passenger and hazardous materials traffic. Second, it had not improved the reliability of another critical risk factor—the rail carriers’ reporting of accidents and injuries nationwide. 
FRA published a notice of proposed rulemaking in August 1994 on methods to improve rail carriers’ reporting. In February 1996, FRA reported that it intended to issue a final rule in June 1996. To overcome these problems, we recommended that FRA focus on improving and gathering reliable data to establish rail safety goals. We specifically recommended that FRA establish a pilot program in one FRA region to gather data on the volume of passenger and hazardous materials traffic and correct the deficiencies in its accident/injury database. We recommended a pilot program in one FRA region, rather than a nationwide program, because FRA had expressed concern that a nationwide program would be too expensive. The House and Senate Appropriations Conference Committee echoed our concerns in its fiscal year 1995 report and directed the agency to report to the Committees by March 1995 on how it intended to implement our recommendations. In its August 1995 response to the Committees, FRA indicated that the pilot program was not necessary, but it was taking actions to correct the deficiencies in the railroad accident/injury database. For example, FRA had allowed the railroads to update the database using magnetic media and audited the reporting procedures of all the large railroads. We also identified in our 1994 report an emerging track safety problem—the industry’s excessive labeling of track as exempt from federal safety standards. Since 1982, federal track safety standards have not applied to about 12,000 miles of track designated by the industry as “excepted”; travel on such track is limited to 10 miles per hour, no passenger service is allowed, and no train may carry more than five cars containing hazardous materials. We found in our 1994 report that the number of accidents on excepted track had increased from 22 in 1988 to 65 in 1992—a 195-percent increase. Similarly, the number of track defects cited in FRA inspections increased from 3,229 in 1988 to 6,057 in 1992. 
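The percentage figures cited in this testimony follow directly from the raw counts. A small, illustrative Python check using only the numbers given in the text (the excepted-track counts above and the Ohio accident counts cited earlier):

```python
def pct_change(old: int, new: int) -> float:
    """Percent change from old to new; a negative result means a reduction."""
    return (new - old) / old * 100

# Accidents on excepted track, 1988 vs. 1992 (cited as a 195-percent increase)
print(round(pct_change(22, 65)))       # 195

# Track defects cited in FRA inspections over the same period (no percent given)
print(round(pct_change(3229, 6057)))   # 88

# Ohio: accidents at crossings with active warning devices, 1978 vs. 1993
print(round(pct_change(377, 93)))      # -75, the cited 75-percent reduction
```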
However, with few exceptions, FRA cannot compel railroads to correct these defects. According to FRA, the railroads have applied the excepted track provision far more extensively than envisioned. For example, railroads have transported hazardous materials through residential areas on excepted track or intentionally designated track as excepted to avoid having to comply with minimum safety regulations. In November 1992, FRA announced a review of the excepted track provision with the intent of making changes. FRA viewed the regulations as inadequate because its inspectors could not write violations for excepted track and railroads were not required to correct defects on excepted track. FRA stated that changes to the excepted track provision would occur as part of its rulemaking revising all track safety standards. In February 1996, FRA reported that the task of revising track safety regulations would be taken up by FRA’s Railroad Safety Advisory Committee. FRA noted that this committee would begin its work in April 1996 but did not specify a date for completing the final rulemaking. The Congress had originally directed FRA to complete its rulemaking revising track safety standards by September 1994. In September 1993, we issued a report examining whether Amtrak had effective procedures for inspecting, repairing, and maintaining its passenger cars to ensure their safe operation and whether FRA had provided adequate oversight to ensure the safety of passenger cars. We found that Amtrak had not consistently implemented its inspection and preventive maintenance programs and did not have clear criteria for determining when a passenger car should be removed from service for safety reasons. In addition, we found that Amtrak had disregarded some standards when parts were not available or there was insufficient time for repairs. For example, we observed that cars were routinely released for service without emergency equipment, such as fire extinguishers. 
As we recommended, Amtrak established a safety standard that identified a minimum threshold below which a passenger car may not be operated, and it implemented procedures to ensure that a car will not be operated unless it meets this safety standard. In reviewing FRA’s oversight of passenger car safety (for both Amtrak and commuter rail), we found that FRA had established few applicable regulations. As a result, its inspectors provided little oversight in this important safety area. For more than 20 years, the National Transportation Safety Board has repeatedly recommended that FRA expand its regulations for passenger cars, but FRA has not done so. As far back as 1984, FRA told the Congress that it planned to study the need for standards governing the condition of safety-critical passenger car components. Between 1990 and 1994, train accidents on passenger rail lines ranged from 127 to 179 each year (see app. 2). In our 1993 report, we maintained that FRA’s approach to overseeing passenger car safety was not adequate to ensure the safety of the over 330 million passengers who ride commuter railroads annually. We recommended that the Secretary of Transportation direct the FRA Administrator to study the need for establishing minimum criteria for the condition of safety-critical components on passenger cars. We noted that the Secretary should direct the FRA Administrator to establish any regulations for passenger car components that the study shows to be advisable, taking into account any internal safety standards developed by Amtrak or others that pertain to passenger car components. However, FRA officials told us at the time that the agency could not initiate the study because of limited resources. Subsequently, the Swift Rail Development Act of 1994 required FRA to issue initial passenger safety standards within 3 years of the act’s enactment and complete standards within 5 years. 
In 1995, FRA referred the issue to its Passenger Equipment Safety Working Group, consisting of representatives from passenger railroads, operating employee organizations, mechanical employee organizations, and rail passengers. The working group held its first meeting in June 1995. An advance notice of proposed rulemaking is expected in early 1996, and final regulations are to be issued in November 1999. Given the recent rail accidents, FRA could consider developing standards for such safety-critical components as emergency windows and doors and safety belts as well as the overall crashworthiness of passenger cars. In conclusion, safety at highway-railroad crossings, the adequacy of track safety inspections and enforcement, and the safety of passenger cars operated by commuter railroads and Amtrak will remain important issues for Congress, FRA, the states, and the industry to address as the nation continues its efforts to prevent rail-related accidents and fatalities. Note 1: Analysis includes data from Amtrak, Long Island Rail Road, Metra (Chicago), Metro-North (New York), Metrolink (Los Angeles), New Jersey Transit, Northern Indiana, Port Authority Trans-Hudson (New York), Southeastern Pennsylvania Transportation Authority, and Tri-Rail (Florida). Note 2: Data for Amtrak include statistics from several commuter railroads, including Caltrain (California), Conn DOT, Maryland Area Rail Commuter (excluding those operated by CSX), Massachusetts Bay Transportation Authority, and Virginia Railway Express. Railroad Safety: FRA Needs to Correct Deficiencies in Reporting Injuries and Accidents (GAO/RCED-89-109, Apr. 5, 1989). Railroad Safety: DOT Should Better Manage Its Hazardous Materials Inspection Program (GAO/RCED-90-43, Nov. 17, 1989). Railroad Safety: More FRA Oversight Needed to Ensure Rail Safety in Region 2 (GAO/RCED-90-140, Apr. 27, 1990). Railroad Safety: New Approach Needed for Effective FRA Safety Inspection Program (GAO/RCED-90-194, July 31, 1990). 
Financial Management: Internal Control Weaknesses in FRA’s Civil Penalty Program (GAO/RCED-91-47, Dec. 26, 1990). Railroad Safety: Weaknesses Exist in FRA’s Enforcement Program (GAO/RCED-91-72, Mar. 22, 1991). Railroad Safety: Weaknesses in FRA’s Safety Program (GAO/T-RCED-91-32, Apr. 11, 1991). Hazardous Materials: Chemical Spill in the Sacramento River (GAO/T-RCED-91-87, July 31, 1991). Railroad Competitiveness: Federal Laws and Policies Affect Railroad Competitiveness (GAO/RCED-92-16, Nov. 5, 1991). Railroad Safety: Accident Trends and FRA Safety Programs (GAO/T-RCED-92-23, Jan. 13, 1992). Railroad Safety: Engineer Work Shift Length and Schedule Variability (GAO/RCED-92-133, Apr. 20, 1992). Amtrak Training: Improvements Needed for Employees Who Inspect and Maintain Rail Equipment (GAO/RCED-93-68, Dec. 8, 1992). Amtrak Safety: Amtrak Should Implement Minimum Safety Standards for Passenger Cars (GAO/RCED-93-196, Sep. 22, 1993). Railroad Safety: Continued Emphasis Needed for an Effective Track Safety Inspection Program (GAO/RCED-94-56, Apr. 22, 1994). Amtrak’s Northeast Corridor: Information on the Status and Cost of Needed Improvements (GAO/RCED-95-151BR, Apr. 13, 1995). Railroad Safety: Status of Efforts to Improve Railroad Crossing Safety (GAO/RCED-95-191, Aug. 3, 1995). The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office, P.O. Box 6015, Gaithersburg, MD 20884-6015. Orders may also be placed in person at Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC; by calling (202) 512-6000; or by using fax number (301) 258-4066 or TDD (301) 413-0006. 
Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
The Anti-Drug Abuse Act of 1988 (P.L. 100-690) requires ONDCP to develop a national drug control strategy, in consultation with agency and department heads and others involved in drug control matters. With the President’s approval, this strategy is submitted annually to the Congress. In addition to long- and short-term objectives, it contains information on past and estimated future federal funding in support of efforts to reduce drug supply and demand. Each year since 1983, state alcohol and drug agencies have voluntarily submitted data detailing the fiscal, client, and other aspects of their substance abuse programs to NASADAD. The state, county, and local governments’ expenditure data, including federal support, are analyzed and published by NASADAD under a contract with SAMHSA of HHS. Also, SAMHSA collects data on private funding for substance abuse treatment services through its survey of drug and alcohol treatment units. The primary source of our information on contributions from private and community foundations has been the Foundation Center. The Foundation Center, established in 1956, is an independent, nonprofit service organization. Its mission is to foster public understanding of institutional philanthropy by collecting, organizing, analyzing, and disseminating information on foundations, corporate giving, and other topics. The federal government provides a large portion of the financial support for substance abuse treatment and prevention activities. For fiscal year 1994, federal budget authority for treatment and prevention activities was $4.4 billion—a 59-percent increase over the 1990 amount. When adjusted for inflation, this equates to a 41.3-percent increase from 1990 through 1994. Three departments—HHS, VA, and Education—accounted for the vast majority of the 1994 budget authority. 
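The relationship between the nominal and inflation-adjusted growth figures above can be reproduced with a simple deflator calculation. The sketch below is illustrative only: the 12.5-percent cumulative inflation factor for fiscal years 1990 through 1994 is our own assumption, back-derived from the report's numbers, not a figure stated in the text.

```python
# Convert nominal growth to real (constant-dollar) growth using a deflator.
# ASSUMPTION: cumulative_inflation is a hypothetical value implied by the
# report's own figures, not an official CPI number.
cumulative_inflation = 0.125  # assumed price growth over FY1990-FY1994

def real_growth(nominal: float, inflation: float) -> float:
    """Real growth rate given nominal growth and cumulative inflation."""
    return (1 + nominal) / (1 + inflation) - 1

# Nominal 59-percent federal increase -> about the 41.3-percent real increase cited
print(f"{real_growth(0.59, cumulative_inflation) * 100:.1f}%")  # 41.3%
```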
The substance abuse programs that federal agencies fund provide a variety of services; however, treatment programs received a much larger proportion of funding than prevention programs in 1994. It should be noted that the data we obtained may not accurately represent total federal support for treatment and prevention because some programs may have been omitted and much of the data has not been independently validated. Federal funding for substance abuse treatment and prevention activities increased by $1.6 billion from fiscal year 1990 through 1994. During this time period, the number of federal agencies that reported funding for treatment and prevention programs rose from 12 to 16. Federal budget authority for fiscal year 1990 was $2.8 billion, but by fiscal year 1994 the funding amount had reached $4.4 billion. (See app. II for federal funding by agency for fiscal years 1990 through 1994.) The most recent data released by ONDCP show that fiscal year 1995 budget authority for treatment and prevention activities increased about $250 million over the 1994 amount. (See app. III.) Comparing the funding in fiscal years 1990 and 1994, changes in the budget authority for substance abuse treatment and prevention activities varied widely among federal agencies. The largest dollar increase occurred in HHS’ budget, where budget authority increased by about $800 million, from $1.4 billion to $2.2 billion. This change accounted for about one-half of the total increase in federal funding over the 5-year period. Some of the growth can be attributed to the creation of substance abuse block grants and increased reimbursement for treatment services through Medicare and Medicaid. The changes in funding for substance abuse treatment and prevention services among all 16 agencies ranged from a 196-percent increase to about a 27-percent decrease. The Department of Housing and Urban Development (HUD) had the highest percentage increase in funding. 
Its budget authority went from $106.5 million to $315 million—the bulk of which appeared to be for increases in drug elimination grants that fund drug prevention and control at public and Native American housing developments. The Department of Justice had the highest percentage decrease. Its budget authority declined from $133 million to about $98 million. Although some offices within Justice experienced increases in their budget authority, the Office of Justice Programs’ $50 million decrease resulted in Justice’s overall decline in funding for substance abuse treatment and prevention activities. Of the 16 agencies, 3 departments accounted for most of the federal funds that were available for substance abuse treatment and prevention activities in fiscal year 1994. The combined budget authority of HHS, Education, and VA was about $3.68 billion, or 83 percent of the total federal funding for substance abuse treatment and prevention activities for that year. HHS alone, which has the largest number of agencies with substance abuse treatment and prevention programs, accounted for about half of the fiscal year 1994 budget authority. SAMHSA, within HHS, provided more federal funding for substance abuse treatment and prevention activities than any other agency. SAMHSA’s fiscal year 1994 budget authority was about $1.4 billion. The National Institutes of Health (NIH), also within HHS, provided the next highest level of funding. Its fiscal year 1994 budget authority was $425.2 million. Figure 1 shows fiscal year 1994 budget authority for substance abuse treatment and prevention activities by agency. Substance abuse treatment services received a larger proportion of federal budget authority than prevention services in fiscal year 1994. Treatment services accounted for $2.6 billion, or about 60 percent of the total federal funding available. (See fig. 2.) 
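The agency-level swings quoted above can be checked against the dollar figures given. A brief illustrative verification (amounts in millions, taken from the text; Justice's fiscal year 1994 figure is stated only as "about $98 million," so its result is necessarily approximate):

```python
# Percent changes implied by the budget-authority figures in the text.
hud_1990, hud_1994 = 106.5, 315.0  # HUD, $ millions
doj_1990, doj_1994 = 133.0, 98.0   # Justice; 1994 amount is approximate

hud_change = (hud_1994 - hud_1990) / hud_1990 * 100  # about +196%
doj_change = (doj_1994 - doj_1990) / doj_1990 * 100  # about -26%, cited as "about 27 percent"
print(f"HUD: {hud_change:+.0f}%  Justice: {doj_change:+.0f}%")
```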
Federal agencies’ programs provide an array of substance abuse treatment services and prevention activities to a variety of targeted population groups. Treatment comprises an assortment of formal organized services for people who have abused alcohol, other drugs, or both. Treatment services can include diagnostic assessment; detoxification; and medical, psychiatric, and psychological counseling. Prevention activities focus on individuals who may be at risk for alcohol or other drug problems. These activities include providing information and education that increase knowledge of drug abuse and alternative drug-free life styles, encouraging communities to implement responses to drug use, and drug testing. One federal program that provides both treatment and prevention services is Head Start, which offers prevention activities for young children and supporting community-based activities for parents and other family members. Another example is the Pregnant and Postpartum Women and Infants program. In part, it funds demonstration programs that coordinate and link health promotion and treatment services for substance-using pregnant women and their young children. The program also supports treatment services in residential settings that permit infants and children to live with their substance-using mothers. Other programs also provide services for specific populations, such as high-risk youth; elementary, secondary, and postsecondary students; and veterans. Some agencies fund programs whose primary objective is to provide substance abuse treatment and prevention activities. Other agencies’ programs include these activities as one component of a nonsubstance abuse program. For example, the main objective of the Department of Agriculture’s Special Supplemental Program for Women, Infants, and Children (WIC) program is to provide nutritious food and nutrition education to women and children who are considered to be at nutritional risk. 
As part of nutrition education, WIC counsels participants about the dangers of substance abuse. Program participants are also referred to substance abuse counseling, when appropriate. Appendix IV contains federal agencies’ funding levels for substance abuse treatment and prevention and brief descriptions of the federal programs that provided support for these services for fiscal year 1994. ONDCP was the most comprehensive single source for information on federal substance abuse treatment and prevention funding and programs. However, ONDCP’s budget summary data are limited in their coverage of substance abuse programs and are not routinely subjected to large-scale verification. We observed that ONDCP does not always include alcohol treatment and prevention programs in its budget summaries. For example, no information on NIH’s National Institute on Alcohol Abuse and Alcoholism is included in NIH’s budget authority. Moreover, when we compared ONDCP’s data with federal agencies’ justifications of budget estimates prepared for congressional appropriations committees, the combined funding for three agencies differed by about $655 million in fiscal year 1994. According to ONDCP officials, the differences are due to the inclusion of alcohol-only programs in the agencies’ justification of estimates. ONDCP does not include alcohol-only programs in its budget summary because these programs are not “scored”—that is, categorized—as drug programs. Additionally, VA’s 1996 congressional budget justification did not include VA’s full complement of treatment programs. Data limitations also stem from the use of different methods of estimating the amount of program funding specifically used for substance abuse treatment and prevention and from different determinations of what constitutes a prevention or treatment program. 
The combined contributions of state, county, and local governments constitute a sizable portion of the financial support for substance abuse treatment and prevention activities. In fiscal year 1994, these entities spent about $1.6 billion—most of which was used for treatment services. This fiscal year 1994 spending exceeded fiscal year 1990 expenditures by about 22 percent (about 8 percent when adjusted for inflation). Users of these data should note that total spending by state and local governments probably exceeds these reported expenditures. In fiscal years 1990 through 1994, state, county, and local governments’ total expenditures increased overall for substance abuse treatment and prevention activities. Combined expenditures rose from $1.3 billion to about $1.6 billion—about a $300 million increase. (App. V shows state, county, and local governments’ annual expenditures and the percentage change from fiscal year 1990 through 1994.) On a percentage basis, there was more fluctuation in local governments’ spending than in state spending over the 5-year period. Also during this period, combined spending for substance abuse treatment consistently exceeded that for prevention. Although total treatment and prevention expenditures increased over the 5 years, spending for prevention actually decreased by about 1 percent while spending for treatment increased by 26 percent (when adjusted for inflation, these changes equate to a 12-percent decrease and an 11-percent increase, respectively) (see apps. VI and VII). In fiscal year 1994, treatment services accounted for more than 88 percent of total spending by the entities combined (see fig. 3). The expenditure data voluntarily submitted to NASADAD by state and local governments have a number of inherent limitations. One major limitation is that NASADAD asked states to submit expenditure data only for service providers that received at least some portion of their funding from the state alcohol and drug agency during the state’s fiscal year. 
The data therefore do not include information on providers that did not receive any funding from the state alcohol and drug agency, such as private for-profit agencies. As a result, the overall expenditure data submitted to NASADAD are conservative and probably underestimate total funding expenditures by state governments. Furthermore, state-reported expenditures are not verified by NASADAD; instead, NASADAD asks that states confirm that their data are correct. For some states, complete information is not available on all sources of funding, even for service providers supported by state alcohol and drug agencies. In most of these instances, the amount of unavailable information is probably small. In addition, there are concerns about how consistently providers of treatment and prevention activities classify those activities given the varying interpretations of what constitutes “treatment” and “prevention.” The data are also limited by the variations in state fiscal years, raising questions about the appropriateness of comparing expenditures across states. Comprehensive data on private funding of substance abuse treatment and prevention activities over time are sparse. The National Drug and Alcoholism Treatment Unit Survey (NDATUS), which compiled private contributions from various sources, focused on treatment only. NDATUS data show that private funding for substance abuse treatment services amounted to a little over $1 billion in 1993 (the latest year for which data were available). The largest source of private funding was third-party payments by health insurers and health maintenance organizations (about 55 percent of total private funding). Private donations, which included contributions from foundations, accounted for about 7 percent. (See table 1.) 
Data on private donations from foundations show that the top 25 contributors awarded $39.4 million in grants for substance abuse treatment and prevention programs during 1993 and 1994 (the latest years for which grant data were available). The grant amounts ranged from $306,342 to about $18.5 million (see app. VIII). These grants were provided to nonprofit organizations in the United States and abroad to cover substance abuse treatment and prevention programs, including counseling, education, residential care facilities, halfway houses, support groups, family services, community programs, and services for children of drug-dependent parents. Grants were also awarded for medical research on substance abuse and media projects on substance abuse prevention. Population groups receiving the largest grant amounts were alcohol or drug abusers, children and youths, women and girls, economically disadvantaged individuals, offenders or ex-offenders, and minorities. The private funding data we used had two significant limitations. First, the latest available NDATUS data on private funding sources were for substance abuse treatment only, and these data were for only 1 year—1993. Second, the response rates of treatment providers to the NDATUS survey were low. The response rates were 21.1 percent for third-party payments, 44.9 percent for client fees, and 15.4 percent for private donations. Federal, state, county, and local governments and the private sector all provide funding for substance abuse treatment and prevention activities. The latest and best data available show that (1) the federal government has been a major contributor of funds, providing more than $4 billion in fiscal year 1994; (2) state and local governments spent a little more than $1.5 billion in their 1994 fiscal years; and (3) private funding exceeded $1 billion in 1993. 
According to the data we collected, the federal government increased its support for treatment and prevention activities from fiscal year 1990 through the end of fiscal year 1994 by about 60 percent. Over the same 5-year period, state, county, and local governments’ combined funding for treatment and prevention activities increased by about 22 percent. In commenting on a draft of this report, ONDCP concurred with our findings (see app. IX). NASADAD also commented on a draft of this report and agreed with the manner in which we dealt with data it provided on state, county, and local government expenditures. However, NASADAD commented that the changes in state expenditure levels we reported for the 1990 through 1994 time frame were influenced by the time period we chose to review. NASADAD noted that the fiscal year period 1985 through 1989 showed much higher increases in state expenditures. (See app. X.) We are sending copies of this report to the Secretary of Health and Human Services; the Director of the Office of National Drug Control Policy; the Director of the Office of Management and Budget; the Executive Director of the National Association of State Alcohol and Drug Abuse Directors, Inc.; appropriate congressional committees; and other interested parties. We will also make copies available to others on request. If you or your staff have any questions about this report, please call me at (202) 512-7119. Other major contributors to this report include James O. McClyde, Assistant Director; Jared Hermalin; Roy Hogberg; and Brenda James Towe. To determine the level of federal funding and what federal programs exist for substance abuse treatment and prevention activities, we used three data sources: (1) the Office of National Drug Control Policy’s (ONDCP) budget summaries from its National Drug Control Strategies, (2) federal agencies’ justifications of estimates for appropriations committees, and (3) the 1995 Catalog of Federal Domestic Assistance. 
We also interviewed ONDCP officials. Using the ONDCP budget summaries as our primary data source, we identified federal agencies that fund substance abuse treatment and prevention services and obtained funding data and program descriptions starting in fiscal year 1990. The latest ONDCP budget summary available at the time of our analysis contained actual budget authority for fiscal year 1994, budget estimates for fiscal year 1995, and budget requests for fiscal year 1996. We obtained additional funding data from the National Institute on Alcohol Abuse and Alcoholism (NIAAA) because information on its programs was either not included or not specifically identified in ONDCP’s budget summaries. The agencies’ justifications of estimates were of minimal use because, in most cases, they did not identify substance abuse treatment and prevention funding or provide a description of the programs. Where possible, we compared the justifications with ONDCP’s budget summaries. We reviewed the 1995 Catalog of Federal Domestic Assistance but made only minimal use of its data to fill gaps in program descriptions.

Information on state, county, and local governments’ spending specifically for substance abuse treatment and prevention activities was generated by the National Association of State Alcohol and Drug Abuse Directors (NASADAD) from its computerized database. These data covered state fiscal years from 1990 through 1994 and are based on state-reported expenditures that are not verified by NASADAD. Instead, NASADAD requests that states confirm that their annually reported data are correct. We interviewed NASADAD officials and obtained their views on the state-reported data.

To obtain information on private sector funding, we contacted the Department of Health and Human Services’ (HHS) Substance Abuse and Mental Health Services Administration (SAMHSA) and numerous other organizations.
These groups included the National Association of Addiction Treatment Providers, the Health Insurance Association of America, the American Hospital Association, the National Association of Public Hospitals, and the Center for Addiction and Substance Abuse at Columbia University. However, only SAMHSA provided private funding data for multiple sources. These data were collected in the National Drug and Alcoholism Treatment Unit Survey (NDATUS) of treatment providers and covered funding for treatment services in 1993—the latest year for which data were available. Published data on the private foundations that provided the most funding in grants to nonprofit organizations during 1993 and 1994 were obtained from the Foundation Center. We did not identify any data sources with comprehensive information on private funding of substance abuse prevention activities. We did not verify the federal, state, and private data. The funding data we provide in this report have generally not been adjusted for inflation. In some instances, we did adjust for inflation when presenting changes in funding over time. We conducted our work from February through August 1996 in accordance with generally accepted government auditing standards.

[Table: federal budget authority by agency, listing the Administration for Children and Families; Centers for Disease Control and Prevention; Health Resources and Services Administration; Alcohol, Drug Abuse, and Mental Health Administration; Special Supplemental Program for Women, Infants, and Children (WIC); Bureau of International Narcotics and Law Enforcement; National Highway Traffic Safety Administration; and Office of Territorial and International Affairs. Table notes: percentage changes are not presented for agencies’ subunits; entries are marked not applicable where there was no funding in 1 or more years.]
[Table: federal budget authority by agency (continued), listing the Administration for Children and Families; Centers for Disease Control and Prevention; Health Resources and Services Administration; and Special Supplemental Program for Women, Infants, and Children (WIC).]

This appendix provides information on the substance abuse treatment and prevention activities of various federal agencies. Included are funding information and program and activity descriptions. Not included are funding and program descriptions for agencies that devoted less than $1 million to treatment and prevention activities in fiscal year 1994. These agencies accounted for $1.8 million or 0.04 percent of total federal budget authority for that year. In some cases table totals do not add because of rounding. Through the Veterans Health Administration, VA operates a network of substance abuse treatment programs in its medical centers, domiciliaries, and outpatient clinics.

[Table: Department of Education offices and programs, listing the Office of Elementary and Secondary Education (Drug-Free Schools and Communities Act; National Programs, including regional centers; Safe and Drug-Free Schools and Communities Act; Family and Community Endeavor Schools (FACES)); the Office of Special Education and Rehabilitative Services, Rehabilitative Services Administration; the Office of Special Education Programs (Grants for Infants and Families; Special Education Special Purpose Funds); and the National Institute on Disability and Rehabilitation Research (Rehabilitation Research Training Centers and other programs).]

The Drug-Free Schools and Communities Act expired at the end of fiscal year 1994; its authorization was extended under the Safe and Drug-Free Schools and Communities Act. The Safe and Drug-Free Schools and Communities Act extends the authorization for the Drug-Free Schools and Communities Act (which expired on Sept. 30, 1994) and broadens it to include activities to prevent violence as well as drug and alcohol use by youth.
In 1994, the funds were used exclusively for alcohol, tobacco, and other drug-related prevention activities. In 1994, 90 percent of these funds were used to support grants to local educational agencies (LEA) with serious school crime, violence, and discipline problems. The projects are designed to combat those problems and thereby enhance school safety and promote better access to learning. The remaining funds were divided equally between national leadership activities and support for a national model city program in the District of Columbia, as authorized by the legislation. (Funding for this program is included in the national drug control budget because activities supported with these funds will have an impact on drug prevention as well as on violence prevention.) Family and Community Endeavor Schools (FACES), a subset of the Crime Control Act, supports grants to LEAs and community-based organizations in high-poverty and high-crime areas for programs of integrated services to improve the academic and social development of at-risk students. (Funding for this program is included in the national drug control budget because activities supported with these funds will have an impact on drug prevention as well as on violence prevention.) This state grant program supports a wide range of services for individuals with disabilities, including those whose disabling condition is due to drug abuse, to prepare for and engage in gainful employment. Funds are allocated to states and territories on the basis of their population and per capita income. People with disabilities that result in a substantial impediment are eligible for assistance. Funds also support special demonstration programs that develop innovative methods and comprehensive service programs to help people with disabilities achieve satisfactory vocational outcomes. 
Special demonstration programs develop innovative methods and comprehensive service programs to help people with disabilities achieve satisfactory vocational outcomes. The program awards discretionary grants to states, agencies, and organizations to pay all or part of the costs of demonstrations, direct services, and related activities. This state grant program supports development and implementation of statewide systems of early intervention for children up to 2 years old with disabilities. No specific information related to drug abuse intervention was provided in the ONDCP 1995 budget summary. These funds support grants, contracts, and cooperative agreements with public agencies; private nonprofit organizations; and, in some cases, for-profit organizations. Activities include research, demonstrations, outreach, training, and technical assistance. No specific information related to drug abuse intervention was provided in the ONDCP 1995 budget summary. Through various discretionary programs, the Institute supports research, demonstrations, and dissemination activities on issues relating to people of all ages with disabilities. No specific information related to drug abuse was provided in the ONDCP 1995 budget summary. Program administration maintains Department of Education staff to administer programs with substance abuse treatment and prevention components.

[Table: Department of Housing and Urban Development programs, listing Drug Elimination Grants/Community Partnership Against Crime; Empowerment Zones and Enterprise Communities; and Crime Control Act (Local Partnership Act).]

Through this program, HUD provides grants to public housing authorities and Indian housing agencies to fight drug problems in their communities. Drug problems are addressed through a comprehensive approach involving enforcement, prevention, and treatment.
The grants focus on many areas, including community policing, youth training, recreation, career planning, employment, substance abuse education and prevention; resident services, such as drug treatment or other appropriate social services that address the contributing factors of crime; and clearinghouse services, assessment and evaluation, and technical assistance and training. Funds support programs to empower people and communities to work together to create jobs and opportunity. HUD applies four principles in making the Empowerment Zone and Enterprise Community designations: (1) economic opportunity, (2) sustainable community development, (3) community-based partnerships, and (4) strategic vision for change. No details were provided for this program in the ONDCP 1995 budget summary. The Bureau has a comprehensive drug abuse treatment strategy with four components: drug abuse education, nonresidential drug abuse counseling services, residential drug abuse program, and community-transitional services programming. An estimated 30.5 percent of the sentenced inmate population is drug dependent and requires some type of drug abuse treatment program. Neither the ONDCP budget summary nor the agency’s justification of estimates identified the prevention components. Through formula grant funds, the Bureau provides financial and technical assistance to state and local governments to control drug abuse and violent crime and improve the criminal justice system. States are required to prepare statewide antidrug and violent crime strategies. The Bureau also supports national and multistate programs such as the National Crime Prevention Campaign (McGruff the Crime Dog). 
The Bureau produces and disseminates drug-related data, including data on drug-use history of criminal offenders; offenders under the influence of alcohol or drugs; drug prosecution and sentencing of drug law violators; case processing of drug offenses; drug availability, prevention, and education classes in schools; drug and alcohol rehabilitation programs in the correctional community; and the relationship of drugs and crime. The Bureau also supports the Drugs and Crime Data Center and Clearinghouse, which provides a centralized source of information on drugs and crime. The Institute is the primary federal sponsor of research on crime and its control and is a central resource for information on innovative approaches in criminal justice. As mandated by the Anti-Drug Abuse Act of 1988, the Institute sponsors and conducts research, evaluates policies and practices, demonstrates promising new approaches, provides training and technical assistance, assesses new technology for criminal justice, and disseminates its findings to state and local practitioners and policymakers. This agency has primary responsibility for addressing the needs of the juvenile justice system. Its goal is to aid in the prevention, reduction, and treatment of juvenile crime and delinquency and to improve the administration of juvenile justice by providing financial and technical support to state and local governments, public and private agencies, organizations, and institutions. The program serves as a vehicle for the administration’s strategy to fight violent crime by increasing the number of state and local police officers; promoting the use of community policing techniques; and implementing police hiring, education, and training programs.
The program primarily awards grants to state and local law enforcement agencies, state and local governments, and community groups to achieve its goals. Employment and Training Administration (Job Training Program) The Department of Labor’s Employment and Training Administration administers job training programs, not substance abuse programs. The Administration believes that the positive results of its programs, in terms of enabling participants to acquire new skills and enhance employment ability, contribute to reducing the risk factors associated with substance abuse. The Job Training Partnership Act (JTPA), 29 U.S.C. §1501 et seq., requires individual assessments for each program participant; specifically encourages outreach activities for individuals who face severe barriers to employment, such as drug and alcohol abuse; and sets as program goals coordination of JTPA programs with other community service organizations, such as drug and alcohol abuse prevention and treatment programs. JTPA also authorizes the Job Corps Alcohol and Other Drug Abuse component to screen trainees for drug and alcohol problems and provide prevention and intervention services. This program provides information on workplace substance abuse through continued development and operation of the Substance Abuse Information Database; data collection on the impact of substance abuse on productivity, safety, and health; support for the Substance Abuse Institute at the George Meany Center for Labor Studies; funding of the workplace model in the fiscal year 1996 Household Survey; and continued work with employer and employee groups to raise awareness of the problems of workplace substance abuse and what can be done to most effectively address those problems. The Department of Defense’s counterdrug strategy has among its objectives to reduce the demand for illegal drugs within the Department and its surrounding communities. 
The demand reduction program supports a counterdrug strategy of early drug abuse identification through testing and treatment of drug abusers and outreach programs for at-risk youth through the military departments and the National Guard Bureau. For community outreach pilot programs, congressional authorization is required to permit counterdrug funds to be spent on programs targeting youth outside the traditional Department community boundaries. The U.S. Courts operate the Substance Abuse Treatment Program. Offenders in this program are referred by the Judiciary and the Bureau of Prisons. The basic goal of the program is to identify and treat substance abusers who are under the supervision of the U.S. Probation Office. The program tries to protect the community by helping these offenders stop their substance abuse. The Corporation for National Service administers programs that address the nation’s education, human service, public safety, and environmental needs through the activities of volunteers and that expand the involvement of volunteers in responding to a wide range of community needs, including drug abuse prevention, by reaching high-risk youth and the communities in which they live.

[Table: Social Security Administration programs, listing Referral and monitoring (Title XVI); Demonstration projects (Title XVI); and Disability Insurance Trust Fund (Title II).]

The Social Security Administration has placed restrictions on Disability Insurance and Supplemental Security Income benefits payments to individuals disabled by drug addiction or alcoholism and has established barriers to prevent a beneficiary from using benefits to support an addiction. In some cases, the Administration imposes treatment requirements on Disability Insurance beneficiaries and establishes referral and monitoring agreements in all states.
Special Supplemental Program for Women, Infants, and Children (WIC) WIC provides nutritious supplemental foods to low-income pregnant, postpartum, and breastfeeding women and to infants and children younger than age 5 who are determined by professionals such as physicians, nurses, and nutritionists to be at nutritional risk. Funds flow through participating state agencies to local agencies, which provide supplemental foods to WIC participants along with nutrition education, breastfeeding promotion, and health care referrals. As part of nutrition education, WIC counsels participants about the dangers of substance abuse, including smoking during pregnancy. When appropriate, participants are referred to drug abuse counseling. The ONDCP budget summary and the budget justification do not identify specific prevention program or activity dollars. The Bureau develops, implements, and monitors U.S. international counternarcotics strategies and programs. The Bureau’s functions also include foreign policy formation and coordination, program management, and diplomatic initiatives. Neither ONDCP nor the agency’s justification of estimates identifies specific prevention components by budget expenditure. However, prevention descriptions are identified within the following FAA program listings. The Federal Aviation Administration (FAA) provides regulatory oversight of the drug and alcohol misuse prevention programs administered by approximately 5,000 aviation industry entities and individual commercial operators. FAA also conducts random drug testing of employees who are designated to be in critical safety positions; reregisters aircraft and conducts periodic renewal of pilot certificates; provides investigative support to all federal, state, and local law enforcement agencies involved in drug enforcement actions; and develops and correlates flight plans and transponder codes to enhance communications between air route traffic control centers and U.S. 
Customs/Coast Guard facilities. This process assists in identifying airborne drug smugglers by using radar, posting aircraft lookouts, and tracking the movement of suspect aircraft. This funding category supports the postmortem analysis of tissues and fluids from people involved in transportation accidents and incidents and assesses the effects of drugs on the performance of pilot and controller tasks. The office coordinates substance abuse services among rehabilitation centers, emergency shelters, juvenile detention facilities, and community-based prevention and intervention programs. Each Bureau school has a substance abuse prevention program. The schools are allowed flexibility to design the most effective curriculum and counseling services to meet the needs of students. The U.S. Secret Service considers a portion of its costs for full-time-equivalent employees’ pay, benefits, and support to be attributable to drug enforcement activities. These activities include criminal investigations, task force involvement, employee and applicant drug testing, and protection involved in other drug-related activities.

[Tables: state, county, and local government expenditures by state (continued). Data were not available for some entries, and percentage changes could not be computed where data were not available.]

[Table: top private foundation contributors, including The Robert Wood Johnson Foundation; Meadows Foundation, Inc.; Carnegie Corporation of New York; The Aaron Diamond Foundation, Inc.; Hartford Foundation for Public Giving; The Annie E. Casey Foundation; and Lettie Pate Evans Foundation, Inc.]
Pursuant to a congressional request, GAO provided information on the financial support provided for substance abuse treatment and prevention activities by federal, state, and local governments and the private sector. GAO found that: (1) federal funding for substance abuse treatment and prevention activities increased from $2.8 billion in fiscal year (FY) 1990 to $4.4 billion in FY 1994; (2) the Departments of Health and Human Services, Education, and Veterans Affairs provided 83 percent of total federal funding for treatment and prevention activities for FY 1994; (3) numerous programs in 16 federal agencies covered a broad range of treatment and prevention services and often targeted specific populations; (4) treatment services included diagnostic assessment, detoxification, and counseling, while prevention activities usually included providing information and education about alternatives to and consequences of alcohol abuse and illicit drug use; (5) state, county, and local governments' total expenditures for treatment and prevention activities increased from about $1.3 billion in FY 1990 to about $1.6 billion in FY 1994; and (6) although data on private-sector funding for substance abuse treatment are very limited, available sources indicate funding of more than $1 billion in 1993.
Alaska encompasses an area of about 365 million acres, more than the combined area of the next three largest states—Texas, California, and Montana. The state is bounded on three sides by water, and its coastline, which stretches about 6,600 miles (excluding island shorelines, bays, and fjords) and accounts for more than half of the entire U.S. coastline, varies from rocky shores, sandy beaches, and high cliffs to river deltas, mud flats, and barrier islands. The coastline constantly changes due to wave action, ocean currents, storms, and river deposits and is subject to periodic, yet severe, erosion.

Alaska also has more than 12,000 rivers, including three of the ten largest in the country—the Yukon, Kuskokwim, and Copper Rivers. (See fig. 1.) While these and other rivers provide food, transportation, and recreation for people, as well as habitat for fish and wildlife, their waters also shape the landscape. In particular, ice jams on rivers and flooding of riverbanks during spring breakup change the contour of valleys, wetlands, and human settlements.

Permafrost (permanently frozen subsoil) is found over approximately 80 percent of Alaska. It is deepest and most extensive on the Arctic Coastal Plain and decreases in depth, eventually becoming discontinuous further south. In northern Alaska, where the permafrost is virtually everywhere, most buildings are elevated to minimize the amount of heat transferred to the ground to avoid melting the permafrost. In northern barrier island communities, the permafrost literally helps hold the island together. However, rising temperatures in recent years have led to widespread thawing of the permafrost, causing serious damage. As permafrost melts, buildings and runways sink, bulk fuel tank areas are threatened, and slumping and erosion of land ensue. (See fig. 2.)

Rising temperatures have also affected the thickness, extent, and duration of sea ice that forms along the western and northern coasts.
The loss of sea ice leaves coasts more vulnerable to waves, storm surges, and erosion. When combined with the thawing of permafrost along the coast, this loss of sea ice poses a serious threat to coastal Alaska Native villages. Furthermore, loss of sea ice alters the habitat and accessibility of many of the marine mammals that Alaska Natives depend upon for subsistence. As the ice melts or moves away early, walruses, seals, and polar bears move with it, taking them too far away to be hunted.

Although Alaska is by far the largest state, it is one of the least populated, with about 630,000 people—of which 19 percent, or about 120,000, are Alaska Natives. Over half of the state’s population is concentrated in the Kenai Peninsula, Anchorage, and the Matanuska-Susitna area in south central Alaska. Many Alaska Natives, however, live in places long inhabited by their ancestors in rural areas in western, northern, and interior Alaska. Alaska Natives are generally divided into six major groupings: Unangan (Aleuts), Alutiiq (Pacific Eskimos), Iñupiat (Northern Eskimos), Yup’ik (Bering Sea Eskimos), Athabascan (Interior Indians), and Tlingit and Haida (Southeast Coastal Indians). For generations, these Alaska Natives have used the surrounding waters and land to hunt, fish, and gather wild plants for food. (See fig. 3.) These subsistence activities are intricately woven into the fabric of their lives. Subsistence activities require a complex network of social relationships within the Native community. For example, there is a division of labor among those who harvest, those who prepare, and those who distribute the food. These activities establish and promote the basic values of Alaska Native culture—generosity, respect for the knowledge and guidance of elders, self-esteem for the successful hunter(s), and community cooperation—and they form the foundation for continuity between generations.
As their environment changes along with the climate, however, Alaska Natives have few adaptive strategies, and their traditional way of life is becoming increasingly vulnerable.

A typical coastal or river Native village has a population of a couple of hundred people and generally contains only basic infrastructure—homes, a school, a village store, a health clinic, a washateria, a church, city or tribal offices, and a post office. The school is usually the largest building in the community. Since many villages do not have running water, the washateria plays an important role; it not only contains laundry facilities, but also shower and toilet facilities—which residents must pay a fee to use. Many village homes do not have sanitation facilities and rely on honey buckets—5-gallon buckets that serve as a toilet—or a flush and haul system. Most of the villages that are not accessible by roads contain an airport runway that provides the only year-round access to the community. The runways are generally adjacent to the village or a short distance away. Other infrastructure in a village may consist of a bulk fuel tank farm, a power plant, a water treatment facility, a water tank, meat drying racks, a village sewage lagoon or dump site, and, for some villages, commercial structures such as tanneries. Most river villages also have a barge landing area where goods are delivered to the community during the ice-free period.

The government structure of Native villages may contain several distinct entities that perform administrative tasks, including making decisions about how to address flooding and erosion. Alaska’s constitution and state laws allow for several types of regional and local government units, such as boroughs—units of government that are similar to the counties found in many other states. About a third of Alaska is made up of 16 organized boroughs.
The remaining two-thirds of the state is sparsely populated land that is considered a single “unorganized borough.” At the village level, a federally recognized tribal government may coexist with a city government, which may also be under a borough government. Alaska has more than 200 federally recognized tribal governments. In addition to these various government entities, federal agencies that provide assistance for flooding and erosion also work with local and regional Native corporations. Federal law directed the establishment of these corporations under the laws of the state of Alaska, and the corporations are organized as for-profit entities that also have nonprofit arms. In December 1971, Congress enacted the Alaska Native Claims Settlement Act (ANCSA), which directed the establishment of 12 for-profit regional corporations—one for each geographic region comprised of Natives having a common heritage and sharing common interests—and over 200 village corporations. These corporations would become the vehicle for distributing land and monetary benefits to Alaska Natives to provide a fair and just settlement of aboriginal land claims in Alaska. The act permitted the conveyance of about 44 million acres of land to Alaska Native corporations, along with cash payments of almost $1 billion. (See appendix II for a list of the regional corporations and the corresponding nonprofit arms that provide social services to the villages and also help them address problems, including flooding and erosion.) Federal, state, and local government agencies share responsibility for controlling and responding to flooding and erosion. The U.S. Army Corps of Engineers has responsibility for planning and constructing streambank and shoreline erosion protection and flood control structures under a specific set of requirements. The Department of Agriculture’s Natural Resources Conservation Service (NRCS) is responsible for protecting small watersheds. 
A number of other federal agencies, such as the Departments of Transportation and Housing and Urban Development, also have responsibility for protecting certain infrastructure from flooding and erosion. On the state side, the Division of Emergency Services responds to state disaster declarations dealing with flooding and erosion when local communities request assistance. The Alaska Department of Community and Economic Development helps communities reduce losses and damage from flooding and erosion. The Alaska Department of Transportation and Public Facilities funds work to protect runways from erosion. Local governments such as the North Slope Borough have also funded erosion control and flood protection projects. In addition to government agencies, the Denali Commission, created by Congress in 1998, while not directly responsible for responding to flooding and erosion, is charged with addressing crucial needs of rural Alaska communities, particularly isolated Alaska Native villages. The membership of the commission consists of federal and state cochairs and a five-member panel of statewide organization presidents. The mission of the commission is to partner with tribal, federal, state, and local governments to improve the effectiveness and efficiency of government services; to build and ensure the operation and maintenance of Alaska’s basic infrastructure; and to develop a well-trained labor force. The commission funds infrastructure projects throughout the state, ranging from health clinics to bulk fuel tanks. The commission has also funded the construction of new infrastructure when flooding and erosion threatened the existing structures. According to federal and Alaska state officials that we consulted, most of the 213 Alaska Native villages are subject to flooding and erosion. However, it is difficult to assess the severity of the problem because quantifiable data on flooding and erosion are not available for remote locations. 
Villages located on the coast or along rivers are subject to both annual and episodic flooding and erosion. River villages are also susceptible to flooding and erosion caused by ice jams, snow and glacial melts, rising sea levels, and heavy rainfall. Flooding and erosion affect 184 out of 213, or 86.4 percent, of Alaska Native villages to some extent, according to studies and information provided to us by federal and Alaska state officials. The 184 affected villages consist of coastal and river villages throughout the state. Figure 4 shows the location of these villages, and table 1 shows the number of affected villages by ANCSA region. All 184 Native villages affected by flooding and erosion are listed in appendix III. Villages on the coast are affected by flooding and erosion from the sea. For example, when these villages are not protected by sea ice, they are at risk of flooding and erosion from storm surges. Lack of sea ice also increases the expanse of open water, which can generate larger waves and storm surges. In the case of Kivalina, the community has experienced erosion from sea storms, particularly in late summer or fall. These storms can result in a sea level rise of 10 feet or more, and when combined with high tide, the storm surge becomes even greater and can be accompanied by waves that contain ice. In addition to coastal villages, communities in low-lying areas along riverbanks or in river deltas are susceptible to flooding and erosion caused by ice jams, snow and glacial melts, rising sea levels, and heavy rainfall. For example, the village of Aniak, on the Kuskokwim River in southwestern Alaska, experiences flooding every 3 or 4 years. Ice jams that form on the river during the spring breakup cause the most frequent and severe floods in Aniak, sometimes accompanied by streambank erosion from the moving ice. (See fig. 5.) Flooding and erosion are long-standing problems in Alaska. 
For example, these problems have been well documented in Bethel, Unalakleet, and Shishmaref dating back to the 1930s, 1940s, and 1950s, respectively. The state has made several efforts to identify communities affected by flooding and erosion over the past 30 years. In 1982, a state contractor developed a list of Alaska communities affected by flooding and erosion. This list identified 169 of the 213 Alaska Native villages, virtually the same villages identified by federal and state officials that we consulted in 2003. In addition, the state appointed an Erosion Control Task Force in 1983 to investigate and inventory potential erosion problems and to prioritize erosion sites by severity and need. In its January 1984 final report, the task force identified a total of 30 priority communities with erosion problems. Of these 30 communities, 28 are Alaska Native villages. Federal and state officials that we spoke with in 2003 also identified almost all of the Native communities in the 1984 report as villages needing assistance. While flooding and erosion are long-standing problems that have been documented in Alaska for decades, various studies and reports indicate that coastal villages in Alaska are becoming more susceptible. This increasing susceptibility is due in part to rising temperatures that cause protective shore ice to form later in the year, leaving the villages vulnerable to storms. According to the Alaska Climate Research Center, mean annual temperatures rose over the period from 1971 to 2000, although changes varied from one climate zone to another and were dependent on the temperature station selected. For example, Barrow experienced an average temperature increase of 4.16 degrees Fahrenheit for the 30-year period from 1971 to 2000, while Bethel experienced an increase of 3.08 degrees Fahrenheit for the same time period. 
Other studies have reported extensive melting of glaciers, thawing of permafrost, and reduction of sea ice that may also be contributing to the flooding and erosion problems of coastal villages in recent years. According to a 1999 report for the U.S. Global Change Research Program, glaciers in the arctic and subarctic regions have generally receded, with decreases in ice thickness of approximately 33 feet over the last 40 years. In addition, according to a 1997 report of the Intergovernmental Panel on Climate Change, much of the arctic permafrost is close to thawing, making it an area that is sensitive to small changes in temperature. The 1999 report for the U.S. Global Change Research Program also states that both the extent and thickness of sea ice in the arctic have decreased substantially in recent decades, with thickness decreasing by more than 4 feet (from 10 feet to 6 feet thick). The report also notes that loss of sea ice along Alaska’s coast has increased both coastal erosion and vulnerability to storm surges. With less ice, storm surges have become more severe because larger open water areas can generate bigger waves. While most Alaska Native villages are affected to some extent by flooding and erosion, quantifiable data are not available to fully assess the severity of the problem. Federal and Alaska state agency officials could agree on which three or four villages experience the most flooding and erosion, but they could not rank flooding and erosion in the remaining villages by high, medium, or low severity. These agency officials said that determining the extent to which villages have been affected by flooding and erosion is difficult because Alaska has significant data gaps. These gaps occur because remote locations lack monitoring equipment. The officials noted that about 400 to 500 gauging stations would have to be added in Alaska to attain the same level of gauging as in the Pacific Northwest. 
In addition, the amount and accuracy of floodplain information in Alaska vary widely from place to place. Detailed floodplain studies have been completed for many of the larger communities and for the more populated areas along some rivers. For example, the Federal Emergency Management Agency (FEMA) has published Flood Insurance Rate Maps that show floodplain boundaries and flood elevations for communities that participate in the National Flood Insurance Program. However, because only a handful of Alaska Native villages participate in the program, many of the villages have not had their 100-year floodplain identified by FEMA. In addition, little or no documented floodplain information exists for most of the smaller communities. Moreover, no consolidated record has been maintained of significant floods in Alaska Native villages. The Corps’ Flood Plain Management Services has an ongoing program to identify the 100-year flood elevation, or the flood of record, of flood-prone communities through data research and field investigations. State of Alaska officials also noted that there is a lack of standards and terms for measuring erosion. Erosion zone guidance and federal (or state) standards by which to judge erosion risks are needed. They noted that while national standards for designing, developing, and siting for the “100-year flood” event exist and are quantifiable and measurable, a similar standard for erosion, such as a distance measurement, needs to be established. The key programs that construct projects to prevent and control flooding and erosion are administered by the Corps and NRCS. However, Alaska Native villages have difficulty qualifying for assistance under some of these programs—largely because of program requirements that the economic costs of the project not exceed its economic benefits. 
In addition to the Corps and NRCS, several other federal and state agencies have programs to provide assistance for specific consequences of flooding and erosion, such as programs to replace homes or to rebuild or repair roads and airstrips. The Continuing Authorities Program, administered by the Corps, and the Watershed Protection and Flood Prevention Program, administered by NRCS, are the principal programs available to prevent flooding and control erosion. Table 2 below lists and describes the five authorities under the Corps’ Continuing Authorities Program that address flooding and erosion, while table 3 identifies the main NRCS programs that provide assistance for flooding and erosion. In addition to the Corps’ Continuing Authorities Program, other Corps authorities that may address problems related to flooding and erosion include the following:

Section 22 of the Water Resources Development Act of 1974, which provides authority for the Corps to assist states in the preparation of comprehensive plans for the development, utilization, and conservation of water and related resources of drainage basins.

Section 206 of the Flood Control Act of 1960, which allows the Corps’ Flood Plain Management Services Program to provide states and local governments technical services and planning guidance that are needed to support effective flood plain management.

In addition to these programs, several other federal programs can assist Alaska Native villages in responding to the consequences of flooding by funding tasks such as moving homes, repairing roads, or rebuilding airport runways. Table 4 lists these programs. Small and remote Alaska villages often fail to qualify for assistance under the Corps’ Continuing Authorities Program because they do not meet the program’s criteria. In particular, according to the Corps’ guidelines for evaluating water resource projects, the Corps generally cannot undertake a project whose costs exceed its expected benefits. 
With few exceptions, Alaska Native villages’ requests for the Corps’ assistance are denied because of the Corps’ determination that project costs outweigh the expected benefits. Alaska Native villages have difficulty meeting the cost/benefit requirement because many of these villages are not developed to the extent that the value of their infrastructure is high enough to equal the cost of a proposed erosion or flood control project. For example, the Alaska Native village of Kongiganak, with a population of about 360 people, experiences severe erosion from the Kongnignanohk River. The Corps decided not to fund an erosion project because the cost of the project exceeds the expected benefits and because many of the threatened structures are private property, which is not eligible for protection under a Section 14 Emergency Streambank Protection project. One additional factor that makes it difficult for Alaska Native villages to qualify for the Corps’ program is that the cost of construction is high in remote villages—largely because labor, equipment, and materials have to be brought in from distant locations. The high cost of construction makes it even more difficult for villages to meet the Corps’ cost/benefit requirements. Even villages that do meet the Corps’ cost/benefit criteria may still fail to receive assistance if they cannot provide or find sufficient funding to meet the cost-share requirements for the project. By law, the Corps generally requires local communities to fund between 25 and 50 percent of project planning and construction costs for flood prevention and erosion control projects. According to village leaders we spoke to, under these cost-share requirements they may need to pay hundreds of thousands of dollars or more to fund their portion of a project—funding that many of them do not have. 
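The scale of these local obligations can be illustrated with simple arithmetic. The sketch below is illustrative only: the 25 to 50 percent range reflects the Corps cost-share requirement described above, but the project cost used is a hypothetical figure, not one drawn from this report.

```python
# Illustrative sketch of the Corps' local cost-share requirement.
# The 25-50 percent range is from the report; the project cost below
# is a hypothetical figure chosen only for illustration.

def local_share_range(project_cost, low=0.25, high=0.50):
    """Return the (minimum, maximum) local funding a community must provide."""
    return (project_cost * low, project_cost * high)

# A hypothetical $2 million erosion control project:
lo, hi = local_share_range(2_000_000)
print(f"Local share: ${lo:,.0f} to ${hi:,.0f}")
# Local share: $500,000 to $1,000,000
```

Even at the low end of the range, a small village of a few hundred residents would need to raise several hundred thousand dollars, which is consistent with the village leaders' statements above.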
As shown in table 3, NRCS has three key programs that can provide assistance to villages to protect against flooding and erosion—two of which are less difficult to qualify for than the Corps program. The NRCS programs are the Watershed Protection and Flood Prevention Program, the Emergency Watershed Protection Program, and the Conservation Technical Assistance Program. The purpose of the Watershed Protection and Flood Prevention Program is to assist federal, state, and local agencies and tribal governments in protecting and restoring watersheds from damage caused by erosion and flooding. Qualifying for funding under the NRCS Watershed Protection and Flood Prevention Program requires a cost/benefit analysis similar to that of the Corps. In fact, according to an NRCS headquarters official, there should be little if any difference in the standards for cost/benefit analyses between the Corps and NRCS programs. As a result, few projects for Alaska Native villages have been funded under this program. In contrast, some villages have been able to qualify for assistance from the Emergency Watershed Protection Program, because for this program NRCS’s policy is different and allows consideration of additional factors in the cost/benefit analysis. Specifically, NRCS considers social or environmental factors when calculating the potential benefits of a proposed project, and protecting the subsistence lifestyle of an Alaska Native village can be included as one of these factors. In addition, NRCS headquarters officials have instructed field staff to “take a second look” at proposed projects in which the potential benefits are nearly equal to the project costs. In some cases, according to NRCS’s National Emergency Watershed Protection Program Leader, there may be unusual circumstances that might make the project worthwhile even if the costs slightly outweigh the benefits. 
One example provided by this official was for projects that involved protecting Native American burial grounds. Furthermore, while NRCS’s program encourages cost sharing by local communities, this requirement can be waived when the local community cannot afford to pay. Such was the case in Unalakleet, where the community had petitioned federal and state agencies to fund its local cost-share of an erosion protection project and was not successful. Eventually, NRCS waived the cost-share requirement for the village and covered the total cost of the project itself. (See fig. 6.) Another NRCS official in Alaska estimated that about 25 villages have requested assistance under this program during the last 5 years; of these 25 villages, 6 received some assistance from NRCS, and 19 were turned down—mostly because there were either no feasible solutions or because the problems they wished to address were recurring ones. One factor that limits the assistance provided by the program is that it is intended for smaller scale projects than those that might be constructed by the Corps. Moreover, because this program is designed to respond quickly to emergencies, it is limited to addressing one-time events—such as repairing damage caused by a large storm—rather than addressing recurring flooding and erosion. Unlike the other NRCS programs and the Corps program, NRCS’s Conservation Technical Assistance Program does not require any cost/benefit analysis to qualify for assistance. An NRCS official in Alaska estimated that during the last 2 years, NRCS provided assistance to about 25 villages under this program. The program is designed to provide technical assistance to communities and individuals that request help to solve natural resource problems, improve the health of the watershed, reduce erosion, improve air and water quality, or maintain or improve wetlands and habitat. 
The technical assistance provided can range from advice or consultation services to developing planning, design, and/or engineering documents. The program does not fund the construction or implementation of a project. In addition to the federal programs, the state of Alaska has programs to help address or respond to flooding and erosion problems of Alaska Native villages. These include the following:

The Alaska Department of Transportation and Public Facilities, which funds work through its maintenance appropriations to protect village airstrips from erosion.

The Alaska Department of Community and Economic Development, which has a floodplain management program that provides coordination and technical assistance to communities to help reduce public- and private-sector losses and damage from flooding and erosion.

The Alaska Department of Environmental Conservation, which has a Village Safe Water Program that can pay to relocate water or sewage treatment facilities that are threatened by erosion.

The Alaska Housing Finance Corporation, which has a program to provide loans or grants to persons in imminent danger of losing their homes.

The Alaska Division of Emergency Services, which coordinates the response to emergencies resulting from flooding and erosion, as requested by local communities. Its mission is to lead, coordinate, and support the emergency management system, in order to protect lives and prevent the loss of property from all types of hazards. With authorization from the governor, the state Disaster Relief Fund can make up to $1 million (without legislative approval) available to communities recovering from a state declared disaster. More funding may be available, with legislative approval, for presidential disaster declarations, for which the state is obligated to pay a 25 percent funding match.

In addition to these programs, the state legislature, through its appropriations, has funded erosion control structures including bulkheads and sea walls. 
According to state documents, between 1972 and 1991 the state spent over $40 million for erosion control statewide. Four of the nine villages we reviewed are in imminent danger from flooding and erosion and are making plans to relocate, while the remaining five are taking other actions. (See fig. 7.) Of the four villages relocating, Kivalina, Newtok, and Shishmaref are working with relevant federal agencies to locate suitable new sites, while Koyukuk is just beginning the relocation planning process. The cost of relocating these villages is expected to be high, although estimates currently exist only for Kivalina. Of the five villages not planning to relocate, Barrow, Kaktovik, Point Hope, and Unalakleet each have studies under way that target specific infrastructure that is vulnerable to flooding and erosion. The fifth village, Bethel, is repairing and extending an existing seawall to protect the village’s dock from river erosion. Table 5 summarizes the status of the nine villages’ efforts to respond to their specific flooding and erosion problems. During our review of the nine villages, we found instances where federal agencies had invested in infrastructure projects without knowledge of the villages’ plans to relocate. Four villages—Kivalina, Koyukuk, Newtok, and Shishmaref—are in imminent danger from flooding and erosion and are planning to relocate. (See table 5.) Kivalina and Shishmaref are located on barrier islands that are continuously shrinking due to chronic erosion. In Newtok, the Ninglick River is making its way ever closer to the village, with an average erosion rate of 90 feet per year, and is expected to erode the land under homes, schools, and businesses within 5 years. The fourth village, Koyukuk, is located near the confluence of the Yukon and Koyukuk Rivers and experiences chronic annual flooding. The village of Kivalina lies on a barrier island that is both overcrowded and shrinking from chronic erosion. 
Surrounded by the Chukchi Sea and the Kivalina Lagoon, the village has no further room for expansion. (See fig. 8.) A 1994 study by a private contractor found more than one instance of 16 people living together in a 900-square-foot home. Overcrowding and poor sanitation have led to an extremely high incidence of communicable diseases and other health problems in Kivalina. Chronic erosion on the lagoon side of the island and along its southeastern tip where the lagoon empties into the sea has further exacerbated overcrowding. Several homes along this side are currently in danger of falling into the lagoon. On the seaside of the island, fall storm surges create annual coastal flooding and beach erosion. Portions of the island have been breached before, and it is believed that the right combination of storm events could flood the entire village at any time. In 1990, the Corps placed sandbags around the southern tip of the island in an attempt to stem the erosion, but that proved to be only a temporary solution. Most recent efforts to respond to flooding and erosion have involved studying the feasibility of possible relocation sites. The villagers would like a site that is near their current location with access to the ocean so that they can continue to pursue their subsistence lifestyle. Much of the surrounding area, however, is low-lying wetlands or tundra. One of the main obstacles for selecting a site has been the requirement of a gravel pad for some of the sites under consideration. In those cases, several feet of gravel must be spread over the entire site, both to elevate the new village above the floodplain and to protect the fragile permafrost. However, gravel is not easily accessible and would have to be barged in. Similarly, the harsh, remote terrain and limited site access drive up other costs for materials and machinery. 
The Corps has estimated that the cost to relocate Kivalina could range from $100 million for design and construction of infrastructure (including a gravel pad) at one site to as much as $400 million for just the cost of building a gravel pad at another site. As a result, the community is now considering whether to ask the Corps to evaluate completely new sites that would not require a gravel pad. Remaining on the island, however, is no longer a viable option for the community. Like Kivalina, the village of Shishmaref is located on a barrier island in the Chukchi Sea and experiences chronic erosion. During severe fall storms, as occurred in 1973, 1997, 2001, and 2002, the village has lost on average between 20 and 50 feet of land and up to 125 feet at one time. This loss is considerable for an island that is no wider than one-quarter mile (1,320 feet). After a severe storm in October 2002, stress cracks along the western seaside bluffs became evident. These cracks were 5 to 10 feet from the edge of the banks and indicated that the permafrost that holds the island together had been undermined by the storm. As the permafrost melts, the banks cave in. (See fig. 9.) Several homes located along these banks had to be relocated to prevent them from falling into the sea. After the 1997 fall storm, which was declared a state disaster, FEMA and state matching funds were used to help move 14 homes along the coastal bluff to another part of the village, and in 2002, the Bering Straits Housing Authority relocated an additional 5 homes out of harm’s way. Although the Corps had informed the villagers of Shishmaref in 1953 that relocation would be a cheaper alternative to building a seawall to protect the bluffs, the community did not vote to relocate until 1973, when it experienced two unusually severe fall storms that caused widespread damage and erosion. However, the site that the community selected proved to be unsuitable because it had an extensive layer of permafrost. 
Furthermore, other government agencies told the villagers that they would not receive funding for their new school or a much-needed new runway if they decided to relocate. According to Corps documents, the community reversed its decision and voted in August 1974 to stay on the island. The new school was completed in 1977, and a few years later a new runway was also built. Since the 1970s, the village has attempted a variety of erosion protection measures totaling more than $5 million. These projects have included various sandbag and gabion seawalls (wire cages, or baskets, filled with rocks) and even a concrete block mat. Each project has required numerous repairs and has ultimately failed to provide long-term protection. In October 2001, the governor of Alaska issued an administrative order for an $85,000 protective sandbag wall that was intended to last only one storm—and it did just that. In July 2002, the community again voted to relocate, and it is currently working with NRCS to select an appropriate site. Once a site is selected, the relocation process itself will take a number of years to complete. In the meantime, stopgap erosion protection measures and other federal and state services continue to be necessary to safeguard the community. For this reason, the community is working with Kawerak, a nonprofit Native corporation, to build a 500-foot seawall at an estimated cost of $1 million along the most affected part of the seaside bluff. The village is also seeking the Corps’ assistance to extend the wall farther to protect the school and other public buildings. In addition, the community is applying for assistance through the Alaska Army National Guard’s Innovative Readiness Training Program, in which guard units gain training and experience while providing medical, transportation, and engineering services to rural villages. The village of Newtok, located in the Yukon-Kuskokwim Delta on the Ninglick River, suffers from chronic erosion along its riverbank. 
Between 1954 and 2001 the village lost more than 4,000 feet of land to erosion. The current erosion rate has been estimated at 90 feet per year. At this rate, the Corps believes that the land under village residences and infrastructure will erode within 5 years. Among its various attempts to combat erosion, the village placed an experimental $750,000 sandbag wall along the riverbank in 1987. The wall, however, failed to slow the rate of erosion. The community recently negotiated a land exchange with the U.S. Fish and Wildlife Service for a new village site. Legislation authorizing the conveyance to Newtok of both the surface and subsurface estate of specified federal lands on nearby Nelson Island in exchange for land the village currently owns or would receive title to under ANCSA was signed into law in November 2003. In anticipation of a move, the village is studying the soils and geology of the proposed relocation site to determine its suitability. The fourth village planning to relocate is Koyukuk, which is located entirely in a floodplain near the confluence of the Yukon and Koyukuk rivers. It experiences severe flooding, mostly as a result of ice jams that occur after the spring breakup of river ice. (See fig. 10.) Water that accumulates behind the ice jams repeatedly floods homes and public structures, including the school and runway. The flooding is episodic, but villagers prepare for it every year in the spring by placing their belongings in high places and putting their vehicles on floats. The village has been evacuated more than once. In July 2003, with funding assistance from FEMA, the Tanana Chiefs Conference, which is a nonprofit regional corporation, developed a flood mitigation plan for Koyukuk that includes both evacuation and relocation strategies. The community is in the process of assessing prospective relocation areas to find an appropriate site. 
In the meantime, the FAA has awarded a grant to the state to both raise the grade of and lengthen Koyukuk’s runway at a cost of $10.3 million. The remaining five villages, while not in imminent danger, do experience serious flooding and erosion and are undertaking various infrastructure-specific activities to resolve these problems. Kaktovik is studying how best to address flooding of its airport runway. Point Hope is studying alternatives for an emergency evacuation road in the event of flooding. Barrow has a study under way for dealing with beachfront erosion that threatens the village’s utility corridor. Unalakleet is beginning a study to respond to erosion problems at its harbor and improve its navigational access. Finally, Bethel is repairing and extending an existing seawall to protect the village’s dock from river erosion. The village of Kaktovik, located on Barter Island at the northern edge of the Arctic National Wildlife Refuge, experiences flooding of its airport runway. The eastern end of the runway is approximately 1 to 2 feet above mean sea level, while the western end is approximately 7 to 8 feet above mean sea level. As a result of this low elevation, the runway usually floods every fall and is inoperative for 2 to 4 days, according to Kaktovik’s mayor. In 2000, the North Slope Borough, which operates the airport, contracted with the Arctic Slope Consulting Group, Inc., to conduct a flood study at the airport. The study presented a preliminary cost estimate of $11.3 million for protecting the runway from damage by storm events resulting in 100-year flood conditions. Recently, the North Slope Borough and FAA hired an engineering company to prepare an Airport Master Plan that will provide alternatives for upgrading the existing runway or building a new airport, either on Barter Island (estimated at $15 to $20 million) or on the mainland (estimated at $25 to $35 million). 
FAA will support the least-cost alternative and will fund 93.75 percent of the project, while the North Slope Borough will fund the remaining 6.25 percent. The study should be completed in 2004. The village of Point Hope, located on a spit of land that is one of the longest continually inhabited areas in northwest Alaska (with settlements over 2,500 years old), moved to its current location in the 1970s because of flooding and erosion problems at its original site. However, flooding and erosion remain a concern for the community at its new location, prompting efforts to build an evacuation road and relocate its runway. The North Slope Borough has funded a Project Analysis Report that assesses three construction options for an emergency evacuation road: reconstructing an existing road, extending that road to the mainland, or constructing a new road altogether. The road would not only facilitate emergency evacuation in the event of a flood, but would also provide a transportation route to a relocated runway. The village’s current runway, which is a mile west of the village and extends to the Chukchi Sea, floods during fall storms and is at risk of erosion. According to village representatives, the runway was inoperable for 5 days last year because of flooding. (See fig. 11.) One end of the runway is currently about 80 feet from the ocean, and village officials estimate that between 5 and 8 feet of land are lost to erosion annually. They noted, however, that a single storm could take as much as 20 feet of land. The Alaska Native village of Barrow is grappling with ways to address beach erosion and flooding. Much of the community’s infrastructure is at risk from storm damage, shoreline erosion, and flooding. About $500 million of Barrow’s infrastructure is located in the floodplain. In particular, the road that separates the sewage lagoon and an old landfill from the sea is at risk, as is the village’s utility corridor. 
This underground corridor contains sewage, water, and power lines, as well as communication facilities for the community. Beach erosion threatens over 1 mile of the corridor. According to village and North Slope Borough officials, the Borough coordinates erosion projects for the village and spends about $500,000 each time there is a flood. The Corps has recently begun a feasibility study for a storm damage reduction project along Barrow’s beach. The Alaska Native village of Unalakleet experiences both coastal and river flooding, which, combined with shoreline erosion, has created an access problem at the harbor. Eroded land has piled up at the harbor mouth, creating six distinct sandbars. These sandbars pose a serious problem for barge passage; barges and fishing boats must wait for high tide to reach the harbor, delaying the delivery of bulk goods, fuel, and other items, which increases the costs of the cargo and moorage. The sandbars also pose a risk to those whose boats get stuck at low tide and who must simply sit and wait for a high tide. Unalakleet serves as a subregional hub for several nearby villages that rely on the harbor and fish processing plant for conducting their commercial fishing businesses. The village was recently able to raise $400,000 from the Norton Sound Economic Development Corporation and $400,000 from the Alaska Department of Transportation and Public Facilities for the local share of a Corps study on improving navigational access to its harbor. Bethel, the regional village hub of the Yukon-Kuskokwim Delta, experiences periodic flooding, mostly because of ice jams during the spring breakup of the Kuskokwim River. The ice also causes severe erosion by scouring the riverbanks. The spring ice breakup in 1995 caused such severe erosion that the governor of Alaska declared a state of emergency—ice scour created a cove 350 feet long and 200 feet inland, endangering several structures and severely undercutting the city dock. 
The village’s main port is the only one on the western Alaska coast for oceangoing ships and serves as the supply center for over 50 villages in the Yukon-Kuskokwim Delta. In response to the 1995 emergency, the village placed rock along 600 linear feet of the riverbank and dock. This was the beginning of an 8,000-foot bank stabilization seawall that cost $24 million. Currently, the Corps has a project under way to repair this seawall by placing more rock and by replacing the steel tieback system and placing steel wale on the inland side of the pipe piles. The project will also extend the seawall 1,200 feet so that it protects the entrance to Bethel’s small boat harbor. The initial cost estimate for this project in 2001 was over $4.7 million, with average annual costs of $374,000. During our review of these villages, we found instances where federal agencies invested in infrastructure projects without knowledge of the villages’ plans to relocate. For example, the Denali Commission and the Department of Housing and Urban Development were unaware of Newtok’s relocation plans when they decided to jointly fund a new health clinic in the village for $1.1 million (using fiscal year 2002 and 2003 funds). During our site visit to Newtok, we observed that the new clinic’s building materials had already been delivered to the dock. Once it is constructed and the village is ready to relocate, moving a building the size of the new clinic across the river may be difficult and costly. Neither the Denali Commission nor the Department of Housing and Urban Development realized that the plans for Newtok’s relocation were moving forward, even though legislation for completing a land exchange deal with the U.S. Fish and Wildlife Service was first introduced in March 2002. 
Similarly, in Koyukuk, the FAA was initially unaware of the village’s relocation plans when it solicited bids for a $10.3 million state project to increase the grade of and lengthen the village’s existing runway, according to FAA officials. When we further discussed this with FAA officials, however, they noted that it is the state of Alaska that prioritizes and selects the transportation projects that receive FAA grants. According to these FAA officials, who awarded the grant for Koyukuk’s runway, state transportation officials were aware of the village’s decision to relocate. Although we recognize that development and maintenance of critical infrastructure, such as health clinics and runways, are necessary as villages find ways to address flooding and erosion, we question whether limited federal funds for these projects are being expended in the most effective and efficient manner possible. The Denali Commission, cognizant of the stated purpose of its authorizing act to deliver services in a cost-effective manner, has developed a draft investment policy intended to guide the process of project selection and ensure prudent investment of federal funds. The draft policy provides guidance for designers to tailor facilities based on six primary investment indicators: size of community and population trends, imminent environmental threats, proximity/access to existing services and/or facilities, per capita investment benchmarks, unit construction costs, and economic potential. These indicators provide the Denali Commission and its partners with an investment framework that will guide selection and funding for sustainable projects. Flooding and erosion issues fall under the “imminent environmental threats” indicator. The commission has applied this draft policy to Shishmaref, which requested a new clinic at its current location. 
Given that the village is in the process of relocating, the commission awarded $150,000 to repair the existing clinic in Shishmaref in lieu of building a new clinic. In addition, the Denali Commission recognizes that systematic planning and coordination on a local, regional, and statewide basis are necessary to achieve the most effective results from investments in infrastructure, economic development, and training, and has signed a memorandum of understanding with 31 federal and state agencies to achieve this goal. This memorandum of understanding could serve as a vehicle by which other federal agencies would follow the lead of the Denali Commission regarding decisions to invest in infrastructure for communities threatened by flooding and erosion. The unique circumstances of Alaska Native villages and their inability to qualify for assistance under a variety of federal flooding and erosion programs may require special measures to ensure that the villages receive certain needed services. Alaska Native villages, which are predominantly remote and small, often face barriers not commonly found in other areas of the United States, such as harsh climate, limited access and infrastructure, high fuel and shipping prices, short construction seasons, and ice-rich permafrost soils. In addition, many of the federal programs to prevent and control flooding and erosion are not a good fit for Alaska Native villages because of the requirement that the economic costs of a project not exceed its economic benefits. Federal and Alaska state officials and Alaska Native village representatives we spoke with identified several alternatives for Congress that could help mitigate the barriers that villages face in obtaining federal services. 
These alternatives include (1) expanding the role of the Denali Commission to include responsibilities for managing a flooding and erosion assistance program, (2) directing the Corps and NRCS to include social and environmental factors in their cost/benefit analyses for projects requested by Alaska Native villages, and (3) waiving the federal cost-sharing requirement for flooding and erosion projects for Alaska Native villages. In addition, GAO identified a fourth alternative—authorizing the bundling of funds from various agencies to address flooding and erosion problems in these villages. Each of these alternatives has the potential to increase the level of federal services provided to Alaska Native villages and can be considered individually or in any combination. However, adopting some of these alternatives will require consideration of a number of important factors, including the potential to set a precedent for other communities and programs as well as resulting budgetary implications. While we did not determine the cost or the national policy implications associated with any of the alternatives, these are important considerations when determining appropriate federal action. Congress may want to consider expanding the role of the Denali Commission by directing that federal funding for flooding and erosion studies and projects in Alaska Native villages go through the commission. Currently, the Denali Commission does not have explicit responsibility for flooding and erosion programs. This alternative would authorize the Denali Commission to establish a program that conducts studies and constructs projects to mitigate flooding and control erosion in Alaska Native villages that would otherwise not qualify under Corps and NRCS flooding and erosion programs. 
The commission could set priorities for its studies and projects and respond to the problems of those villages most in need, and it could enter into a memorandum of agreement with the Corps or other related agencies to carry out these studies and projects. One of the factors to consider in adopting this alternative is that additional funding may be required. This alternative is similar to the current proposal in S. 295 that would expand the role of the Denali Commission to include a transportation function. S. 295 would authorize the commission to construct marine connections (such as connecting small docks, boat ramps, and port facilities) and other transportation access infrastructure for communities that would otherwise lack access to the National Highway System. Under the bill, the commission would designate the location of the transportation project and set priorities for constructing segments of the system. A second alternative is for Congress to direct the Corps and NRCS to include social and environmental factors in their cost/benefit analyses for flooding and erosion projects for Alaska Native villages. Under this alternative, the Corps would not only consider social and environmental factors but would also incorporate them into its cost/benefit analysis. Similarly, NRCS would incorporate social and environmental factors into the cost/benefit analysis for its Watershed Protection and Flood Prevention Program. To capture these factors even when they cannot be easily quantified, the Corps and NRCS may have to consider them explicitly. Several Alaska Native entities have raised this issue with the Corps and the Alaska congressional delegation. For example, the Native village of Unalakleet has led efforts to have the Corps revise its cost/benefit analysis. 
As part of these efforts, the village has worked with state and federal agencies; the Alaska Federation of Natives, which represents Native corporations statewide; and the Alaska congressional delegation. One implication of adopting this alternative for Alaska Native villages may be that it could set a precedent for flooding and erosion control projects in other communities. This alternative is intended to benefit small and remote villages that often fail to qualify for assistance because the cost of the study or project exceeds the benefits. The number of villages that may be able to qualify for a study or project under this alternative will depend on the extent to which the Corps and NRCS incorporate social and environmental factors into their calculations. However, if more villages qualify for projects under this approach, the increase could have an impact on the amount of funds and resources that the Corps and NRCS have available for these efforts. Congress is currently considering a bill that would direct the Corps to approve certain projects that do not necessarily meet the cost/benefit requirement. In H.R. 2557, the Corps would be authorized to provide assistance to communities with remote and subsistence harbors that meet certain criteria. 
In particular, for studies of harbor and navigational improvements, the Secretary of the Army could recommend a project without the need to demonstrate that it is justified solely by net national economic development benefits, if the Secretary determines that, among other considerations, (1) the community to be served by the project is at least 70 miles from the nearest surface-accessible commercial port and has no direct rail or highway link to another community served by a surface- accessible port or harbor or is in Puerto Rico, Guam, Northern Mariana Islands, or American Samoa; (2) the harbor is economically critical such that over 80 percent of the goods transported through the harbor would be consumed within the community; and (3) the long-term viability of the community would be threatened without the harbor and navigation improvement. These criteria would apply to many remote and subsistence harbors in Alaska Native villages. A third alternative is to waive the federal cost-sharing requirement for flooding and erosion projects for Alaska Native villages. As required by law, the Corps currently imposes a cost-share of between 25 and 50 percent of project planning and construction costs. These sums, which are generally in the hundreds of thousands of dollars, are difficult for villages to generate. This difficulty has been one of the more common criticisms of the Corps’ program. For example, the village of Unalakleet had difficulty obtaining funding for its local cost-share requirement for a project. Adopting this alternative for Alaska Native villages would require an assessment of several factors, including setting a precedent for other flooding and erosion control projects in other communities as well as budgetary implications. In H.R. 2557, Congress is considering waiving the cost-sharing provisions for studies and projects in certain areas. 
In this bill, the Secretary of the Army would be required to waive up to $500,000 of the local cost-sharing requirements for all studies and projects in several locations, including land in the state of Alaska conveyed to Alaska Native Village Corporations. Congress could also consider authorizing the bundling of funds from various agencies to respond to flooding and erosion in Alaska Native villages. Under this alternative, Alaska Native villages could consolidate and integrate funding from flooding and erosion programs from various federal agencies, such as the Bureau of Indian Affairs and the Department of Housing and Urban Development, to conduct an erosion study or to help fund the local cost share of a Corps project. Doing so would potentially allow Alaska Native villages to use available federal assistance for flooding and erosion more effectively and efficiently. By law, Indian tribal governments are currently allowed to integrate their federally funded employment, training, and related services programs from various agencies into a single, coordinated, comprehensive program that reduces administrative costs by consolidating administrative functions. Many Alaska Native villages participate in this program. Several bills have been introduced to authorize tribal governments also to bundle federal funding for economic development programs and for alcohol and substance abuse programs. For example, in the 106th, 107th, and 108th sessions of Congress, bills were introduced to authorize the integration and coordination of federal funding for community, business, and economic development of Native American communities. Under these bills, tribal governments or their agencies may identify federal assistance programs to be integrated for the purpose of supporting economic development projects. Similarly, in the 107th and 108th Congresses, S. 210 and S. 
285 were introduced to authorize, respectively, the integration and consolidation of alcohol and substance abuse programs and services provided by tribal governments. Alaska Native villages that are not making plans to relocate, but are severely affected by flooding and erosion, must find ways to respond to these problems. However, many of these villages have difficulty finding assistance under several federal programs, largely because the economic costs of the proposed project to control flooding and erosion exceed the expected economic benefits. As a result, many private homes and other infrastructure remain in danger from flooding and erosion. In addition, many Alaska Native villages that are small and remote and depend on a subsistence lifestyle lack the resources to respond to flooding and erosion. Given the unique circumstances of Alaska Native villages, special measures may be required to ensure that these communities receive assistance in responding to flooding and erosion. Alaska Native villages that cannot be protected from flooding and erosion through engineering structures and must relocate face a particularly daunting challenge. These villages are working with federal and state agencies to find ways to address this challenge. Any potential solution, however, whether a single erosion protection project or full relocation, goes through stages of planning and execution that can take years to complete. In the interim, investment decisions must be made regarding delivery of services such as building new structures or renovating and upgrading existing structures. Such decisions for villages should be made in light of the status of their efforts to address flooding and erosion. We identified a number of instances where projects were approved and designed without considering a village’s relocation plans. 
Investing in infrastructure that cannot be easily moved or may be costly to move may not be the best use of limited federal funds. It is encouraging that the Denali Commission is working on a policy to ensure that investments are made in a conscientious and sustainable manner for villages threatened by flooding and erosion. Successful implementation of such a policy will depend in part on its adoption by individual federal agencies that also fund infrastructure development in Alaska Native villages. In order to ensure that federal funds are expended in the most effective and efficient manner possible, we recommend that the federal cochairperson of the Denali Commission, in conjunction with the state of Alaska cochairperson, adopt a policy to guide future investment decisions and project designs in Alaska Native villages affected by flooding and erosion. The policy should ensure that (1) the Commission is aware of villages’ efforts to address flooding and erosion and (2) projects are designed appropriately in light of a village’s plans to address its flooding and erosion problems. Determining the appropriate level of service for Alaska Native villages is a policy decision that rests with Congress. We present four alternatives that Congress may wish to consider as it deliberates over how, and to what extent, federal programs could better respond to flooding and erosion in Alaska Native villages. In any such decision, two factors that would be important to consider are the cost and the national policy implications of implementing any alternative or combination of alternatives. If Congress would like to provide additional federal assistance to Alaska Native villages, it may wish to consider directing relevant executive agencies and the Denali Commission to assess the cost and policy implications of implementing the alternatives that we have identified or others that may be appropriate. 
We provided copies of our draft report to the Departments of Agriculture, Defense, Health and Human Services, Housing and Urban Development, the Interior, and Transportation; the Denali Commission; and the state of Alaska. The Departments of Defense, Housing and Urban Development, and the Interior, as well as the Denali Commission and the state of Alaska, provided official written comments. (See appendixes IV through VIII, respectively, for the full text of the comments received from these agencies and our responses.) The comments were generally technical in nature with few comments on the report’s overall findings, recommendation, and alternatives. The Departments of Health and Human Services and Transportation provided informal technical comments, and the Department of Agriculture had no comments on the report. We made changes to the draft report, where appropriate, based on the technical comments provided by the seven entities that commented on the draft report. The Denali Commission was the only entity to comment on our recommendation that the commission adopt an investment policy. The commission agreed with the recommendation and noted that such a policy should help avoid flawed decision making in the future. Furthermore, the commission commented that it was not sufficient for it alone to have an investment policy, but believed that all funding agencies should use a similar policy to guide investments. We acknowledge the commission’s concerns that other funding agencies should also make sound investment decisions. As noted in our report, the Denali Commission has signed a memorandum of understanding with 31 federal and state agencies with the goal of systematic planning and coordination for investments in infrastructure, economic development, and training, and we believe that this memorandum could serve as a vehicle by which other federal agencies would follow the lead of the commission regarding decisions to invest in communities. 
Of the four alternatives presented in the report, the alternative to funnel funding for flooding and erosion projects through the Denali Commission received the most comments. The Denali Commission, the U.S. Army (commenting on behalf of the Department of Defense), and the Department of Housing and Urban Development all raised some concerns about this alternative. The Denali Commission commented that it is not convinced that expanding its role to include responsibilities for managing a flooding and erosion program is the appropriate response. The Army commented that the alternative to expand the role of the Denali Commission to manage a flooding and erosion program might exceed the capabilities of the organization. Lastly, the Department of Housing and Urban Development commented that the Denali Commission, as an independent agency, does not have the capacity to be fully integrated with the efforts of federal agencies to address this issue. Moreover, while each of these entities recognized the need for improved coordination of federal efforts to address flooding and erosion in Alaska Native villages, none of them provided any specific suggestions on how or by whom this should be accomplished. As discussed in our report, the Denali Commission currently does not have the authority to manage a flooding and erosion program, and should Congress choose this alternative, the commission would need to develop such a program. Consequently, we continue to believe that expanding the role of the commission is a possible option for helping to mitigate the barriers that villages face in obtaining federal services. We are sending copies of this report to the Secretaries of Agriculture, the Army, Health and Human Services, Housing and Urban Development, the Interior, and Transportation, as well as to the federal and state cochairs of the Denali Commission, the Governor of the state of Alaska, appropriate congressional committees, and other interested Members of Congress. 
We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-3841. Key contributors to this report are listed in appendix IX. The fiscal year 2003 Conference Report for the military construction appropriation bill directed GAO to study Alaska Native villages affected by flooding and erosion. In response to this direction and subsequent discussions with committee staff, we (1) determined the extent to which Alaska Native villages are affected by flooding and erosion; (2) identified federal and Alaska state programs available to respond to flooding and erosion and assessed the extent to which federal assistance has been provided to Alaska Native villages; (3) determined the status of efforts, including cost estimates, to respond to flooding and erosion in the villages of Barrow, Bethel, Kaktovik, Kivalina, Koyukuk, Newtok, Point Hope, Shishmaref, and Unalakleet; and (4) identified alternatives that Congress may wish to consider when providing assistance for flooding and erosion of Alaska Native villages. In addition, during the course of our work we became concerned about the possible inefficient use of federal funds for building infrastructure in villages that were planning to relocate. As a result, we are including information regarding these concerns in this report. To determine which Alaska Native villages are affected by flooding and erosion, we reviewed Alaska and federal agency reports and databases that contained information on flooding and erosion. We interviewed officials from Alaska and federal agencies, such as the Alaska Division of Emergency Services, the Alaska Department of Community and Economic Development, the U.S. Army Corps of Engineers, and the U.S. 
Department of Agriculture’s Natural Resources Conservation Service, who are involved in addressing flooding and erosion problems. We also interviewed Alaska Native officials from the selected villages, as well as officials from Native village and regional corporations, such as Tikigaq, the Association of Village Council Presidents, and Kawerak. For the purposes of this report we defined an Alaska Native village as a village that (1) was deemed eligible as a Native village under the Alaska Native Claims Settlement Act and (2) has a corresponding Alaska Native entity that is recognized by the Department of the Interior’s Bureau of Indian Affairs. We identified federal flooding and erosion programs by searching the Catalog of Federal Domestic Assistance and by using other information. We reviewed applicable federal laws and regulations for these programs. We also reviewed program file records and interviewed federal program officials to determine the extent to which Alaska Native villages have been provided federal assistance. In addition, to determine the Alaska state programs that are available to villages for addressing flooding and erosion, we interviewed appropriate state officials from the Alaska Department of Transportation and Public Facilities, the Division of Emergency Services, and the Department of Community and Economic Development. We also discussed these programs and the assistance provided with selected village representatives. While the committee directed us to include six villages, we added three more—Koyukuk, Newtok, and Shishmaref—based on discussions with congressional staff and with federal and Alaska state officials familiar with flooding and erosion problems. To determine the status of efforts, including cost estimates, to address flooding and erosion at these nine selected villages, we reviewed federal and state databases and studies. We also reviewed analyses performed by the Corps and by other federal, state, and local agencies. 
We visited only four villages—Bethel, Kivalina, Newtok, and Shishmaref—due to the high cost of travel in Alaska. Three of these four villages were selected because they were in imminent danger; we visited Bethel because reaching Newtok required traveling through Bethel. We interviewed village representatives from each of the nine villages. We also interviewed state and federal officials involved in the efforts to address flooding and erosion for each of the nine villages. We identified and evaluated Corps studies that addressed these problems, with particular attention to cost estimates. We also assessed the nature and applicability of these cost studies. To determine what alternatives Congress may wish to consider in responding to flooding and erosion of Alaska Native villages, we interviewed local, state, and federal officials, officials from the Alaska Federation of Natives, and Kawerak representatives. During these interviews, we asked people to identify alternatives that they believed would address impediments to the delivery of flooding and erosion services. We also obtained and reviewed prior congressional bills that addressed Alaska Native issues. We conducted our review from February 2003 through October 2003 in accordance with generally accepted government auditing standards. Table 6 shows the list of the 13 regional corporations and the corresponding nonprofit arms. These nonprofit organizations provide social services to Alaska Native villages and also help Alaska Natives respond to problems, including those dealing with flooding and erosion.
Cheesh-Na Tribe (formerly the Native Village of Chistochina)
Agdaagux Tribe of King Cove
Native Village of Atka
Native Village of Belkofski
Native Village of False Pass
Native Village of Nelson Lagoon
Native Village of Nikolski
Pauloff Harbor Village
Saint George Island (see Pribilof Islands Aleut Communities of St. Paul and St. George Islands)
Saint Paul Island (see Pribilof Islands Aleut Communities of St. 
Paul and St. George Islands)
Qagan Tayagungin Tribe of Sand Point Village
Qawalangin Tribe of Unalaska
Native Village of Unga
Native Village of Barrow Inupiat Traditional Government (formerly Native Village of Barrow)
Kaktovik Village (aka Barter Island)
Native Village of Nuiqsut (aka Nooiksut)
Native Village of Point Hope
Native Village of Point Lay
Native Village of Brevig Mission
Chinik Eskimo Community (Golovin)
Native Village of Diomede (aka Inalik)
King Island Native Community
Native Village of Saint Michael
Native Village of White Mountain
Native Village of Chignik Lagoon
Village of Clark’s Point
Curyung Tribal Council (formerly Native Village of Dillingham)
New Koliganek Village Council (formerly Koliganek Village)
Native Village of Perryville
Native Village of Pilot Point
Native Village of Port Heiden
Portage Creek Village (aka Ohgsenakale)
Algaaciq Native Village (St. Mary's)
Asa'carsarmiut Tribe (formerly Native Village of Mountain Village)
Native Village of Chuathbaluk (Russian Mission, Kuskokwim)
Native Village of Goodnews Bay
Native Village of Hooper Bay
Iqurmuit Traditional Council (formerly Native Village of Russian Mission)
Native Village of Kwinhagak (aka Quinhagak)
Lime Village
Native Village of Marshall (aka Fortuna Ledge)
Nunakauyarmiut Tribe (formerly Native Village of Toksook Bay)
Orutsararmuit Native Village (aka Bethel)
Native Village of Pitka's Point
Native Village of Scammon Bay
Native Village of Chanega (aka Chenega)
Native Village of Eyak (Cordova)
Native Village of Nanwalek (aka English Bay)
Alatna Village
Birch Creek Tribe (formerly listed as Birch Creek Village)
Evansville Village (aka Bettles Field)
Native Village of Fort Yukon
Galena Village (aka Louden Village)
Organized Village of Grayling (aka Holikachuk)
Koniag
Village of Afognak
Native Village of Akhiok
Native Village of Larsen Bay
Chilkat Indian Village (Klukwan)
The Army commented on our alternative to expand the role of the Denali Commission, which is discussed in the Agency 
Comments and Our Evaluation section of this report. We also modified the report on the basis of the technical comments that the Army gave us, as appropriate. In addition, discussed below are GAO’s corresponding detailed responses to some of the Army’s comments. 1. We disagree with the Corps’ statement that the Flood Control Act of 1936 requires benefits to exceed costs for flood control projects. The pertinent provision of the act states that “it is the sense of Congress that . . . the Federal Government should improve or participate in the improvement of navigable waters or their tributaries . . . if the benefits . . . are in excess of the estimated costs.” 33 U.S.C. § 701a. This provision, while setting out a statement of Congressional policy, does not establish a legal requirement that benefits exceed costs, nor does it prohibit carrying out a project where costs exceed benefits. We have included a reference to this provision in the report’s discussion of the Corps’ guidelines for evaluating water resource projects. 2. We agree that it is not realistic for a village to go without a health clinic for 10 years. Our report states that development and maintenance of critical infrastructure, such as health clinics and runways, is necessary as villages find ways to address flooding and erosion. However, given limited federal funds, agencies must explore potentially less costly options for meeting a village’s needs until it is able to relocate. 3. As noted in our report, if Congress decides to provide additional federal assistance to Alaska Native villages, it may wish to consider directing relevant executive agencies as well as the Denali Commission to assess the cost and policy implications of implementing the alternatives. 4. The names for the Alaska Native entities used in appendix III of this report are from the official list of federally recognized Indian entities published by the Department of the Interior in the Federal Register (see 67 Fed. Reg. 
46328, July 12, 2002). The Denali Commission commented on our recommendation and the alternative to expand its role, both of which are discussed in the Agency Comments and Our Evaluation section of this report. In addition, discussed below are GAO’s corresponding detailed responses to some of the Denali Commission’s general comments. 1. We agree that the Corps can determine whether preventing or minimizing flooding and erosion is technically and financially feasible. Under the Tribal Partnership Program, authorized by section 203 of the Water Resources Development Act of 2000 (Pub. L. No. 106-541, 114 Stat. 2572, 2588-2589 (2000)), the Corps is currently examining impacts of coastal erosion due to continued climate change and other factors in the Alaska Native villages of Bethel, Dillingham, Shishmaref, Kaktovik, Kivalina, Unalakleet and Newtok. Congress provided $2 million for these activities in fiscal year 2003. However, other federal agencies, such as the NRCS, also have the ability to conduct feasibility analyses. 2. We acknowledge the commission’s desire for a larger role for Alaska state and local governments in developing and executing response strategies and in helping to prioritize the use of scarce resources. However, whether or not the state and local governments choose to expend their own resources to become more involved in responding to flooding and erosion issues is entirely a state or local government decision. Since this decision would involve the expenditure of state or local government funds, rather than federal funds, it is outside the scope of our report. The state of Alaska provided technical comments from the Division of Emergency Services and the Department of Community and Economic Development, which we incorporated as appropriate. In addition, discussed below are GAO’s corresponding detailed responses to some of the state’s comments. 1. 
The fiscal year 2003 Conference Report for the military construction appropriation bill directed GAO to study at least six Alaska Native villages affected by flooding and erosion—Barrow, Bethel, Kaktovik, Kivalina, Point Hope, and Unalakleet. We added three more—Koyukuk, Newtok, and Shishmaref—based on discussions with congressional staff and with federal and Alaska state officials familiar with flooding and erosion problems. As our report states, four of the nine villages—Kivalina, Koyukuk, Newtok, and Shishmaref—are in imminent danger from flooding and erosion. We agree that the remaining five villages may not be the most at risk from flooding and erosion. 2. It is not our intent to expand the role of the Denali Commission to include a disaster response and recovery component.

In addition to those named above, José Alfredo Gómez, Judith Williams, and Ned Woodward made key contributions to this report. Also contributing to the report were Mark Bondo, John Delicath, Chase Huntley, Marmar Nadji, Cynthia Norris, and Amy Webbink.

The General Accounting Office, the audit, evaluation, and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO's commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO's Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases.
You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as "Today's Reports," on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select "Subscribe to e-mail alerts" under the "Order GAO Products" heading.

Approximately 6,600 miles of Alaska's coastline and many of the low-lying areas along the state's rivers are subject to severe flooding and erosion. Most of Alaska's Native villages are located on the coast or on riverbanks. In addition to the many federal and Alaska state agencies that respond to flooding and erosion, Congress established the Denali Commission in 1998 to, among other things, provide economic development services and to meet infrastructure needs in rural Alaska communities. Congress directed GAO to study Alaska Native villages affected by flooding and erosion and to (1) determine the extent to which these villages are affected, (2) identify federal and state flooding and erosion programs, (3) determine the current status of efforts to respond to flooding and erosion in nine villages, and (4) identify alternatives that Congress may wish to consider when providing assistance for flooding and erosion. Flooding and erosion affect 184 out of 213, or 86 percent, of Alaska Native villages to some extent. While many of the problems are long-standing, various studies indicate that coastal villages are becoming more susceptible to flooding and erosion due in part to rising temperatures. The Corps of Engineers and the Natural Resources Conservation Service administer key programs for constructing flooding and erosion control projects.
However, small and remote Alaska Native villages often fail to qualify for assistance under these programs—largely because of agency requirements that the expected costs of the project not exceed its benefits. Even villages that do meet the cost/benefit criteria may still not receive assistance if they cannot meet the cost-share requirement for the project. Of the nine villages we were directed to review, four—Kivalina, Koyukuk, Newtok, and Shishmaref—are in imminent danger from flooding and erosion and are planning to relocate, while the remaining five are in various stages of responding to these problems. Costs for relocating are expected to be high. For example, the cost estimates for relocating Kivalina range from $100 million to over $400 million. Relocation is a daunting process that may take several years to accomplish. During that process, federal agencies must make wise investment decisions, yet GAO found instances where federal agencies invested in infrastructure at the villages' existing sites without knowledge of their plans to relocate. GAO, federal and state officials, and village representatives identified some alternatives that could increase service delivery for Alaska Native villages, although many important factors must first be considered: (1) expand the role of the Denali Commission; (2) direct federal agencies to consider social and environmental factors in their cost/benefit analyses; (3) waive the federal cost-sharing requirement for these projects; and (4) authorize the "bundling" of funds from various federal agencies.
CMS reported that total obligations for CMS contracts in fiscal year 2008 were $3.6 billion. This amount includes obligations against contracts that process Medicare claims as well as obligations to other contractors such as those that operate the 1-800 Medicare help line, provide program management and consulting services, and support information technology. The $3.6 billion obligated in 2008 represents a 71 percent increase since 1998, when $2.1 billion was obligated. Since 1998, obligations to fiscal intermediaries, carriers, and Medicare Administrative Contractors (contractors that primarily process Medicare claims) have decreased approximately 16 percent. Obligations for other-than-claims processing activities, such as the 1-800 help line, information technology and financial management initiatives, and program management and consulting services, have increased 466 percent, as shown in figure 1. These trends may be explained in part by recent changes to the Medicare program, including the movement of functions, such as the help line, data centers, and certain financial management activities, from the fiscal intermediaries and carriers to specialized contractors. These specialized contractors, such as beneficiary contact center contractors and enterprise data center contractors, are categorized below as other-than-claims processing contractors. MMA required CMS to transition its Medicare claims processing contracts, which generally did not follow the FAR, to the FAR environment through the award of contracts to Medicare Administrative Contractors. CMS projected that the transition, referred to as Medicare contracting reform, would produce administrative cost savings due to the effects of competition and contract consolidation as well as produce Medicare trust fund savings due to a reduction in the amount of improper benefit payments.
Additionally, the transition would subject millions of dollars of CMS acquisitions to the rules, standards, and requirements for the award, administration, and termination of government contracts in the FAR. Obligations to the new Medicare Administrative Contractors were first made in fiscal year 2007. CMS is required to complete Medicare contracting reform by 2011. As of September 1, 2009, 19 contracts have been awarded to Medicare Administrative Contractors, totaling about $1 billion in obligations to date. Except for certain Medicare claims processing contracts, CMS contracts are generally required to be awarded and administered in accordance with general government procurement laws and regulations such as the FAR; the Health and Human Services Acquisition Regulations (HHSAR); the Cost Accounting Standards (CAS); and the terms of the contract. At CMS, OAGM manages contracting activities and is responsible for, among other things, (1) developing policy and procedures for use by acquisition staff; (2) coordinating and conducting acquisition training; and (3) negotiating, awarding, administering, and terminating contracts. Multiple key players work together to monitor different aspects of contractor performance and execute pre-award and post-award contract oversight. All but one of the key roles described below are managed centrally in OAGM. The last, project officers, are assigned from CMS program offices. Contracting officers are responsible for ensuring performance of all necessary actions for effective contracting, overseeing contractor compliance with the terms of the contract, and safeguarding the interests of the government in its contractual relationships. The contracting officer is authorized to enter into, modify, and terminate contracts. According to OAGM's invoice review policy, contracting officers, with the assistance of the contract specialists, review contractor invoices for compliance with contract terms, among other things.
Contract specialists represent and assist the contracting officers with the contract, but are generally not authorized to commit or bind the government. The contract specialist assists with the invoice review process. The cost/price team serves as an in-house consultant to others involved in the contracting process at CMS. By request, the team, which consists of four contract auditors, provides support for contract administration including reviewing cost proposals, consultations about the allowability of costs billed on invoices, and assistance during contract closeout. Project officers serve as the contracting officer’s technical representative designated to monitor the contractor’s progress, including the surveillance and assessment of performance and compliance with project objectives. According to OAGM invoice review policy, project officers review certain invoice elements, such as labor and direct costs, and are required to certify whether the invoice is approved for payment by signing a Payments and Progress Certification Form. They may also conduct periodic analyses of contractor performance and cost data. CMS utilizes two different databases of acquisition information for a variety of internal and external reporting on its acquisition activities. The PRISM database contains basic information such as contract number, vendor name, and amount obligated. PRISM is used to develop contract documents and for internal reporting and acquisition planning. The Enhanced Departmental Contracts Information System (DCIS) is an HHS database that is used for department-level acquisition management and to satisfy external reporting requirements. DCIS collects and forwards information to the Federal Procurement Data System Next Generation (FPDS-NG), which is a publicly available database of governmentwide acquisition information. 
Likewise, FPDS-NG feeds into www.usaspending.gov, a Web site created in response to the Federal Funding Accountability and Transparency Act of 2006, which required a single searchable Web site, accessible by the public for free, that reports key information for each federal award. The contract life cycle includes many acquisition and administrative activities. Prior to award, an agency identifies a need; develops a requirements package; determines the method of contracting; solicits and evaluates bids or proposals; and ultimately awards a contract. After contract award, the agency performs contract administration and contract closeout. Contract administration involves monitoring the contractor’s performance as well as reviewing and approving (or disapproving) the contractor’s request for payment. Other tasks may include audits or reviews of the contractor’s costs and compliance with CAS. The contract closeout process involves verifying that the goods or services were provided and that administrative matters are completed, including a contract audit of costs billed to the government and adjusting for any over- or underpayments based on the final invoice. Agencies may choose among different contract types to acquire goods and services. This choice is the principal means that agencies have for allocating risk between the government and the contractor. Contract types can be grouped into three broad categories: fixed price contracts, cost reimbursement contracts, and time and materials (T&M) contracts. Although the FAR places limitations on the use of cost reimbursement and T&M contract types, these contract types may be used to provide the flexibility needed by the government to acquire the large variety and volume of supplies and services it needs. As discussed below, these three types of contracts place different levels of risk on the government and the contractor. Generally, the government manages its risk, in part, through oversight activities. 
For fixed price contracts, the government agrees to pay a set price for goods or services regardless of the actual cost to the contractor. A fixed price contract is ordinarily in the government’s interest when the risk involved is minimal or can be predicted with an acceptable degree of certainty and a sound basis for pricing exists, as the contractor assumes the risk for cost overruns. Under cost reimbursement contracts, the government agrees to pay those costs of the contractor that are allowable, reasonable, and allocable to the extent prescribed by the contract. The government assumes most of the cost risk because the contractor is only required to provide its best effort to meet contract objectives within the estimated cost. If this cannot be done, the government can provide additional funds to complete the effort, decide not to provide additional funds, or terminate the contract. Cost reimbursement contracts may be used only when the contractor’s accounting system is adequate for determining costs applicable to the contract and appropriate government surveillance during performance will provide reasonable assurance that efficient methods and effective cost controls are used. In order to determine if the contractor has efficient methods and effective cost controls, contracting officers and other contracting oversight personnel may perform a comprehensive review of contractor invoices to determine if the contractor is billing costs in accordance with the contract terms and applicable government regulations. In addition, the establishment of provisional and final indirect cost rates helps to ensure that the government makes payments for costs that are allowable, reasonable, and allocable to the extent prescribed by the contract. For T&M contracts, the government agrees to pay fixed per-hour labor rates and to reimburse other costs directly related to the contract, such as materials, equipment, or travel, based on cost. 
Like cost reimbursement contracts, the government assumes the cost risk because the contractor is only required to make a good faith effort to meet the government’s needs within a ceiling price. A T&M contract may be used only if the contracting officer prepares a determination and findings that no other contract type is suitable and if the contract includes a ceiling price that the contractor exceeds at its own risk. In addition, since these contracts provide no positive profit incentive for the contractor to control costs or use labor efficiently, the government must conduct appropriate surveillance of contractor performance to ensure efficient methods and effective cost controls are being used. The FAR defines cognizant federal agency (CFA) as the agency responsible for establishing forward pricing rates, final indirect cost rates (when not accomplished by a designated contract auditor), and administering cost accounting standards for all contracts in a business unit. Generally, the CFA is the agency with the largest dollar amount of negotiated contracts, including options, with the contractor. In addition, the CFA may be responsible for establishing provisional indirect cost rates (also known as “billing rates”) based on recent reviews, previous rate audits, experience, or similar reliable data to ensure that estimates are as close as possible to final indirect cost rates anticipated. The Standards for Internal Control in the Federal Government provide the overall framework for establishing and maintaining internal control and for identifying and addressing areas at greatest risk of fraud, waste, abuse, and mismanagement. These standards provide that—to be effective—an entity’s management should establish both a supportive overall control environment and specific control activities directed at carrying out its objectives. 
As such, an entity’s management should establish and maintain an environment that sets a positive and supportive attitude towards control and conscientious management. A positive control environment provides discipline and structure as well as a climate supportive of quality internal control, and includes an assessment of the risks the agency faces from both external and internal sources. Control activities are the policies, procedures, techniques, and mechanisms that enforce management’s directives and help ensure that actions are taken to address risks. The standards further provide that information should be recorded and communicated to management and oversight officials in a form and within a time frame that enables them to carry out their responsibilities. Finally, an entity should have internal control monitoring activities in place to assess the quality of performance over time and ensure that the findings of audits and other reviews are promptly resolved. Control activities include both preventive and detective controls. Preventive controls—such as invoice review prior to payment—are controls designed to prevent errors, improper payments, or waste, while detective controls—such as incurred cost audits—are designed to identify errors or improper payments after the payment is made. A sound system of internal control contains a balance of both preventive and detective controls that is appropriate for the agency’s operations. While detective controls are beneficial in that they identify funds that may have been inappropriately paid and should be returned to the government, preventive controls such as accounting system reviews and invoice reviews help to reduce the risk of improper payments or waste before they occur. A key concept introduced in our standards is that control activities selected for implementation should be cost-beneficial. Generally, it is more effective and efficient to prevent improper payments than to detect and recover them after they occur.
A control activity can be preventive, detective, or both based on when the control occurs in the contract life cycle. We found pervasive deficiencies in internal control over contracting and payments to contractors. The internal control deficiencies occurred throughout the contracting process, that is, both pre- and post-award, and increase the risk of improper payments or waste. These deficiencies were due in part to a lack of agency-specific policies and procedures to ensure that FAR requirements and other control objectives were met. CMS also did not take appropriate steps to ensure that existing policies were properly implemented, nor did it maintain adequate documentation in its contract files. Further, the Contract Review Board was not effective in ensuring proper contract award actions. These internal control deficiencies are a manifestation of CMS’s weak overall control environment, which is discussed later. As a result of our work, we estimate that at least 84.3 percent of FAR-based contract actions made by CMS in fiscal year 2008 contained at least one instance in which a key control was not adequately implemented. (See table 3 in app. I for a list of the 11 controls we tested, which ranged from ensuring contractors had adequate accounting systems prior to the use of a cost reimbursement contract to certifying invoices for payment.) Not only were internal control deficiencies widespread, but many contract actions also had more than one deficiency. We also estimate that at least 37.2 percent of FAR-based contract actions made in fiscal year 2008 had three or more instances in which a key control was not adequately implemented. The high percentage of deficiencies indicates a serious failure of control procedures over FAR-based acquisitions, thereby creating a heightened risk of improper payments or waste.
We determined a control to be “key” based on our review of the standards for internal control as well as the FAR, HHSAR, and agency policies and whether inadequate implementation would significantly increase the risk of improper payments or waste. We also took into consideration prior audit findings and the contract types CMS most frequently used. See appendix I for additional details on the controls we tested and the statistical sample results. We project the results of our statistical sample conservatively by reporting the lower bound of our two-sided, 95 percent confidence interval. The control deficiencies we found were primarily caused by a lack of agency-specific policies and procedures that would help ensure that applicable FAR requirements, agency policies, and other control objectives were met. CMS did not always meet FAR requirements for specific contract types that were awarded, nor maintain adequate support for approved provisional indirect cost rates, which are necessary to determine the reasonableness of indirect costs billed on invoices. Additionally, CMS did not timely perform or request audits of incurred direct and indirect costs, which provide assurance that costs billed by the contractor are allowable and reasonable under the terms of the contract and applicable government regulations. These control deficiencies are discussed in detail below and the results of the other control procedures we tested can be found in appendix I. We estimate that at least 46.0 percent of fiscal year 2008 CMS contract actions did not meet the FAR requirements applicable to the specific contract type awarded. Sixteen contract actions we tested had deficiencies—of which 1 related to a letter contract, 9 related to cost reimbursement contracts, and 6 related to T&M contracts. In the case of the letter contract, the contract file did not contain the authorization for use of the letter contract, which is required by HHSAR. 
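The sampling arithmetic behind the "at least" figures above can be illustrated with a short sketch. The report does not state the exact interval formula it used, so a plain normal approximation and hypothetical sample numbers are assumed here:

```python
import math

def ci_lower_bound(failures: int, sample_size: int, z: float = 1.96) -> float:
    """Lower bound of a two-sided 95 percent confidence interval for a
    proportion, using the normal approximation. Illustrative only: the
    report's actual method (e.g., an exact or finite-population-corrected
    interval) may differ."""
    p_hat = failures / sample_size
    margin = z * math.sqrt(p_hat * (1 - p_hat) / sample_size)
    return p_hat - margin

# Hypothetical numbers: if 90 of 100 sampled contract actions had a
# deficiency, the point estimate is 90 percent, but the conservative
# figure reported would be the lower bound.
print(round(ci_lower_bound(90, 100), 3))  # → 0.841
```

Reporting the lower bound rather than the point estimate is what makes statements such as "at least 84.3 percent" conservative: the true population rate is very unlikely to fall below the reported figure.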
In the case of cost reimbursement contracts, the FAR states that a cost reimbursement contract may be used only when the contractor’s accounting system is adequate for determining costs applicable to the contract. Of the contract awards in our sample, we found 9 cases in which cost reimbursement contracts were used without first ensuring that the contractor had an adequate accounting system. An adequate contractor accounting system is key to the government’s ability to perform the various contract oversight activities required by the FAR for cost reimbursement contracts. In particular, contracting officers and other members of the federal agency acquisition workforce rely on the contractors’ contract proposals, interim billings, provisional indirect cost rates, and reports of actual costs incurred (which are used to finalize the direct and indirect costs billed), all of which are generated from data maintained in the contractor’s accounting system. In addition to the 9 cases above, during our review of modifications we observed another 6 cases in which cost reimbursement contracts were used even though CMS was aware that the contractor’s accounting system was inadequate at the time of award. In one instance, the contracting officer was aware that a contractor had an inadequate accounting system resulting from numerous instances of noncompliance with CAS. Using a cost reimbursement contract when a contractor does not have an adequate accounting system hinders the government’s ability to fulfill its oversight duties throughout the contract life cycle. Additionally, it increases the risk of improper payments and the risk that costs billed cannot be substantiated during an audit. When choosing to use T&M contracts, the FAR requires contracting officers to prepare and sign a determination and finding that no other contract type is suitable for the acquisition.
The justification is required to set forth enough facts and circumstances to clearly and convincingly justify the specific determination made. We found that the determination and finding was either not documented or insufficient in six T&M contract awards we reviewed. In cases when the justification memorandum was prepared, contracting officers merely quoted language from the FAR but did not set forth clear and convincing findings—that is, the particular circumstances, facts, or reasoning essential to support the determination—for why other contract types could not be used. When the contracting officer does not clearly and convincingly document the findings that support using a T&M contract type, OAGM does not have assurance that the appropriate contract type was used. In addition, for three of the contract actions, the contract specialist told us that the actions the document listed to mitigate the risk of awarding a T&M contract were not performed. Because CMS did not carry out the stated mitigation strategies used to justify the selection of the T&M contract type, it increased its exposure to the risk of improper payments or waste. We estimate that for at least 40.4 percent of fiscal year 2008 contract actions, CMS did not have sufficient support for provisional indirect cost rates nor did it identify instances when a contractor billed rates higher than the rates that were approved for use. Specifically, for 17 contract actions that utilized indirect cost rates, CMS did not have documentation supporting what would be the appropriate provisional indirect cost rates for the contractor. For an additional 19 contract actions, the provisional rates either did not match the indirect rates billed on the invoices or could not be matched because the invoice did not provide sufficient detail. 
The FAR states that provisional indirect cost rates shall be used in reimbursing indirect costs such as fringe benefits or overhead costs under cost reimbursement contracts and are used to prevent substantial overpayment or underpayment of indirect costs. These rates are generally established by the CFA, contracting officer, or auditor on the basis of reliable data or previous rate audits and should be set as close as possible to the anticipated final indirect cost rates. Provisional indirect cost rates provide agencies with a mechanism by which to determine if the indirect costs billed on invoices are reasonable for the services provided until such time that final indirect cost rates can be established, generally at the end of the contractor’s fiscal year. Approval of provisional indirect cost rates is important given the fact that indirect costs can be more than 50 percent of the total invoice amount. When the agency does not maintain adequate support for provisional indirect rates, it increases its risk of making improper payments. We estimate that for at least 52.6 percent of fiscal year 2008 contract actions, CMS did not have support for final indirect cost rates. Specifically, 23 contract actions we tested did not have documentation of final indirect cost rates or support for the prompt request of an audit of indirect costs. The FAR states that final indirect cost rates, which are based on the actual indirect costs incurred during a given fiscal year, shall be used in reimbursing indirect costs under cost reimbursement contracts. The amounts a contractor billed using provisional indirect cost rates are adjusted annually for final indirect cost rates providing a mechanism for the government to timely ensure that indirect costs are allowable and allocable to the contract. Final indirect cost rates are generally negotiated by the government’s negotiating team that includes the CFA following an audit of a statement of incurred costs submitted by the contractor. 
CMS officials told us that they generally adjust for final indirect cost rates during contract closeout at the end of the contract performance rather than annually mainly due to the cost and effort the adjustment takes. Moreover, since final indirect cost rates are established by the CFA, when CMS is not the CFA, CMS must wait on the CFA to perform the necessary audit work required to establish the final indirect cost rates. Not annually adjusting for final indirect cost rates increases the risk that CMS is paying for costs that are not allowable or allocable to the contract. Furthermore, putting off the control activity until the end of contract performance increases the risk of overpaying for indirect costs during contract performance and may make identification or recovery of any unallowable costs during contract closeout more difficult due to the passage of time. We estimate that for at least 54.9 percent of fiscal year 2008 contract actions, CMS did not promptly perform or request an audit of direct costs. We found that 25 contract actions for which this control applied did not have an audit of direct costs promptly performed or requested. Similar to the audit of indirect costs, audits of direct costs allow the government to verify that the costs billed by the contractor were allowable, reasonable, and allocable to the contract. The audit of direct costs is the responsibility of the contracting officer; however, the contracting officer may request, for a fee, that the CFA for the contractor perform the audit work. The FAR does not provide time frames or other requirements for when the audit of direct costs should be performed except that such an audit may be necessary for closing out the contract at the end of contract performance. Not annually auditing direct costs increases the risk that CMS is paying for costs that are not allowable or allocable to the contract. 
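The provisional-to-final rate adjustment described above is simple arithmetic; a small worked example may make the overpayment risk concrete. This is an illustrative sketch only—the direct costs and rates are hypothetical, and rates are expressed as whole percentages so the integer math is exact:

```python
# Illustrative sketch of the provisional-to-final indirect cost rate
# adjustment described above. All figures are hypothetical.

def indirect_cost_adjustment(direct_costs, provisional_pct, final_pct):
    """Return (billed, actual, adjustment) for one contractor fiscal year.

    Rates are whole percentages. A positive adjustment means the
    government overpaid at the provisional rate and is owed a credit;
    a negative adjustment means it underpaid.
    """
    billed = direct_costs * provisional_pct // 100   # reimbursed during the year
    actual = direct_costs * final_pct // 100         # allowable after rate audit
    return billed, actual, billed - actual

# Hypothetical contractor year: $2,000,000 in direct costs, billed at a
# 55 percent provisional overhead rate, with an audited final rate of 50 percent.
billed, actual, adjustment = indirect_cost_adjustment(2_000_000, 55, 50)
print(billed, actual, adjustment)  # 1100000 1000000 100000
```

In this hypothetical, a 5-point gap between the provisional and final rates leaves $100,000 recoverable for a single contractor year—which is why deferring the annual adjustment to contract closeout, as CMS does, compounds the exposure.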
CMS had policies for invoice certification and purchase card oversight; however, these policies were not consistently followed. The failure to follow established agency policy increases CMS’s risk of improperly paying contractor invoices or purchase card transactions. We estimate that for at least 59.0 percent of fiscal year 2008 contract actions, the project officer did not certify all invoices. CMS’s Acquisition Policy Notice 16-01 requires the project officer to review each contractor invoice and recommend payment approval or disapproval to the contracting officer. This review is to determine, among other things, if the expenditure rate is commensurate with technical progress and whether all direct cost elements are appropriate, including subcontracts, travel, and equipment. Based on his or her review, the project officer is then to approve the invoice for payment by signing a Payments and Progress Certification Form or disapprove it by issuing a suspension notice. In one case, although a contractor submitted over 100 invoices for fiscal year 2008, only 8 were certified by the project officer. The total value of the contract through January 2009 was about $64 million. After the project officer’s review, the contracting officer or specialist is also required to review invoices for critical elements such as compliance with the terms of the contract—including indirect cost rates—and mathematical accuracy. Based on a cursory review of the fiscal year 2008 invoices submitted for payment, we found instances in which the contracting officer or specialist did not identify items that were inconsistent with the terms of the contract. For example, facilities capital cost of money is generally disallowed by the HHSAR. However, we found two instances where the contractor billed, and CMS paid, this cost. Another contractor submitted invoices under its fixed price contract that were contrary to the payment schedule stipulated in the contract terms. 
The contract required the contractor to submit four invoices of equal amount every 3 months during the 1-year performance period. However, the contractor submitted one invoice for the entire amount of the contract. Moreover, the invoice was dated prior to the start date of the contract period of performance. CMS increases its risk of making improper payments when it does not properly review and approve invoices prior to payment. OAGM also did not perform required audits and reviews of CMS purchase cards to identify fraud or waste. These audits and reviews are particularly important because of the high authorized spending limits. As of July 15, 2009, OAGM’s purchase card program had issued 123 cards, with 20 percent having monthly spending limits of at least $50,000. Eight cardholders had monthly spending limits of $100,000, the highest spending limit authorized by CMS. Without sufficient oversight of the purchase card program, CMS does not have assurance that only allowable transactions are procured through purchase cards and that purchase cards are not being used to circumvent FAR competition requirements. The HHS purchase card policy guidance provides that the purchase card coordinator, which at CMS is within OAGM, is required to conduct surveillance of the purchase card program by annually auditing cardholder transactions using such methods as statistical and nonstatistical sampling, data mining, and spot checks; monitoring purchase card usage; and deactivating purchase cards when appropriate, among other things. The OAGM purchase card coordinator’s supervisor told us that OAGM did not perform the oversight activities because the supervisor viewed those activities as the responsibility of the Office of Financial Management (OFM). We spoke with an OFM official who stated that OFM does not review purchase card transactions for fraud or inappropriate use, but instead pays the purchase card invoice based on the authorizing official’s approval. 
During the tests of control procedures, we observed that the contract files did not always contain all required documentation to support the contract actions we reviewed. Standards for internal control call for transactions and other significant events to be clearly documented, and the documentation should be readily available for examination. In addition, the FAR provides that the documentation in the contract files shall be sufficient to constitute a complete history of the contract action for the purpose of providing a basis for informed decisions at each step in the acquisition process, and providing information for reviews and investigations, among other things. Clearly documenting the history of a contract action is an important tool that provides management with assurance that the agency has complied with applicable regulations and has made well-informed decisions for efficient contract management. Incomplete or inadequate contract files and documentation hinder the ability of contracting officers to perform their oversight duties, especially those who assume responsibility for contracts that have changed hands during the life of the contract. CMS contract files did not always contain the documentation necessary to support the action or to provide contracting officers with the tools needed to adequately perform their oversight functions. Specifically, we found that one contract file was missing a statement of work and another file was missing a copy of the actual contract. In addition, two contract files did not contain any information regarding the General Services Administration schedule contract that was valid at the time of the award of the task order. In numerous instances, we determined that the letter delegating duties to the project officer and the training certificate for the project officer—both of which are required by OAGM policies—were not in the file. 
Also, a chronological list of contracting officers and their dates of responsibility, which provides an important tool for establishing accountability for contract files over time, was consistently absent. Additionally, we found that CMS’s use of negotiation memorandums was inconsistent. The HHSAR provides that the negotiation memorandum is a complete record of all actions leading to the award of a contract and should be in sufficient detail to explain and support the rationale, judgments, and authorities upon which all actions were predicated, and should be signed by the contract negotiator. However, we found that negotiation memorandums were not always prepared for actions in which they were clearly required, and were prepared for actions in which they may not be required, according to the HHSAR. Moreover, while many negotiation memorandums we reviewed had signature blocks for both the contract specialist and the contracting officer (generally the preparer and reviewer, respectively), the memorandums were not always signed by the contracting officer. CMS’s OAGM established the Contract Review Board (CRB) reviews as a key control procedure to help ensure that contract award actions are in conformance with law, established policies and procedures, and sound business practices. However, our review of the CRB process found that the process had not been properly or fully implemented. For example, of the 22 contracts selected to be reviewed by the CRB in 2008, only 7 were actually reviewed. Similarly, for fiscal year 2009, 22 contracts were selected for the CRB, but only 2 had been reviewed as of the end of the third quarter. Also, the contracting officer for the contract action being reviewed is required neither to reach consensus with the CRB on the resolution of identified issues nor to document the justification for not resolving CRB issues. Moreover, CMS is not following its policies for selecting the contracts to be reviewed by the CRB. 
While OAGM’s policies require that all contracts above $10 million be subjected to the CRB, CMS confirmed that only contracts nominated by division directors are reviewed. If used correctly, the CRB can be an effective tool for risk-based quality assurance and for reviewing the internal controls throughout the contract award and administration process. However, because CMS policies do not require issues to be resolved and documented and because CMS is not fully implementing the CRB, opportunities to identify and fix deficiencies in the contract administration process and to improve internal controls may be missed. In addition to the deficiencies in contract-level control procedures as discussed previously, CMS’s FAR-based contract management was impaired by a weak control environment. CMS’s control environment is characterized by the lack of strategic planning to identify necessary staffing and funding, a lack of reliable data for effectively carrying out contract management responsibilities, very limited actions taken on the recommendations we made in 2007 related to contracting and payments to contractors, and a lack of procedures for managing contract audits which are essential to managing and overseeing the growing value of contracting activities. A positive control environment sets the tone for the overall quality of an entity’s internal control, and provides the foundation for an entity to effectively manage contracts and payments to contractors. Without a strong control environment, the control deficiencies we identified during this review will likely persist. OAGM management has not analyzed its contract management workforce and related funding needs through a comprehensive, strategic acquisition workforce plan. Such a plan is critical to help manage the increasing acquisition workload and meet its contracting oversight needs. 
We reported in November 2007 that staff resources allocated to contract oversight had not kept pace with the increase in CMS contract awards. A similar trend continued into 2008. While the obligated amount of contract awards has increased 71 percent since 1998, OAGM staffing resources—its number of full-time equivalents (FTE)—have increased 26 percent. This trend presents a major challenge to contract award and administration personnel who must deal with a significantly increased workload without additional support and resources. While CMS has data on its workforce changes since January 2007 (attrition and additions), documentation requesting additional FTEs for a specific project, and, in its fiscal year 2010 budget, a request to hire contract support staff to help meet contract and grant administration needs, CMS has not yet determined the total number of FTEs needed for the fiscal year and beyond. For example, the documentation did not contain an analysis of the workload anticipated for the year, such as the total number of new awards, the number of active contracts by contract type, the number of CMS contracts under HHS’s cognizance, or the number and type of audits needed. The documents did not contain information on CMS’s current FTE level, skill mix, or analysis of any skill gaps. Without this information, OAGM has limited insight into appropriate solutions, such as the use of contractor support staff. While the use of contractor support staff has in recent years become commonplace in the federal government, we have previously reported that using contractors for contract administrative functions may increase the risk of establishing unauthorized personal services contracts or the risk of contractors performing inherently governmental functions, both of which are prohibited by the FAR. 
According to its staff and management, OAGM is challenged to meet the various audit requirements necessary to ensure adequate oversight of contracts that pose more risk to the government, specifically cost reimbursement contracts, as well as to perform the activities required of a CFA. While officials told us they could use more audit funding, we found that OAGM management had yet to determine what an appropriate funding level should be. Without knowing for which contractors additional CFA oversight is needed, CMS does not know with certainty the number of audits and reviews that must be performed annually or the depth and complexity of those audits. Without this key information, CMS cannot estimate the level of audit funding that it needs. During interviews and our on-site review of contract files, OAGM senior management and contracting officers and specialists told us that the first activities contracting officers and specialists tend to neglect under resource constraints are post-award administration and contract closeout. Moreover, while OAGM management told us that staff worked hard to comply with its instructions to follow all applicable FAR requirements, CMS staff told us they take shortcuts due to resource constraints. For example, one contract specialist told us she prepared the Independent Government Cost Estimate based on the winning contractor’s proposed costs instead of conducting her own independent research to determine the government’s benchmark for the reasonableness of the costs of the scope of work. Additionally, as previously discussed, CMS officials told us that incurred cost audits are not performed annually primarily due to insufficient resources. A shortage of financial and human resources creates an environment that introduces vulnerabilities into the contracting process, hinders management’s ability to sustain an effective overall control environment, and ultimately increases risk in the contracting process. 
Although CMS has generally reliable information on basic attributes of each contract action, such as vendor name and obligation amount, CMS lacks reliable management information on other key aspects of its FAR-based contracting operations, including the number of certain contract types awarded, the extent of competition achieved, and total contract value. Standards for internal control provide that for an agency to manage its operations, it must have relevant, reliable, and timely information relating to the extent and nature of its operations, including both operational and financial data, that should be recorded and communicated to management and others within the agency who need it, in a form and within a time frame that enables them to carry out their internal control and operational responsibilities. The acquisition data errors are due in part to a lack of sufficient quality assurance activities over the data entered into the acquisition databases. Without accurate data, CMS program managers do not have adequate information to identify and monitor areas that pose a high risk of improper payments or waste. Moreover, inaccurate or incomplete data hinder CMS’s ability to mitigate through additional policies or enhanced oversight any high-risk areas, such as the frequent use of cost reimbursement contracts, that would be identified based on reports or analysis of the databases. The errors in DCIS, including the unrecorded actions, also impact governmentwide reporting. The Office of Management and Budget (OMB) requires agencies to submit their acquisition data to the Federal Procurement Data System-Next Generation (FPDS-NG). Since HHS submits DCIS data to FPDS-NG, which in turn feeds into OMB’s publicly available database at www.usaspending.gov, the DCIS errors noted above are provided to the public and limit the usefulness and transparency of this important tool. 
We estimate that for at least 34.9 percent of fiscal year 2008 contract actions, PRISM contained at least one error in the selected critical fields we reviewed. In particular, we found that PRISM contained 16 errors in a field we reviewed that designated the extent to which the contract was competed, for example, full and open competition or not competed as a result of being a logical follow-on to a previous contract. Additionally, we determined that the award type field in PRISM did not capture consistent information. For example, the field had prepopulated options associated with both award type (basic ordering agreement, delivery order, letter contract, etc.) and contract type (cost reimbursement, fixed price, and T&M). Combining these options into one data field prevents CMS from determining the total number of each award type and each contract type, making it difficult to accurately determine CMS’s contracting trends. OAGM officials told us that the data entered into PRISM are not subjected to a secondary review in which the data entered are compared to the information in the contract file. We estimate that for at least 54.2 percent of fiscal year 2008 contract actions, DCIS contained at least one error in the selected critical fields we reviewed. DCIS contained errors in current contract value and ultimate contract value fields, as well as the extent of competition, contract type, and award type fields. Further, 11 sample items, or approximately 10 percent of the sample, were not in DCIS. Our high-level data analysis on the population of fiscal year 2008 contract actions identified that certain required fields, such as contract type and competition, contained blank responses and “nulls”. We also noted obvious errors. For example, CMS entered codes for “potato farming” and “tortilla manufacturing” in the industry code field for two contract actions. 
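Completeness checks of the kind implied by the blank and “null” entries above are straightforward to automate as part of a quality assurance review. The sketch below flags records whose required fields are missing, blank, or literally “null”; the field names and sample records are hypothetical, not the actual PRISM or DCIS schema:

```python
# Minimal sketch of a required-field completeness check, in the spirit
# of the data quality review described above. Field names and sample
# records are hypothetical, not the actual DCIS schema.

REQUIRED_FIELDS = ["contract_type", "extent_competed", "award_type"]

def missing_required(record):
    """Return the required fields that are absent, blank, or 'null'."""
    flagged = []
    for field in REQUIRED_FIELDS:
        value = record.get(field)
        if value is None or str(value).strip().lower() in ("", "null"):
            flagged.append(field)
    return flagged

records = [
    {"contract_type": "cost reimbursement", "extent_competed": "full and open",
     "award_type": "delivery order"},
    {"contract_type": "null", "extent_competed": "",
     "award_type": "letter contract"},
]

# Map each problem record's index to its incomplete fields.
errors = {i: missing_required(r) for i, r in enumerate(records) if missing_required(r)}
print(errors)  # {1: ['contract_type', 'extent_competed']}
```

A check like this catches only structural gaps; obvious substantive errors, such as implausible industry codes, would need a separate validation against a list of codes applicable to the agency's work.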
Prior to calendar year 2008, CMS did not have quality assurance activities, such as formal data entry reviews or database training, over the data contained in the DCIS database. In December 2007, OAGM established a Verification and Validation Plan for DCIS Accuracy Improvements (V&V). The V&V plan contained several actions, including a secondary review of data entered into DCIS for every 50th contract action. The V&V plan lacks key elements and controls to ensure that the resolution of potential errors is properly documented and errors are corrected in a timely manner. For example, OAGM officials could not determine if all errors identified during the file reviews were properly resolved and the appropriate adjustments to DCIS were made. Additionally, while staff training was provided in January 2008, the DCIS data entry instructions were later modified with new information. In one instance, we noted that the DCIS preparer and the reviewer were using different versions of the instructions resulting in confusion over what would be the appropriate DCIS entry. OAGM officials provided us with the results for the V&V plan for 2008, which showed that 23 of the total 2,031 contract actions entered into DCIS in 2008 were reviewed for accuracy, which is approximately every 88th action. As of July 22, 2009, CMS management had not taken substantial actions to address our prior recommendations to improve internal control in the contracting process. Only two of GAO’s nine 2007 recommendations had been fully addressed. Table 1 summarizes, and appendix II provides additional detail on, our assessment of the status of CMS’s actions to address our recommendations. The seven substantially unresolved recommendations represent a lack of action on the part of CMS management to resolve key control deficiencies. Policies and criteria for pre-award contract activities have not been developed. 
In our 2007 report, we recommended that CMS develop policies for certain pre-award contract activities, such as analysis to justify the contract type selected and verification of the adequacy of the contractor’s accounting system prior to the award of a cost reimbursement contract. However, no new policies or guidance were developed, because in CMS’s view, policies and criteria are already established in the FAR and HHSAR. While the FAR provides requirements for federal acquisitions, it is up to the agencies to develop and provide their contracting workforce with specific policies and day-to-day procedures that guide them in implementing those requirements and to tailor the policies to address the specific operational environment. Agency-specific policy may include guidance on applicable approval levels, time frames, agency forms, and routing processes. Also, while the HHSAR provides additional guidance and policies specific to HHS, the HHSAR does not specifically address all of the pre-award contract activities that we identified as needing improvement, nor does it delineate the roles and responsibilities of the different staff involved in the contracting process or establish time frames for when certain pre-award contract activities should be performed. The deficiencies identified in this report, especially those associated with FAR requirements unique to specific contract types, further highlight the need for additional guidance for contracting officers. Roles and responsibilities for implementation of CFA responsibilities not clearly defined. The FAR requires that CFAs perform certain oversight and monitoring activities. The CFA concept provides an efficient way for contractors to receive a streamlined set of audits and reviews, thereby enabling them to receive and perform government contracts. 
In our 2007 report, we found that CFA responsibilities were inadequately fulfilled and recommended that CMS develop policies and procedures to ensure that CFA responsibilities were performed. In a recommendation resolution report, HHS stated that policies and procedures were needed at both the department level and at CMS. As of July 2009, neither HHS nor CMS had developed such policies and procedures or a mechanism to track the CMS contractors for which additional oversight is needed. Moreover, roles and responsibilities for the performance of CFA duties were not clear among HHS and its components, including CMS. During an interview with CMS, HHS, NIH, and HHS Office of Inspector General (OIG) officials, HHS officials stated that CFA responsibilities lie at the HHS department level. However, HHS officials also said that certain CFA responsibilities are delegated to HHS components and to contracting officers. Specifically, NIH was assigned responsibility to establish indirect rates for the contractors under HHS’s cognizance, but contracting officers within HHS components are responsible for other CFA duties. However, during the meeting, the officials could not clearly explain how the performance of these duties was monitored to ensure that CFA oversight takes place. HHS officials said that they did not have a process to identify the contractors, including CMS contractors, for which HHS would be the CFA. Without a list that is periodically updated for the contractors’ portfolio of federal government contracting activity, HHS and its components do not know the contractors for which CFA oversight is needed. NIH officials acknowledged their centralized role in determining indirect rates, but noted that NIH did not have the resources necessary to determine the indirect rates for the contractors under HHS’s cognizance. 
CMS officials told us that when NIH cannot perform the reviews within the time frames needed to make timely contract awards, CMS’s cost/price team establishes the rates. The confusion over roles and responsibilities increases the risk that CFA responsibilities are not being performed in a timely manner, if at all. Without effective coordination, contractors may not receive the necessary oversight and the government may not be positioned to protect itself from the risk of improper payments or waste. The risks of not performing CFA duties are exacerbated by the fact that other federal agencies that use the same contractors rely on the oversight and monitoring work of the CFA. CMS policies did not provide guidance on what constitutes sufficient detail to support amounts billed on contractor invoices to facilitate the review process. Despite our prior recommendation, CMS had not prepared guidelines or revised its invoice review policy to specify or provide examples of the detail that would be needed to support contractor invoices and facilitate an adequate review. In fact, most of the invoices we reviewed were not sufficiently supported. We identified invoices missing payroll detail, travel receipts, and subcontractor invoices, all of which are necessary to provide the reviewers adequate information to confirm whether the amounts billed were compliant with the terms of the contract or otherwise allowable and allocable to that contract. In one instance, invoices reported labor costs based on labor categories but did not show hours worked by employees or their respective labor rates. In another example, a contractor submitted an invoice in 2008 for services that were provided in 2003. The contractor did not provide supporting documentation for the $36,944 billed. Neither the invoice paid in 2008 nor the related file included evidence that the charge was investigated or further evaluated by either the project officer or the contracting specialist. 
While different levels of review may be required based on the complexity of individual invoices and the associated contract type, inadequately reviewing invoices increases the risk of improper payments. CMS has not set criteria for the use of negative certification. We recommended in our 2007 report that CMS establish criteria for the use of negative certification in the payment of contractor invoices, criteria that would consider potential risk factors. CMS uses negative certification—a process whereby it pays contractor invoices without knowing whether they were reviewed and approved—in order to ensure invoices are paid in a timely fashion. This approach, however, significantly reduces the incentive for contracting officers, specialists, and project officers to review the invoice prior to payment. Reviewing invoices prior to payment is a preventive control that may result in the identification of unallowable billings, especially on cost reimbursement and T&M invoices, before the invoices are paid. In light of the importance of this preventive control, we recommended that CMS establish criteria for when to use negative certification; such criteria may be based on considerations of potential risk factors such as contract type, the adequacy of the contractor’s accounting system, and prior history with the contractor. We found, however, that OAGM’s invoice review policy was not revised to address this recommendation, and OAGM officials confirmed that negative certification is still the primary method for paying invoices regardless of risk. Training on invoice review procedures still needed. As discussed earlier, project officers did not always certify invoices for approval, and contracting officers or specialists did not always identify instances where invoices did not comply with contract terms and conditions. We also found that invoices were not always maintained in the file, as required by CMS’s invoice review policy. 
In light of these continuing deficiencies, and the need for further revisions to its invoice review policy described above, further training on invoice review procedures will be necessary. Continuing backlog of contracts overdue for closeout. In 2007, we reported that CMS did not timely perform contract closeout procedures resulting in a backlog of 1,300 contracts, of which 407 were overdue for closeout as of September 30, 2007. We recommended that CMS develop a plan to reduce the number of contracts in the backlog. CMS did not provide us a closeout plan for fiscal year 2008 and the fiscal year 2009 plan was insufficient. Specifically, the plan did not include a comprehensive strategy to reduce the backlog of contracts that are eligible and overdue for closeout nor did it contain a workload analysis, such as a list of contracts eligible for closeout by contracting officer or specialist or an estimate of the number of hours or audit funds it would need to close the contracts. The FAR establishes time standards for closing out a contract after the work is physically completed (i.e., goods or services are provided). The contract closeout process is an important internal control, in part, because it is generally the last opportunity for the government to detect and recover any improper payments. The complexity and length of the closeout process can vary with the extent of oversight performed by the agency during the period of performance and the contract type. CMS officials told us that during fiscal year 2008, OAGM closed 581 contracts and reduced the overdue backlog to 400 contracts (from the 407 reported at the end of fiscal year 2007). Yet OAGM officials could not provide support for these closures or a list of the contracts overdue for closeout. Additionally, CMS officials stated that as of July 29, 2009, the total backlog of contracts eligible for closeout was 1,611, with 594 overdue based on FAR timing standards. 
This is a substantial increase over the balances at the end of fiscal year 2007. Moreover, the total contract value of contracts eligible for closeout has increased from $3 billion to at least $3.8 billion. Insufficient progress has been made to reduce the backlog of contracts eligible for closeout. The closeout process is particularly important for cost reimbursement contracts because a contractor is allowed to bill costs it incurred to provide the good or service. During the closeout process, the government audits these billed costs to determine if they were allowable and allocable to the contract, and processes the final invoice with an adjustment for any over- or underpayments. The failure to perform contract closeouts in a timely manner puts CMS at increased risk of improper payments or waste, and may make identification and recovery of any such improper payments more difficult due to the passage of time. CMS has not taken sufficient actions to investigate and recover questionable payments. CMS described several actions it has taken to investigate payments made to 3 of the 12 contractors for which we identified questionable payments. The actions CMS has taken to date are insufficient to fully resolve the issues identified and more remains to be done to recover funds that may have been inappropriately paid to contractors. For example, CMS highlighted $67 million in questionable payments that were related to one specific contractor and stated that these questionable payments are being investigated via a fiscal year 2008 incurred cost audit. However, the $67 million related to costs incurred in fiscal years 2004, 2005, and 2006 and therefore would not be covered or investigated in an audit of fiscal year 2008 incurred costs. Additionally, CMS said it had resolved the questionable payments made to another contractor; however, CMS’s actions did not relate to the $1.4 million in payments CMS made in fiscal year 2006 that we questioned. 
Regarding a third contractor, CMS issued a demand letter in April 2007 to recover funds the contractor billed and CMS paid in excess of contract ceiling limits; however, no resolution has yet been reached. CMS could not tell us whether it had recovered any of the questioned amounts. CMS should resolve questionable payments of the magnitude we identified in the prior report ($88.8 million) expeditiously. As a steward of taxpayer dollars, CMS is accountable for how it spends and safeguards funds, as well as for having mechanisms in place to recoup those funds when improper payments are identified. CMS relies on incurred cost audits that are conducted at the end of contract performance, when the contract is closed, to validate the overall propriety of payments. As discussed earlier, incurred cost audits are best conducted annually, rather than at the end of contract performance. CMS's backlog of contracts eligible for closeout delays investigations and makes recovery more difficult. CMS does not track, investigate, and resolve contract audit and evaluation findings for purposes of cost recovery and future award decisions. Tracking audit and evaluation findings strengthens the control environment, in part because it can help assure management that the agency's objectives are being met through the efficient and effective use of the agency's resources. It can also help management determine whether the entity is complying with applicable acquisition laws and regulations. Contract audits and evaluations can add significant value to an organization's oversight and accountability structure, but only if management ensures that the results of these audits and evaluations are promptly investigated and resolved.
During our review of the contract files, we noted that audits and evaluations that CMS requested from organizations such as DCAA, or that the CMS cost/price team performed, identified questionable payments, accounting system deficiencies, and other significant weaknesses or deficiencies associated with certain CMS contractors. However, we could not consistently determine how the contracting officer or other OAGM staff followed up on the results of these audits, and we noted that CMS was not always taking the results of these audits and evaluations into consideration when making decisions on future contract awards. For example, in an audit report dated September 30, 2008, DCAA questioned approximately $2.1 million of costs that CMS paid to a contractor in fiscal year 2006. OAGM management confirmed that no action has been taken to investigate and recover the challenged costs. In another instance, the contracting officer—based on the results of a cost/price team evaluation of a contractor's technical capability and negative results of DCAA audits—deemed the contractor "risky" during the pre-award contract proposal evaluation process. Nevertheless, the contracting officer awarded the cost reimbursement contract to this "risky" contractor. We found no evidence of any plans or procedures that would mitigate the identified risks. CMS has not established a formal procedure or system for tracking and pursuing the results of contract or contractor audits and has not provided its contracting officers guidance or procedures on when to request the assistance of internal and external audit and evaluation services. For example, OAGM did not provide direction on when (at what stage in the contract life cycle and under what circumstances) the contracting officer should use the services of the cost/price team or other contract auditors.
By not acting on audit results in a timely manner or fully incorporating knowledge gained from cost/price evaluations or other audits into award decisions, CMS is forgoing the potential benefits of those audits and evaluations. A well-established tenet of improper payment recovery is that it becomes increasingly difficult with the passage of time. Careful and prompt consideration of audit results, including tracking and pursuing findings, helps to reduce the risk of improper payments, waste, and other-than-the-best award decisions. The contract-level and overall control environment weaknesses we found significantly increase CMS's vulnerability to improper or wasteful contract payments. To address these deficiencies, CMS will need to develop and implement CMS-specific policies and procedures to ensure that contract actions are properly administered and comply with applicable requirements. CMS also needs to strengthen its overall contract management control environment, including developing strategic workforce plans, establishing appropriate contract management oversight procedures, and maintaining reliable management information. In addition, CMS management has made limited progress in substantively addressing most of the broad-based recommendations from our 2007 report. Many of the deficiencies we found in this review could be attributed, at least in part, to the limited attention CMS management has given to resolving known control deficiencies.
Consequently, we are reiterating our previous recommendations to (1) develop policies for pre-award contracting activities, (2) develop policies to help ensure CFA responsibilities are performed, (3) prepare guidelines on what constitutes sufficient detail to support contractor invoices, (4) establish criteria for the use of negative certification, (5) provide training on revised invoice review policies, (6) develop a plan to reduce the backlog of contracts eligible for closeout, and (7) review the questionable payments identified in the prior report to determine if payments are recoverable. The continuing weaknesses in contracting activities and limited progress in addressing known deficiencies raise questions about whether CMS management has established an appropriate "tone at the top" regarding contracting activities. Until CMS management addresses our previous recommendations in this area, along with taking action to address the additional deficiencies identified in this report, its contracting activities will continue to pose significant risk of improper payments, waste, and mismanagement. Further, the deficiencies we identified are likely to be exacerbated by the rise in obligations for non-claims-processing contract awards as well as CMS's extensive reliance on contractors to help achieve its mission objectives. It is imperative that CMS take immediate action to address its serious contract-level control deficiencies and act on our previous recommendations to improve contract-level and overall environment controls, or CMS will continue to place billions of taxpayer dollars at risk of fraudulent or otherwise improper contract payments.

We make the following nine recommendations to the Administrator of CMS to develop and implement policies and procedures to ensure that FAR requirements and other control objectives are met. Policies and procedures should:

- Document compliance with FAR requirements for different contract types. At a minimum, enhance current documentation, such as the contract checklist, to ensure the contract file documents authorizations for letter contracts, adequacy of the contractors' accounting systems, and determination and findings for time and materials contracts, when applicable.
- Document in the contract file the provisional indirect cost rates used as a basis for reviewing the reasonableness of the indirect costs billed on contractor invoices.
- Specify what constitutes timely performance of (or request for) audits of contractors' statements of incurred cost for cost reimbursement and T&M contracts, including circumstances when OAGM should perform the audit itself or request another organization to perform the service.
- Specify the circumstances under which negotiation memorandums should be used, their content, and any required secondary reviews, in light of HHSAR requirements and current OAGM practice.
- Specify Contract Review Board (CRB) review documentation to include, at a minimum, documentation of the number of contracts reviewed each year, the issues identified by the CRB reviewer(s), and the resolution of issues identified during the CRB reviews.
- Require Division Directors to periodically assess, document, and report to senior management on the results of their review of whether the contract files contain documentation that invoices were properly reviewed by both the project officer and the contracting officer or specialist.

To strengthen the control environment, we recommend that OAGM management:

- Develop and implement a comprehensive strategic acquisition workforce plan. The plan should include, at a minimum, elements such as performance goals, time frames, implementation actions, and resource requirements, and address issues such as OAGM workload, full-time equivalents needed, and a workforce skills analysis, as well as an estimate of the amount of resources OAGM needs to fulfill the audit and other FAR requirements for comprehensive oversight, including those required of a CFA.
- Revise the Verification and Validation Plan for DCIS Accuracy and Improvements policy to require that all relevant errors be corrected and their resolution documented.
- Develop and implement policies and procedures for tracking contract audit requests, monitoring the results of contract audits and evaluations, and resolving audit findings, including the roles and responsibilities of the contracting officer, specialist, and members of the cost/price team.

We make the following recommendation to the Secretary of HHS to improve the department's fulfillment of CFA duties as described in the FAR:

- Develop policies and procedures that clearly assign roles and responsibilities for the timely fulfillment of CFA duties, and that include the preparation and periodic update of a list of contractors for which the department is the CFA.

In written comments on a draft of this report (reprinted in their entirety in appendix III), CMS and HHS agreed with each of our 10 new recommendations and described steps planned to address them. CMS also stated that the recommendations will serve as a catalyst for improvements to the internal controls for its contracting function. In its comments, CMS also expressed concerns about the scope and timing of our work with respect to our November 2007 recommendations and disagreed with our assessment of the status of 5 of the 7 recommendations we made in that report. We address the concerns CMS raised in its comment letter below and include additional information at the end of appendix III.
In its comments, CMS stated its belief that the 11 internal controls we reviewed did not provide a complete picture of its internal controls over contract management activities. We acknowledge that there are many internal controls that are and can be instituted by agencies to help safeguard assets, prevent and detect fraud and errors, and help government program managers achieve desired results through effective stewardship of public resources. As described in appendix I, we selected 11 controls that we determined to be “key” based on GAO’s standards for internal control, the FAR and HHSAR, CMS’s policies and procedures, and other factors including our prior audit findings regarding CMS’s acquisition controls and the nature of CMS’s acquisition function. CMS stated its belief that “virtually all” of the errors we identified related to “perceived documentation deficiencies.” CMS stated it was encouraged that the errors we found did not involve more substantive departures from the FAR or HHSAR. We disagree with CMS’s overall assessment of our findings and message of the report. The internal controls we tested are key to ensuring that contracting activities, both pre-award and post-award, mitigate risks to the federal government. A number of the findings we identified during the testing of a statistically valid sample of contract files involved the lack of documentation that the controls were performed. Lack of documentation reduces management’s ability to ascertain whether these important controls were appropriately implemented and therefore is a serious internal control deficiency. OAGM management’s downplaying of the overall message of the report—that control deficiencies are pervasive—further illustrates the weak internal control environment. Setting an appropriate control environment, especially “tone at the top,” is key to ensuring that staff take all appropriate steps to mitigate risk and protect tax dollars from fraud, waste, and abuse. 
CMS also stated that a reasonable amount of time had not yet elapsed since the issuance of our November 2007 report to allow for corrective actions to have taken place. A significant number of our current report findings, including weaknesses in the control environment, were based on observations and interviews with OAGM officials and reviews of related documentation such as policies and strategic plans. Our current review was completed in September 2009, nearly 2 years after the issuance of our November 2007 report. While CMS also stated that the contract actions we reviewed took place in fiscal year 2008, it is important to note that we considered the timing of CMS's corrective actions when evaluating the controls we tested. For example, CMS's Acquisition Policy 02-03, which identifies the levels of approval required from agency officials based on the estimated dollar value of acquisitions awarded through other than full and open competition, was implemented in April 2008. We applied these approval levels only to the awards and modifications in our sample that were made after the policy was implemented. Furthermore, our observations and recommendations related to CMS's control environment are based on conditions that continued to exist in September 2009. CMS disagreed with our determination that its actions to address five of the seven prior recommendations were not sufficient. These prior recommendations were aimed at improving preventive controls. Preventive controls, such as policies and criteria for pre-award activities and a sound invoice review process prior to payment, are the first line of defense in reducing the risk of improper payments or waste. We continue to believe that the limited actions OAGM management has taken, and in some cases management's inaction, fall short of expectations and miss the intent of improving CMS's overall system of control over its acquisition activities.
For example, CMS asserted that its Acquisition Policy 16-01 entitled “Invoicing Payment Procedures” satisfies two of the prior recommendations. The intent of these two recommendations was to ensure that contractors provided adequate support to facilitate an appropriate detailed review of the invoiced costs prior to payment and that CMS develop clear risk-based criteria for the use of negative certification. CMS uses negative certification—a process whereby it pays contractor invoices without knowing whether they were reviewed and approved—in order to ensure invoices are paid in a timely fashion. We examined this policy during our review and found it to be unresponsive to the recommendations because it did not provide the recommended additional guidelines on what the contractor should provide that would constitute sufficient detail to support amounts billed on contractor invoices. It also did not describe under what circumstances or in what situations it was acceptable for CMS to use negative certification. With regard to a third prior recommendation that CMS review the questionable payments we identified, CMS described in its comment letter specific actions taken to investigate some of the questionable payments and subsequently provided documentation of actions it had taken to investigate the questionable payments we identified for three contractors. After reviewing this information, we revised our assessment of the status of efforts taken by CMS from “No Actions Taken” to “Actions Insufficient.” While CMS had taken some action, the steps have not resolved the questionable payments we identified. For example, CMS highlighted $67 million that we previously questioned that was related to one specific contractor and stated that these questionable payments are being investigated via a fiscal year 2008 incurred cost audit. 
The $67 million we questioned related to costs incurred in fiscal years 2004, 2005, and 2006 and therefore would not be covered or investigated in an audit of fiscal year 2008 incurred costs. Moreover, as of the date of the report, CMS could not tell us whether it had recovered any of the questioned amounts. We continue to believe that CMS’s actions to date are insufficient and more actions are needed to investigate and recover the questionable payments we identified. No other changes were made to the report as a result of agency comments. See appendix III for a discussion of the remaining two prior recommendations (points 2, 3, and 4) for which CMS disagreed with our assessment of its progress and our analysis of comments CMS made on our new recommendations. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time, we will send copies to the Secretary of Health and Human Services, Administrator of the Centers for Medicare and Medicaid Services, and interested congressional committees. Copies will also be available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9095 or dalykl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are acknowledged in appendix IV. To determine the extent to which the Centers for Medicare and Medicaid Services (CMS) implemented effective internal control procedures over contract actions, we focused on contracts that were generally subjected to the Federal Acquisition Regulation. 
We also interviewed senior management of CMS's Office of Acquisition and Grants Management (OAGM), contracting officers and specialists, and cost/price team members, as well as officials in the Office of Acquisition Management and Policy at the Department of Health and Human Services (HHS). We selected 11 internal controls over contracting and payments to contractors to test for this report, ranging from ensuring contractors had adequate accounting systems prior to the use of a cost reimbursement contract to certifying invoices for payment. We selected controls to test based on our review of GAO's standards for internal control, the Federal Acquisition Regulation requirements, and agency policies and procedures, taking into consideration prior audit findings and the contract types most frequently awarded. The controls we tested are key to effective administration of the contract in that the lack of implementation would significantly increase the risk of improper payments or waste. To test internal control procedures over contract actions, we selected a stratified random sample of 102 contract actions totaling $140.7 million in fiscal year 2008 obligations from a population of 2,441 contract actions totaling $2.5 billion in fiscal year 2008 obligations. We stratified the contract actions by type of action, namely contract awards and contract modifications, recorded in CMS's PRISM database from October 1, 2007, through September 30, 2008. Each contract action was either a new contract award or a modification to an existing contract. With this probability sample, each contract action in the sample frame had a nonzero probability of being included, and that probability could be computed for any contract action. Each stratum was subsequently weighted in the analysis to account statistically for all the contract actions in the sample frame, including those that were not selected.
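The stratum weighting described above can be sketched as follows. Note that the per-stratum population and sample counts below are hypothetical illustrations (the report gives only the totals: a frame of 2,441 contract actions and a sample of 102); only those totals match the figures in this appendix, and the deficiency counts are likewise invented for the example.

```python
# Illustrative sketch of stratified-sample weighting and projection.
# The per-stratum splits and deficiency counts are HYPOTHETICAL;
# only the totals (2,441 actions, sample of 102) come from the report.

# stratum -> (population size N_h, sample size n_h)
strata = {
    "new_awards":    (900, 40),
    "modifications": (1541, 62),
}

# Hypothetical count of sampled actions in each stratum found to have
# at least one control deficiency.
deficient_in_sample = {"new_awards": 35, "modifications": 52}

def projected_total(strata, counts):
    """Project sample counts to the population: each sampled action
    represents N_h / n_h actions in its stratum."""
    total = 0.0
    for name, (N_h, n_h) in strata.items():
        weight = N_h / n_h          # sampling weight for this stratum
        total += weight * counts[name]
    return total

est = projected_total(strata, deficient_in_sample)
frame_size = sum(N for N, _ in strata.values())
print(f"Estimated deficient actions: {est:.0f} of {frame_size}")
print(f"Estimated rate: {est / frame_size:.1%}")
```

Each stratum's weight N_h / n_h is what "accounts statistically" for the unsampled actions; summing weighted counts yields a population-level estimate rather than a raw sample percentage.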
Results from this statistical sample were projected to the population of contract actions made from October 1, 2007, through September 30, 2008. See table 2 for specific details related to contract actions selected in the sample. Table 3 provides further details on the control procedures we tested, the criteria or source for each procedure, and detailed results. We also obtained information from agency officials regarding contract closeout, cognizant federal agency responsibilities, audit funding, and staff resources. We used the internal control standards as a basis for our evaluation of CMS's contract management control environment. We assessed the reliability of CMS's two acquisition databases, the Departmental Contracts Information System (DCIS) and PRISM, by (1) performing electronic testing of required data elements and (2) interviewing both CMS and HHS officials on quality assurance activities performed on the databases. In addition, we used the statistical random sample selected to test the application of controls to also test the accuracy of the data in the systems. We determined that only basic contract information maintained in the PRISM database, such as vendor name and obligation amount, was reliable for purposes of this report. The historical obligation amounts presented in the background section of the report come primarily from CMS's PRISM database. We requested comments on a draft of this report from HHS and CMS. We received written comments on October 2, 2009, and have summarized those comments in the Agency Comments and Our Evaluation section of this report. Our response to certain specific CMS comments appears in the GAO Comments section of appendix III. We conducted this performance audit in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We conducted our audit work in Washington, D.C., and Baltimore, Maryland, from July 2008 through September 2009.

No action taken. The Department of Health and Human Services (HHS) reported that (a) the policy and criteria for pre-award contracting activities are already established in the Federal Acquisition Regulation (FAR) and the Health and Human Services Acquisition Regulation (HHSAR), (b) existing policies would be reviewed and changes would be made as appropriate, and (c) certain pre-award activities, such as the need for adequate accounting systems for cost reimbursement contracts, would be reviewed with staff at internal training sessions. While the Office of Acquisition and Grants Management (OAGM) did conduct internal training on various pre-award activity topics, such as the proper circumstances for using sole source contracts, the Centers for Medicare and Medicaid Services' (CMS) actions are unresponsive to the recommendation. OAGM still has not developed policies and criteria that provide clear procedures for staff to follow during the pre-award stage, such as applicable approval levels, time frames, agency forms, and routing processes. Furthermore, while the FAR and HHSAR provide regulations agencies must follow, it is up to agency management to develop agency-specific policies and other guidance that implement those regulations.

HHS reported that while HHS is the CFA for CMS's contractors, policies and procedures need to be developed both at the department level and at CMS.
Further, it stated that to the extent CMS is designated to perform functions supporting HHS as the CFA, CMS will develop appropriate procedures for monitoring Cost Accounting Standards compliance and for coordinating efforts with other agencies.

Actions insufficient. No policies and procedures developed. Neither HHS nor CMS has developed policies that clearly define key areas of authority and duties for the CFA responsibilities. Moreover, neither HHS nor CMS has developed a list of contractors for which HHS is the CFA.

Completed. HHS reported that CMS revised its invoice review policy to better define roles and responsibilities. We reviewed the revised CMS invoice review policy and determined that the new invoice payment procedures contain clear roles and responsibilities.

No action taken. HHS reported that CMS revised its invoice review policy. CMS's actions are unresponsive to the recommendation. The revised policy does not specify the documentation contractors would be required to submit to support their invoices or what would be needed for the project officer, contracting specialist, or contracting officer to validate information in the invoices.

No action taken. HHS reported that CMS revised its invoice review policy. CMS's actions are unresponsive to the recommendation. The revised policy still treats negative certification as the default. It does not provide criteria for considering potential risk factors in the use of negative certification in the review of contractor invoices or discuss circumstances that warrant the use of this method.

HHS reported that CMS provided invoice review training to OAGM staff and project officers on May 7 and May 15, 2008. Actions insufficient. Actions taken do not achieve the intent of the recommendation. According to the OAGM internal training schedule, OAGM provided training on invoice review procedures.
However, since CMS has not addressed two of the three recommendations on invoice review—specifically, guidelines to contracting officers on what constitutes sufficient detail to support amounts billed and criteria for the use of negative certification (see above)—the actions taken do not achieve the intent of the recommendation.

Completed. HHS reported that it has implemented the Acquisition Career Management Information System (ACMIS), a centralized tracking mechanism that maintains training records for the personnel assigned to contract activities. HHS's implementation of the centralized system to track training addressed our recommendation.

Actions insufficient. HHS reported that CMS developed a plan to reduce the backlog of contracts awaiting closeout and that CMS reduced this backlog by the end of fiscal year 2007 from 581 to 407 contracts. CMS provided its fiscal year 2009 contract closeout plan; however, the plan did not include a comprehensive strategy to reduce the backlog of contracts that are eligible and overdue for closeout. For example, it did not contain a workload analysis, such as a list of contracts eligible for closeout by contracting officer or specialist or an estimate of the number of hours or audit funds needed to close the contracts. The fiscal year 2009 plan contained only three bullets stating that OAGM would provide quarterly reports to the division directors and training to OAGM staff. It also stated that OAGM would establish "a contract closeout day." Furthermore, as discussed in the body of this report, the backlog of contracts overdue for closeout persists.

Actions insufficient. HHS reported that CMS will review the questionable payments identified in GAO-08-54 to determine whether CMS should seek reimbursement from the contractors. It further stated that the questionable costs will be identified in the course of incurred cost audits, which CMS will obtain in the course of closing out the contracts.
Additionally, CMS described specific actions taken to investigate some of the questionable payments. The actions taken to investigate questionable payments were either insufficient or incomplete. CMS's approach of delaying investigation and recovery to the end of the contract performance period does not result in timely resolution of questionable payments of this magnitude. Audits and inquiries into the issues we identified in our report should be made as soon as possible.

1. See the "Agency Comments and Our Evaluation" section.

2. As we stated in our November 2007 report, we acknowledge that the time frames for implementing the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 added schedule pressures for the Centers for Medicare and Medicaid Services (CMS). At the same time, the compressed time frames and resulting contracting practices added risk to the contracting process. Many of the findings in the November 2007 report were a result of the increased risk together with inadequate compensating controls to mitigate that risk.

3. While the Virtual Acquisition Office—off-the-shelf acquisition software that provides Web links to acquisition regulations and templates to aid in the completion of common acquisition activities—can be a useful tool for contracting officers, specialists, and Office of Acquisition and Grants Management (OAGM) management, it does not represent the agency-specific policies and criteria we recommended that CMS implement for pre-award activities. As such, CMS's actions are unresponsive to the recommendation. Agency-specific policies provide guidance on how CMS staff are expected to perform their day-to-day duties.

4. As discussed in the report, we continue to believe the contract closeout plan does not sufficiently address our recommendation because it did not include a comprehensive strategy to reduce the backlog of contracts that are eligible and overdue for closeout, nor did it contain a workload analysis.
The plan was for fiscal year 2009; no other plans were developed. Additionally, CMS stated that it achieved a 30 percent reduction in the number of contracts eligible for closeout since 2007. However, CMS could not fully support the analysis it provided, and the analysis related only to the period between April 2007 and September 30, 2007. According to data provided to us by CMS during this current review, since September 30, 2007, the number of contracts eligible for closeout has increased by 24 percent, from 1,300 in 2007 to 1,611 in 2009. Additionally, the number of contracts overdue for closeout has increased from 407 in 2007 to 594 in 2009, a 46 percent increase.

5. The contract file document checklist employed by CMS at the time of our review identified key documents that may be included in a contract file. This checklist is usually completed by either the contract specialist or the contracting officer. While such a checklist is useful for ensuring that certain documents are contained in a contract file, it did not reflect certain requirements in which we found CMS to be deficient, such as ensuring contractors had adequate accounting systems prior to the use of a cost reimbursement contract. We are encouraged that CMS is taking additional actions to implement the checklists developed by the Department of Health and Human Services to be used in fiscal year 2010.

6. During our review, and as a result of multiple conversations with OAGM staff, including the team of contract auditors, we revised our testing procedures to consider and accept provisional indirect cost rates that were not maintained in the individual contract file but were maintained by the cost/price team in its central files. Therefore, all provisional indirect cost rate determinations that were maintained by OAGM, regardless of location, were considered during our review.
The steps CMS described that it plans to take will be important for ensuring that contract office staff have the information needed readily available to manage contracts throughout the contract life cycle.

7. As stated in the report, CMS's current procedures regarding the accuracy of data entered into the Departmental Contracts Information System (DCIS) do not include procedures to ensure that the resolution of potential errors is properly documented and that errors are corrected in a timely manner. We further found that OAGM was not fully implementing its policy because it reviewed every 88th action, rather than every 50th as provided for in the plan. However, we are encouraged by the recent initiatives described in CMS's comments, such as the scorecard, and by OAGM's commitment to review current policies and procedures and to make improvements where necessary.

Staff members who made key contributions to this report include Marcia Carlsen and Phil McIntyre (Assistant Directors), Sharon Byrd, Richard Cambosos, Abe Dymond, Patrick Frey, Jason Kelly, John Lopez, Ron Schwenn, Omar Torres, Ruth Walk, and Danietta Williams.
Pervasive deficiencies in CMS contract management internal control increase the risk of improper payments or waste. Specifically, based on our statistical random sample of 2008 CMS contract actions, GAO estimates that at least 84.3 percent of fiscal year 2008 contract actions contained at least one instance where a key control was not adequately implemented. GAO also estimates that at least 37.2 percent of fiscal year 2008 contract actions had three or more instances in which a key control was not adequately implemented. The contract actions GAO evaluated were generally subject to the Federal Acquisition Regulation. For example, CMS used cost reimbursement contracts without first ensuring that the contractor had an adequate accounting system. Also, project officers did not always certify invoices for payment. These deficiencies were due in part to a lack of agency-specific policies and procedures to help ensure proper contracting expenditures. These control deficiencies also stem from a weak overall control environment, characterized primarily by inadequate strategic planning for staffing and funding resources. CMS also did not accurately capture data on the nature and extent of its contracting, due to a lack of quality assurance procedures over data entry; this hinders CMS's ability to manage its acquisition function by identifying areas of risk. CMS also has not substantially addressed seven of the nine recommendations made by GAO in 2007 to improve internal control over contracting and payments to contractors. For example, CMS has not made progress in clarifying the roles and responsibilities for implementing certain contractor oversight responsibilities and, as of July 2009, CMS still had a backlog of contracts that were overdue for closeout, putting CMS at increased risk of not identifying or recovering improper payments or waste.
The continuing weaknesses in contracting activities and limited progress in addressing known deficiencies will continue to put billions of taxpayer dollars at risk of improper payments or waste.
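The "at least 84.3 percent" and "at least 37.2 percent" figures are one-sided lower bounds projected from GAO's statistical random sample. The report does not disclose the sample size or estimation method, so the sketch below is purely illustrative: it computes a one-sided 95 percent lower confidence bound for a proportion using the normal approximation, with a hypothetical sample.

```python
import math

def one_sided_lower_bound(successes, n, z=1.645):
    """One-sided 95 percent lower confidence bound for a proportion,
    using the normal approximation (z = 1.645 for 95 percent, one-sided).
    The sample figures used below are hypothetical, not GAO's actual data."""
    p = successes / n
    return p - z * math.sqrt(p * (1 - p) / n)

# Hypothetical: 90 of 100 sampled contract actions had a control lapse
print(round(one_sided_lower_bound(90, 100), 3))  # 0.851
```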
USDA’s Food Guide Pyramid visually depicts federal guidance for the number of servings needed in each of five food groups to provide a healthy diet. (See fig. 1.) For example, the Pyramid recommends eating from 2 to 4 servings of fruit and 3 to 5 servings of vegetables daily. The specific number of servings of fruits and vegetables is based on nutrient requirements and energy needs, which are associated with gender, age, and activity level. (See table 1.) For example, most children and many teenagers and adults should consume 7 servings daily—4 of vegetables and 3 of fruit. All federal nutrition education and food assistance programs are required to promote the Dietary Guidelines for Americans. USDA uses the guidelines and the Food Guide Pyramid as the science base for nutrition education efforts in the food assistance programs. Table 2 provides information on participation, benefits, and nutrition education funding for the Food Stamp Program, WIC, and the School Lunch and Breakfast Programs, as well as the WIC and seniors farmers’ market programs. USDA’s food assistance programs serve one in six Americans each year. As table 2 shows, in terms of funding, the Food Stamp Program is by far the largest, having over $17.8 billion in funding in fiscal year 2001. Slightly over half of the recipients are children, and about 20 percent are elderly or disabled. In terms of participation, the School Lunch and Breakfast Programs serve the greatest number of people (27.5 million and 7.8 million, respectively)—and are available to nearly 48 million school children. As a result of school meals, participants in these programs consume greater amounts of several important nutrients, such as calcium. WIC has seven food packages—three for pregnant or postpartum women, two for children, and two for infants. 
Depending on the package, WIC benefits may be used to purchase cereals, 100 percent fruit (or vegetable) juice, eggs, milk, cheese, peanut butter, dried beans, infant formula, tuna, and carrots. WIC serves about half of all infants and a quarter of all children from 1 through 4 years old in the United States. WIC is considered one of the most successful nutrition interventions, increasing birth weights and providing other health benefits. Regarding nutrition education, USDA provides the lion’s share of federal funding, although HHS, DOD, and other federal agencies fund nutrition education efforts as well. For food assistance participants, USDA obligated about $398 million toward nutrition education in fiscal year 2001. Nutrition education for WIC participants accounted for about half of USDA’s total nutrition education funding. As table 2 shows, funding per participant for nutrition education varied greatly—from $32 per WIC participant to less than $0.27 per child in the school meal programs. In fiscal year 2001, CDC spent $16.2 million for nutrition, obesity, and physical activity efforts; the National Cancer Institute spent $3.6 million for 5 A Day initiatives; and DOD spent $3.5 million for nutrition education activities for the military services. In addition to food assistance and nutrition education programs, other federal programs and policies—such as trade restrictions and environmental regulations—have the potential to affect the consumption of fruits and vegetables. For example, trade restrictions in the form of tariffs on some fruits and vegetables result in higher prices that could reduce U.S. consumption of those fruits and vegetables. Likewise, environmental regulations limiting pesticide use may increase farm costs, which can reduce the quantities of fruits and vegetables sent to market, thereby increasing price and lowering consumption.
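The per-participant funding gap follows from the figures above. The sketch below (illustrative only, using the report's rounded numbers; the implied participant count is a back-of-the-envelope estimate, not a report figure) shows the arithmetic:

```python
# Fiscal year 2001 nutrition education funding (figures from the report)
total_obligated = 398_000_000        # USDA total for food assistance participants
wic_share = total_obligated * 0.5    # WIC accounted for about half
per_wic_participant = 32.00          # dollars per WIC participant
per_school_child = 0.27              # dollars per child in school meal programs

# Implied number of WIC participants (rough estimate, not a report figure)
implied_wic_participants = wic_share / per_wic_participant
print(round(implied_wic_participants / 1e6, 1))       # about 6.2 million

# WIC per-head funding exceeds school-meal per-head funding by roughly 119 to 1
print(round(per_wic_participant / per_school_child))  # 119
```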
However, the effect of trade restrictions and environmental regulations on farm-level prices would have to be substantial to have a large impact on consumption, because farm-level prices account for about one-third of retail prices for fruits and vegetables. Appendix III provides more detail on how these programs and regulations can affect fruit and vegetable consumption. The food “environment” is a major factor that affects the consumption of fruits and vegetables. The food environment includes prices set by retailers; access to and availability (year-round or seasonal) in local groceries, markets, and restaurants; the quality of fresh produce; the time available for shopping, preparation, and eating; and the ready availability, appeal, advertising, and prices of other foods. Taste preferences and familiarity with foods of a particular culture are other important factors in food choices. Heart disease, cancer, stroke, and diabetes are among the leading causes of death for Americans. In fact, medical experts, including the Surgeon General, have noted that physical inactivity and poor diet—of which low consumption of fruits and vegetables is a key component—cause diseases that result in the death of more than 300,000 Americans each year. Studying the relationship between diet and chronic diseases is challenging because, among other things, it is difficult to measure and account for all potential risk factors; this challenge is compounded because chronic diseases may take many years to develop. However, extensive and consistent evidence shows that diet is one of the leading risk factors for these diseases.
Although no diet can guarantee full protection against any disease, recommendations from HHS and USDA and from health organizations such as the American Cancer Society and the American Heart Association indicate that consuming the recommended 5 to 9 daily servings of fruits and vegetables as part of a healthy diet provides some of the best dietary protection against disease. Research links increased fruit and vegetable consumption to reduction in the risk of heart disease and many types of cancer. Research also suggests their potential benefits for reducing the risk of stroke, diabetes, diverticulosis, and obesity, according to reviews by NIH, CDC, and academic experts. Fruits and vegetables are among the most concentrated natural sources of over 100 beneficial vitamins, minerals, and other dietary compounds, such as fiber and antioxidants, important to disease prevention. With regard to the specific health benefits of fruits and vegetables, studies show that people who consume 5 or more servings daily have about one-half the cancer risk of those who consume 2 or fewer servings, according to a National Institutes of Health report. That report also stated that, according to studies, diets high in fruits and vegetables are associated with a 20 to 40 percent reduction in the occurrence of coronary heart disease. Appendix IV describes some of the evidence that links fruit and vegetable consumption to reducing the risk for heart disease and cancer, as well as possible links to reducing the risk for stroke, diabetes, obesity, and diverticulosis. Because it is not clear how single nutrients, combinations of nutrients, the overconsumption of nutrients, or age affect one’s risk of specific diseases such as cancer, experts advise consuming a variety of fruits and vegetables to ensure an adequate intake of all known and as yet unidentified dietary compounds.
A variety of deeply colored fruits, such as apricots, blueberries, and citrus fruits, and dark green or orange vegetables, such as spinach and carrots, are particularly rich in vitamins, minerals, antioxidants, other phytochemicals, and fiber. Under current federal policy, guidance, and nutrition education programs, the consumption of fruits and vegetables by the general public has increased somewhat, yet most Americans consume fruits and vegetables below recommended levels. The most widely recognized nutrition guidance—the Food Guide Pyramid graphic—does not clearly convey some of the guidance that could help Americans close this consumption gap, and USDA is currently assessing the Pyramid for possible updates. In recognition of the diet shortfall in fruits and vegetables, HHS’s strategic plan identifies 5 A Day as one strategy for achieving its objective of improving the diet of Americans. Moreover, the April 2002 announcement by HHS and USDA to expand 5 A Day may further encourage Americans to consume the recommended 5 to 9 daily servings. Under federal policy and guidance—the Dietary Guidelines for Americans, Healthy People 2010, and the Food Guide Pyramid—the consumption of fruits and vegetables has improved somewhat. Between the 1989-91 and 1994-96 time frames, the most recent years for which consumption data are available, fruit and vegetable consumption each increased by 0.2 serving (or nearly half a serving in total), such that the average consumption of fruits and vegetables is near the minimum recommended 5 servings a day. In addition to federal policy and guidance, USDA’s Economic Research Service points to the increased year-round availability of fruits and vegetables as another factor influencing consumption. Nonetheless, most Americans still fall short of consuming the recommended levels for health promotion and disease prevention.
As table 3 shows, even with the increase, only 23 percent of Americans get their recommended servings of fruits, and 41 percent, their recommended servings of vegetables. According to USDA and NIH researchers, consumption of the deeply colored fruits and deep green or orange vegetables falls far short of what is recommended for disease prevention. USDA’s data for dark green or orange vegetables indicate that consumption increased by less than one-tenth of a serving between the 1989-91 and 1994-96 surveys. Indeed, only about 8 percent of Americans get the recommended daily 1 or more servings of dark green or orange vegetables. Moreover, only 3 percent of Americans get both the recommended number of servings of vegetables and at least 1 serving daily of a dark green or orange vegetable. There may be many reasons why the consumption of fruits and vegetables, particularly deeply colored ones, is low. For example, many people may not be aware of the importance of eating deeply colored fruits and vegetables. In addition, taste, price, and seasonal availability, among other factors, may affect consumption, as might the ready availability of other foods. Many Americans do not incorporate adequate variety into their daily diet. As noted earlier, eating a wide variety of fruits and vegetables is important because different fruits and vegetables are rich in different nutrients. For example, fruits such as apricots and blueberries are excellent sources of protective phytochemicals, and, although most citrus is consumed as juice, a fresh orange has 27 times the fiber content of orange juice. As shown in figure 2, three fruit sources—citrus (fresh and juice), apples (fresh and juice), and bananas—accounted for 52 percent of total fruit servings in 1999. While these provide important nutrients, they do not supply all the nutrients important for disease prevention and health promotion. (See app. V for a detailed listing of fruits in each of the categories for fig. 2.)
Americans’ vegetable consumption tells a similar story. Although federal dietary guidance recommends eating a variety of vegetables—including dark green or orange; starchy (e.g., potatoes, dry beans, peas, and lentils); and others—consumers eat a limited variety. As shown in figure 3, three foods—white potatoes, iceberg lettuce, and canned tomatoes—accounted for 53 percent of total vegetable servings in 1999. Although white potatoes are an excellent source of potassium and vitamin C and are naturally low in fat, frozen potatoes (mostly french fries) and potato chips together accounted for 43 percent of starchy vegetable servings and 17 percent of total vegetable servings. The added fat in french fries and potato chips carries calories that contribute to overweight and obesity. Moreover, the consumption of dark green or orange vegetables, those most likely to prevent disease and promote health, totaled only 0.4 serving per day, well below the 1 or more daily servings suggested for the average person. (See app. V for a detailed listing of vegetables in each of the categories for fig. 3.) USDA’s Food Guide Pyramid graphic—the most widely recognized nutrition guidance—does not communicate the need to consume a variety of fruits and vegetables, particularly the deeply colored ones that the Dietary Guidelines for Americans identifies as important for disease prevention and health promotion. (See fig. 1 on page 6.) USDA pointed out that the Pyramid graphic, introduced in 1992, was not intended to stand alone but, rather, was to be used along with the information in the Pyramid brochure. However, the Pyramid graphic is typically displayed alone—on food packages and in classrooms, grocery stores, and cafeterias. The Pyramid brochure, which was modified slightly in 1996, provides selection tips for a variety of fruits and vegetables, including dark green or orange vegetables.
According to HHS officials, a diet based on the Pyramid graphic provides adequate nutrient intake when people consume a variety of the recommended servings of fruits and vegetables, for example, half of fruit servings from citrus, melons, or berries and one-third of vegetable servings from dark green or orange vegetables. Furthermore, a 2001 publication by the National Cancer Institute stated that the inadequacies and imbalances in the current American diet—such as the low consumption of dark green or orange vegetables—relate to issues that were integral to the development of the Food Guide Pyramid but not captured in the Pyramid graphic. Although the Pyramid graphic is based on analyses from more than 10 years ago, USDA’s Center for Nutrition Policy and Promotion reanalyzed it in the mid-1990s and determined that it was consistent with the 1995 Dietary Guidelines for Americans and met most nutrient objectives. The center recently initiated a reassessment of the Food Guide Pyramid to ensure that it is consistent with the 2000 Dietary Guidelines for Americans and new nutrient intake recommendations released by the National Academy of Sciences. The center plans to complete its assessment and any revisions before the 2005 update of the Dietary Guidelines for Americans. The 5 A Day program—a public-private partnership between federal/state/local governments, the fruit and vegetable industry, and supermarkets—is the only federal nutrition education and intervention effort focused on increasing fruit and vegetable consumption. Its long-range purpose is to help reduce the incidence of cancer and other chronic diseases through dietary improvements—specifically, by getting Americans to consume 5 to 9 servings of fruits and vegetables daily.
The National Cancer Institute coordinates and provides the funding for the federal side of the partnership, CDC develops and manages state-level programs, and the Produce for Better Health Foundation, a nonprofit organization of approximately 800 members of the fruit and vegetable industry and supermarkets, coordinates the private side of the partnership. To support a variety of 5 A Day program and research activities, the National Cancer Institute spent $3.6 million in fiscal year 2001. In fiscal year 2002, the Institute expects to spend $4.5 million; California, Florida, and Arizona have committed to providing $3 million, $1.7 million, and $0.3 million, respectively; and industry has committed about $3 million. The success of 5 A Day may be due to its use of a combination of strategies, including hands-on experiences (e.g., food preparation and field trips), visual cues (signs on cafeteria doors and at registers), and media campaigns (TV, radio, and print). Following are three examples of 5 A Day community-based programs that have had sustained results in improving fruit and vegetable consumption: An 8-week program of activities for 4th- and 5th-grade children in three California communities increased the consumption of fruits and vegetables by over 1.5 servings after the first year and about 1 serving after the second year, compared with consumption by children who did not receive the activities. The program included classroom activities (lessons, problem-solving, and taste testing), cafeteria/food service activities (promotion of fruits and vegetables), and parent activities (homework assignments, brochures, refrigerator magnets). That program was subsequently expanded statewide.
A 2-year study in 28 small- to medium-sized businesses in Seattle, Washington, gave half the sites 5 A Day signs in the work environment as constant reminders about eating fruits and vegetables and worked with food-service staff at those sites to make more fruits and vegetables available as part of the regular menus. Nutrition education was provided through a specialist who visited the work sites, and an employee advisory board was used to encourage behavior change. Two years after the program concluded, the employees who received the 5 A Day program averaged about a third of a serving more of fruits and vegetables than the control group. A 20-month program involving 49 predominantly African-American churches in North Carolina resulted in a 0.85-serving increase in fruit and vegetable consumption at the 2-year follow-up. According to study participants, having more fruits and vegetables served at church functions, having the pastor promote the 5 A Day message from the pulpit, and receiving personalized printed materials were influential in increasing their consumption. The American Cancer Society has begun a nationwide program based on the design of this study. In November 2000, NIH’s National Cancer Institute reported the results of an independent review of the science underlying the 5 A Day program, its implementation and accomplishments, and the degree to which its goals and objectives were achieved. The evaluation found that the evidence was convincing and that the program contributed to the small increases in fruit and vegetable consumption over the past decade.
It recommended that the National Cancer Institute, among other things, increase resources, staffing, and expertise available to the states for the dissemination, monitoring, and evaluation of the 5 A Day program; expand 5 A Day by partnering with CDC to manage states’ 5 A Day programs and develop a surveillance plan to monitor fruit and vegetable consumption; and partner with USDA to better focus dietary guidelines and promote research in agriculture and economic policies. The evaluation further recommended that the National Cancer Institute partner with other NIH institutes to promote fruit and vegetable research. In response to those recommendations, in April 2002, the Secretaries of Agriculture and Health and Human Services announced plans to expand 5 A Day. A memorandum of understanding signed by (1) the Director, National Cancer Institute; (2) the Director, National Center for Chronic Disease Prevention and Health Promotion, CDC; and (3) USDA’s Under Secretaries for Food, Nutrition, and Consumer Services; Research, Education, and Economics; and Marketing and Regulatory Programs formalized commitments to enhance and more effectively coordinate the national 5 A Day partnership. It also established a framework for cooperation between the agencies to promote 5 A Day, whereby each agency pledges its commitment to encourage all Americans to eat 5 to 9 servings of fruits and vegetables daily. To be successful, however, any crosscutting, multiagency effort, such as the new 5 A Day initiative, depends on certain key elements, including clear leadership, an overarching strategy, and effective partnerships between the federal and state agencies. These are critical elements that underpin the Government Performance and Results Act of 1993, which provides agencies with a systematic approach for managing programs. The Results Act’s principles include developing a strategy, identifying goals and objectives, and establishing performance measures.
The act states that performance goals should be sufficiently precise to allow for a determination of performance. When participants in a crosscutting program understand how their missions contribute to a common goal—such as achieving the Healthy People 2010 nutrition objectives—they can develop specific goals and objectives and implementation plans to reinforce each other’s efforts. We recognize the difficulties associated with making changes in dietary habits. However, we believe that if HHS and USDA work together in a comprehensive strategic approach, they are more likely to be successful. Since it released Healthy People 2010, HHS has identified 5 A Day in its strategic plan for 2001-2006 as one strategy for achieving its objective of improving the diet of Americans. With NIH’s National Cancer Institute as the lead federal agency for the new 5 A Day initiative, a steering committee—composed of the National Cancer Institute, CDC, USDA, the American Cancer Society, the Produce for Better Health Foundation, and others—was created to plan and collaborate on specific activities to achieve the 5 A Day goal. The memorandum of understanding specified activities for each agency and supported, among other things, comprehensive planning at the federal, state, and local levels and the improved availability of high-quality data related to fruit and vegetable consumption. If translated into specific strategies and targets in agencies’ annual performance plans, these commitments could provide a framework to guide HHS’s efforts to help Americans achieve the Healthy People 2010 objectives for fruits and vegetables. Fruit and vegetable consumption by food stamp participants and women in WIC is similar to that of low-income individuals who do not participate in these programs; both low-income groups have lower consumption than the general public. However, children in WIC and participants in the school meal programs and farmers’ market programs have begun to show some improvement.
In the April 2002 announcement by USDA and HHS to expand 5 A Day, USDA pledged to support 5 A Day in its food assistance programs. The 5 A Day commitments could provide a framework for incorporating the 2010 objectives in USDA’s strategic and performance plans. The key purposes of the food assistance programs are to reduce hunger, increase food security, and improve nutrition and health, while supporting American agriculture. Increasing fruit and vegetable consumption is not a primary focus, yet it is part of USDA’s nutrition education efforts under these programs. Nonetheless, the consumption of fruits and vegetables by food stamp participants and women in WIC differs little from that of similar nonparticipants, and consumption by children in the school meal programs is greater than that of nonparticipants, as shown in the right column of figure 4. In addition, according to data covering 1994-96 and 1998—different time frames from those presented in figure 4—children in WIC also have increased their consumption of fruits and vegetables. The limited information on farmers’ market participants—in WIC and in the seniors program—suggests that they too consume more fruits and vegetables than nonparticipants. Our analyses of fruit and vegetable consumption, food benefits, and related initiatives for the Food Stamp Program, WIC, the National School Lunch and Breakfast Programs, and the WIC and Seniors Farmers’ Market Nutrition Programs are as follows. The Food Stamp Program. Food stamp recipients receive, on average, less than $80 monthly to help them purchase foods of their choice. They consume about the same amounts of fruits and vegetables as similar low-income nonparticipants. According to USDA, program participants consumed 1.3 servings of fruits and 3.0 servings of vegetables, as compared with 1.3 and 3.1 servings, respectively, for low-income nonparticipants.
Both low-income groups fall short of the national average of 1.5 servings of fruits and 3.3 servings of vegetables, as well as the recommended number of servings—2-4 of fruits and 3-5 of vegetables. Furthermore, both participants and low-income nonparticipants consumed only 0.1 serving each of deep green or orange vegetables—those most important to disease prevention. According to USDA, the program’s electronic benefit payment system may discourage participants from shopping at farmers’ markets because many markets do not have the technology needed to access payments. In 2002, New York State received $100,000 in federal funds to support a pilot program in the state to implement wireless and other innovative electronic solutions that will allow farmers’ markets to accept food stamp and WIC benefits. WIC. As with food stamp participants, women in WIC have about the same low fruit and vegetable consumption as similar nonparticipants. For example, according to USDA estimates, a 30-year-old woman in WIC would consume 0.2 serving more fruit (1.3 versus 1.1) and the same number of servings of vegetables (3.5) as a similar nonparticipant. Children ages 2 through 4 in WIC consume 0.3 serving more of fruit (1.4 versus 1.1) and 0.1 serving more of vegetables (1.2 versus 1.1). The five WIC packages for women and children designate allowable foods selected to improve nutrient intake. Only one WIC food package includes vegetables, and it provides only about one-quarter serving—all in carrots. In contrast, the five WIC packages for women and children provide far more servings of other food groups. For example, all five packages provide from 3.2 to 4.1 of the recommended 3 to 5 daily servings of dairy products. Appendix VI shows the foods in the five WIC packages for women and children. The National School Lunch and Breakfast Programs.
The millions of children who participate in School Lunch consume, on average, twice as many servings of vegetables at lunch as nonparticipants (1.3 servings versus 0.6); however, most of that difference is in the form of white potatoes—mostly french fries. With respect to fruit consumption, there is no difference between School Lunch participants and similar nonparticipants. The millions of children who participate in School Breakfast consume, on average, 0.4 serving more of fruit at breakfast than nonparticipants (0.7 serving versus 0.3). FNS officials pointed out that their most significant initiative to improve school meals (the School Meals Initiative) was begun in 1995 and that the most current consumption data (for 1994-96) would not capture potential increases in consumption that may have occurred after 1996. The School Meals Initiative included new nutrition standards for school meals and educational and technical resources to assist food service personnel in preparing nutritious and appealing meals. In 2001 USDA reported, from a survey of school food service authorities, that schools reported an increase in (1) the number of fruits and/or vegetables offered, (2) the purchases of fruits and vegetables, and (3) plate waste for cooked vegetables. However, it is unclear whether the increased overall purchases or offerings resulted in increased consumption, because purchases or offerings per student or meal were not reported. Although participation in the School Lunch and Breakfast Programs has been shown to improve dietary quality, 40 percent of children do not eat the School Lunch and 83 percent do not eat the School Breakfast in schools where the meals are offered. USDA reported that participation might be affected by other meal options available to students, such as foods sold a la carte, in vending machines, and in school stores or snack bars.
Those foods do not have to meet the nutrition standards required for the USDA-reimbursable meals of School Lunch and Breakfast. USDA reported that a la carte sales are higher in higher-income schools and that as a la carte sales increase, school meal participation decreases. WIC Farmers’ Market Nutrition Program. A USDA-funded evaluation of a 1991 pilot program that became the WIC Farmers’ Market program found, from a survey, that participants daily consumed an average of 0.2 serving more of both fresh fruits and vegetables than low-income nonparticipants. According to the survey, which was based on a random statistical sample, participants consumed an average of 3.6 servings of fruit and 4.1 servings of vegetables daily, compared with 3.4 and 3.9 servings, respectively, for nonparticipants. In surveys developed by the National Farmers’ Market Association and conducted by states annually from 1996 to 2000, most of the over 20,000 participants surveyed reported that they had increased their consumption of fresh fruits and vegetables. In the 2000 survey, 71 percent reported that they ate more fresh fruits and vegetables than usual, and 80 percent reported that they planned to eat more year round. Surveys for each of the previous 4 years had similar results. Over 2 million WIC participants receive farmers’ market coupons; only about 60 percent redeem them annually. According to FNS, eligible individuals may not participate because, among other things, farmers’ markets are often not located in or near low-income areas, and potential participants may not be familiar with farmers’ markets or with the program. The Seniors Farmers’ Market Nutrition Program. Established in 2001, the seniors program serves about 380,000 low-income elderly people. This small program is based on the WIC farmers’ market program, and the two farmers’ market programs are often run by the same state and local agencies.
Although national data are not available, one county in Washington State surveyed 87 homebound participants, who reported increasing their consumption of fruits and vegetables by one serving daily. Under the school meal programs, USDA has increased its spending for canned, frozen, and fresh fruits and vegetables from $140 million in 1996 to $243 million in 2001. In addition, under an agreement between USDA and DOD, DOD Fresh provides about $31 million in fresh produce to schools in 39 states, Puerto Rico, and Guam. Participating schools benefit from DOD’s purchasing power and distribution network. According to the American School Food Service Association, DOD Fresh generally provides lower-priced, more-varied, and better-quality fruits and vegetables than are otherwise available to schools and may result in increased consumption. Dark green or orange vegetables are among the 10 most popular items ordered by schools through DOD Fresh. In 1997 we identified states’ use of DOD Fresh as a best practice for improving the nutritional content of school meals. Because DOD uses a decentralized system to purchase produce from local vendors, we reported that it was able to provide high-quality fresh produce. According to USDA, many school food authorities can purchase fresh produce of similarly high quality from distributors and, seasonally, from local sources. Nonetheless, fruits and vegetables are the most frequently wasted (e.g., thrown away) food items in the School Lunch Program, with waste ranging from 21 percent to 42 percent.
In 2002 USDA reported that wasted food—including fruits and vegetables—cost USDA $600 million annually. Studies have shown that this waste can be decreased by such factors as (1) scheduling recess before lunch, (2) increasing the use of fresh produce and local foods, (3) involving students in meal planning, (4) introducing new fruits and vegetables to students before they appear in school meals, and (5) allowing students to serve themselves—for example, using self-service salad/meal bars. Over the past few years, FNS has helped fund some innovative programs implemented by local and state agencies that have been found to promote fruit and vegetable consumption. One such program—Food Sense—is under way in two counties in Washington State. Food Sense provides 1,300 low-income adults with handouts, recipes, and information about the Food Guide Pyramid and the Dietary Guidelines for Americans, and how to select and prepare fresh produce. The program also serves 2,000 children from low-income households and uses storytelling, games, and healthy snack tasting to reinforce healthy food choices. State data on the program show that 75 percent of the adults and 31 percent of the children participating in Food Sense ate more fruits and vegetables daily as a result of the program. Another initiative, funded jointly by USDA and three California school districts, also appears to have increased the consumption of fruits and vegetables in the School Lunch and School Breakfast Programs. That effort provided salad bars as an alternative to the standard hot meal, along with hands-on nutrition education in the classroom, contests, and tours of local farms and farmers’ markets. Students who used the salad bar and were surveyed reported increasing their daily consumption of fresh fruits and vegetables by an average of 1 to 2 servings. 
To meet USDA requirements for meal reimbursement, the schools’ salad bars offered items from each of the five food groups (grain, milk, protein, fruit, and vegetables), and students were required to take a minimum of one serving from three groups. In addition to the FNS efforts, USDA’s Cooperative State Research, Education and Extension Service administers a nutrition education effort—the Expanded Food and Nutrition Education Program—to improve the dietary habits of low-income children and families with children. In 2001 about 447,000 children and 164,000 adults were in the program, which was funded at $58.6 million. At a cost of about $96 per participant, the program includes a series of lessons taught by peer instructors over several months and uses hands-on approaches, such as cooking new recipes, that enable participants to gain the practical skills necessary to make positive behavioral change. Nearly three-quarters of the participants entering the program in 2001 were receiving federal food assistance; an additional 8 percent were on food assistance by the time they left the program. For about a decade, the program has tracked participants’ consumption levels for each food group when participants enter and exit the program. The 106,000 adult participants who completed the program in fiscal year 2001 increased their fruit and vegetable consumption by more than 62 percent (from 2.9 to 4.7 servings), according to program evaluation data. According to USDA, evaluation data from participants have led to improvements in curriculum, staff development and training, new community partnerships, and increased cooperation with researchers who are studying diet-related behavior. In commenting on a draft of this report, USDA pointed out that its efforts to promote fruits and vegetables extend beyond food assistance and nutrition education to include agricultural, economic, and behavioral research; agricultural extension; and market development and support. 
For example, USDA sponsors research investigating the health-promoting properties of fruits and vegetables, as well as the motivations and barriers to their consumption. USDA’s strategic and performance plans do not specifically address the Healthy People 2010 objectives for increasing the proportion of Americans in food assistance programs who consume at least 2 servings of fruit and 3 of vegetables daily. USDA’s strategic plan for 2000 to 2005 has a goal and related targets for improving the overall diet of food assistance program participants through nutrition education and education-related research. To help achieve that goal, USDA recently launched the national EAT SMART. PLAY HARD.™ nutrition education and promotion campaign to convey motivational messages regarding nutrition and physical activity aimed at changing dietary behavior, among other things. In addition, USDA’s 2003 revision to its 2002 performance plan includes a goal of improving access to fruits and vegetables, which it will measure by the funding provided to purchase fruits and vegetables for schools, and by the number of sites on Indian reservations receiving fresh fruits and vegetables. According to an FNS planning official, the strategic and performance plans include a focus on overall dietary quality. Although these plans do not specifically address the Healthy People 2010 objectives related to fruits and vegetables, officials told us that USDA is working to increase fruit and vegetable consumption through improvements in nutrition education efforts, such as the EAT SMART. PLAY HARD.™ campaign. However, USDA has reported for several years that limited funding hinders nutrition education efforts and evaluation of those efforts. The current strategic plan also acknowledges that inadequate funding for nutrition education is the key factor that could hinder plans to improve Americans’ diet, including the diet of food assistance program participants. 
With regard to evaluating these efforts, a 1996 report by USDA’s Center for Nutrition Policy and Promotion cited inadequate funding as a major factor limiting the evaluation of USDA’s nutrition education efforts. In 1999 FNS reported that the evaluation system was fragmented and minimal, and lacked outcome measures. Agriculture appropriations acts for the past several years have generally prohibited the use of food stamp, WIC, and school meal program funds for such evaluations. Those funding constraints notwithstanding, in the April 2002 memorandum of understanding between USDA and HHS regarding 5 A Day, USDA pledged to plan and support the delivery of the 5 A Day message to food assistance participants. USDA has not yet identified what additional resources, if any, will be needed to accomplish this effort; however, the Secretary of Agriculture pledged to commit the necessary resources to meet the 5 A Day goals. As noted earlier, the strategic planning process is designed to address crosscutting, multiagency/government issues such as the expanded 5 A Day initiatives in the memorandum of understanding. In fact, the 5 A Day commitments could provide a framework for USDA to incorporate the 2010 objectives in strategic and performance plans. Identifying strategies and targets could help USDA implement its 5 A Day commitments, which in turn could help food assistance participants achieve national Healthy People 2010 objectives for fruits and vegetables. Federal officials, academic nutrition experts, and representatives from industry, food advocacy, and consumer groups have identified a number of actions they believe the federal government could take to help people make better dietary choices and increase their consumption of fruits and vegetables. 
The actions most frequently identified include (1) expanding and improving federal nutrition education programs, (2) emphasizing the importance of program evaluation and related behavioral research efforts to maximize the impact of nutrition education, (3) providing incentives to encourage food stamp recipients to purchase fruits and vegetables, (4) including more fruits and vegetables in WIC packages, (5) expanding the availability of salad bars and access to DOD Fresh in schools, and (6) expanding farmers’ market programs for WIC participants and low-income seniors. Expand and improve federal nutrition education programs. To promote better nutrition, particularly fruit and vegetable consumption, USDA and HHS officials, university professors, and the Center for Science in the Public Interest emphasized to us the value of nutrition education programs for the general public and the need to expand federal investment in programs shown to be effective at changing dietary behavior. Several contrasted the food industry’s advertising expenditures (about $11 billion annually) with the federal government’s nutrition education expenditures (under $500 million annually). According to FNS, it is unrealistic to expect consumer behavior to be consistent with national nutrition goals in an environment that provides a barrage of messages that encourage poor nutrition. The 1999 FNS report to Congress stated that an ongoing investment reinforcing the importance of nutrition education is essential to counter the environment that has been moving the general population toward poorer nutrition. The report further stated that adequate nutrition education for the general population—not just FNS program participants—is an essential component of an overall strategy because FNS program participants are influenced by trends that affect the population as a whole. 
In addition, the Surgeon General recommended a national campaign to foster public awareness of the benefits of healthful dietary choices and physical activity. With regard to food assistance programs, the Surgeon General recommended expanding these nutrition education efforts as well. Likewise, the 1999 FNS report to Congress noted that nutrition education should be an integral benefit of all FNS programs. According to FNS, dependable funding for nutrition education is needed to support planning, program delivery, and the integration of services, and for the federal government to provide the necessary leadership and support. FNS also reported that the child nutrition programs—the largest of which are the School Lunch and Breakfast Programs—serve more participants, and yet have less funding per participant for nutrition education, than any other FNS program. The report noted that uneven nutrition education funding and reductions in program funding have decreased the capacity of state and local agencies to effectively deliver nutrition education to children. Similarly, CDC noted the need to improve the capacity of state programs that promote healthy behaviors to reduce the risk of chronic disease. CDC’s long-term goal is to establish a nationwide network of state-based comprehensive nutrition and physical activity programs for the prevention and control of obesity and related chronic diseases, and to include 5 A Day activities as an essential component of every state’s nutrition education effort. Emphasize the importance of evaluating programs and conducting behavioral research to maximize the impact of nutrition education. In the 1999 report to Congress, FNS pointed out the need to invest in improved nutrition education evaluation. 
FNS officials and others agreed that adequate and reliable evaluations are needed to determine which program components improve diet so that FNS and state and local agencies administering the programs can plan effective nutrition education strategies. In a 2001 report on WIC, we recommended that FNS work with stakeholders to develop a strategic plan to evaluate the impacts of specific WIC nutrition services and include in the plan information on the types of research that could be done, as well as the data and the financial resources that would be needed to conduct such research. In addition to program evaluation, a number of experts, including the 2000 Dietary Guidelines Advisory Committee and the Surgeon General, recommended increasing behavioral research efforts to gain a greater understanding of what motivates people to make healthy food choices. Research findings could be used to improve the design of new nutrition education efforts or strengthen existing ones. Officials from CDC also noted the need for more fruit- and vegetable-related economic, marketing, and consumer research, as well as research to clarify the roles of nutrients and other compounds found in fruits and vegetables that may be beneficial to health. Provide incentives to encourage food stamp recipients to purchase more fruits and vegetables. Academic nutrition experts and officials from the American Public Health Association, the state of California, food advocacy groups, the Center for Science in the Public Interest, and the Produce for Better Health Foundation suggested the use of incentives, such as double coupons or discounts, to encourage food stamp recipients to purchase fruits and vegetables. 
California’s Department of Health Services, in conjunction with three grocery chains that are 5 A Day partners, has proposed a pilot project to FNS to see how food stamp recipients respond to three different approaches for providing incentives to buy fruits and vegetables. These include coupons at checkout for free or discounted items, “buy one, get one free” promotions, and store discount cards that can be used with electronic benefit transfer cards. For example, stores have agreed to allow recipients to “buy one, get one free” for certain fruits and vegetables. The stores will feature different fruits and vegetables for which local food stamp participants have expressed preferences, and community partners will promote the program as part of a nutrition message. The grocery receipt would show the value of the free item as an additional incentive. The stores and the other industry partners in the pilot have agreed to pay for the free fruits and vegetables. Include more fruits and vegetables in WIC packages. The National WIC Association (formerly the National Association of WIC Directors), several health associations, and others have suggested that USDA broaden the WIC food package to include fruits and vegetables, because the current packages are inconsistent with the Dietary Guidelines for Americans. In a November 2000 letter to the Secretary of Agriculture, the American Cancer Society, American College of Preventive Medicine, other health associations, and industry groups urged USDA to broaden the WIC food package to include a variety of fresh produce. In addition, the National WIC Association, in a 2000 position paper, recommended that WIC packages offer a proportional balance of foods from each group in the Food Guide Pyramid. The association further stated that WIC should offer fresh, frozen, or canned fruits and vegetables, such as citrus fruits, tomatoes, sweet potatoes, greens, and broccoli. 
USDA recognizes that revisions to the food packages are vital to bringing the WIC packages in line with current scientific dietary recommendations—the Dietary Guidelines for Americans. FNS had planned to publish a proposed rule in the Federal Register in September 2000 that would add nutrient-dense leafy and other dark green or orange vegetables to five food packages, allow the substitution of canned legumes for dry legumes in four food packages, and reduce the servings of juice for three food packages. The final rule was to be effective September 2002. However, USDA decided to delay proposing the rule until after the 2000 election, because of concerns over potential opposition by industries that may lose revenue as a result of changes in the packages. The rule has yet to be published. Expand the availability of salad bars and DOD Fresh in schools. Federal and association officials in the 5 A Day partnership have called for the introduction of salad bars as part of all school meal programs. USDA has funded DOD Fresh at about $31 million per year for schools. Schools and state agencies have asked USDA to increase funding to make DOD Fresh available to more schools. Increased funding for DOD Fresh could also help provide produce for school salad bars. In addition, USDA, the American Academy of Family Physicians, American Academy of Pediatrics, American Dietetic Association, National Hispanic Medical Association, and the National Medical Association have suggested that (1) foods that compete with school lunches be required to follow the same nutritional regulations as the school meals; (2) all students have designated lunch periods as near the middle of the school day as possible and of sufficient length to enjoy eating healthy foods with friends; and (3) schools provide enough serving areas to ensure student access to school meals with a minimum of wait time, among other things. 
Also, the Farm Security and Rural Investment Act of 2002 includes a provision to pilot a program to offer free fruits and vegetables to school children in about 100 schools—25 schools in each of four states—and on an Indian reservation. This same act also includes a provision for mandatory spending on fruits and vegetables. The provision directs USDA to spend not less than $50 million for fresh fruits and vegetables for schools through DOD Fresh. Expand farmers’ market programs for WIC participants and the elderly. The American Public Health Association, the National WIC Association, the Center for Science in the Public Interest, and others support increasing farmers’ market programs for WIC and elderly participants. In addition, a few representatives noted that USDA agencies should collaborate to increase the number of markets in areas where WIC participants and other low-income people live. Such collaboration is needed because initiating farmers’ markets in low-income areas is hindered by several factors, including difficulties in finding space in a large city and farmers’ unfamiliarity with redeeming WIC coupons, according to one university researcher. He further stated that USDA’s Agricultural Marketing Service, which is responsible for promoting farmers’ markets, could work with extension agents as well as public service agencies to find suitable locations for these markets. In addition, some experts noted that most farmers’ markets do not have the technology, electricity, and telephone equipment needed to process the magnetic electronic benefit cards provided to food stamp recipients. Some of these efforts—such as expanding nutrition education—would require additional federal resources, while others—such as emphasizing an evaluation of nutrition education efforts, expanding DOD Fresh, promoting salad bars in schools, and expanding farmers’ markets for WIC and the elderly—may only require redirecting existing federal resources. 
Adding more choices of fruits and/or vegetables to the WIC package without reducing other food items would require additional funding. However, if USDA reduces the servings of other foods, then additional resources may not be required, but industries that stand to lose revenue are likely to oppose the proposal. Although federal nutrition policy and guidance—the Dietary Guidelines for Americans and the Healthy People 2010 nutrition objectives—recognize the importance of consuming a variety of fruits and vegetables as part of a healthy diet, the Food Guide Pyramid graphic does not convey this important guidance. The Pyramid graphic—the most widely recognized and disseminated nutrition guide—does not direct Americans to the best dietary choices, particularly to choosing a variety of fruits and vegetables high in nutrients that science has linked to promoting health and reducing the risk for chronic diseases. While we recognize that USDA intended the Pyramid graphic to be used in the context of the information in the Pyramid brochure, in reality, the Pyramid graphic typically stands alone. Moreover, when HHS issued Healthy People 2010 with specific national objectives for fruit and vegetable consumption, it did so with the expectation that those objectives would be reflected in federal programs. However, USDA’s strategic plan does not specifically address how it will help food assistance program participants achieve the objectives—despite the fact that one in six Americans is on food assistance and half of American babies are on WIC. The memorandum of understanding between HHS and USDA provides a framework for expanding 5 A Day across all food assistance programs and to all Americans. Incorporating the new 5 A Day commitments into the agencies’ performance plans and establishing performance measures is a logical next step. We recognize the difficulty involved in changing Americans’ dietary habits. 
However, if USDA and HHS develop strategies with specific targets and milestones emphasizing the importance of fruits and vegetables—particularly deeply colored fruits and vegetables—in all their programs, they are more likely to be successful in influencing Americans to follow healthier diets. Finally, nutrition experts and others have identified a number of actions— such as increasing salad bars in schools and expanding farmers’ market programs—that hold promise for improving fruit and vegetable consumption for Americans and may warrant further evaluation. To give Americans the most current, science-based guidance for making dietary choices, we recommend that, as USDA considers revisions to the Food Guide Pyramid, the Secretary of Agriculture, in consultation with the Secretary of Health and Human Services, ensure that the Pyramid graphic communicates information on the need for a variety of fruits and vegetables, especially deeply colored fruits and vegetables, in accordance with the Dietary Guidelines for Americans and in support of the Healthy People 2010 objective for vegetables. 
To ensure that federal nutrition education/intervention and food assistance programs promote federal goals and guidelines on the consumption of fruits and vegetables, we further recommend that (1) the Secretary of Agriculture include in the department’s strategic and performance plans strategies and targets supporting the Healthy People 2010 objectives for fruit and vegetable consumption; (2) the Secretary of Health and Human Services direct the National Institutes of Health, CDC, and other relevant agencies to include in their performance plans strategies and targets for supporting the Healthy People 2010 objectives for fruit and vegetable consumption; and (3) the Secretaries of Agriculture and Health and Human Services consider the actions that experts and others have identified to increase the consumption of fruits and vegetables and, for those deemed most promising, assess the merits, feasibility, and costs to determine whether the actions should be implemented. Lastly, to provide accountability for implementing the commitments in the April 2002 memorandum of understanding for 5 A Day, we recommend that the Secretaries of Agriculture and Health and Human Services, in their strategic and performance plans, develop specific strategies and targets for implementing the 5 A Day commitments made in the April 2002 memorandum of understanding. We provided USDA and HHS with a draft of this report for their review and comment. We obtained USDA’s comments in a meeting with department officials; HHS provided written comments. (See app. VII.) USDA said it understood the basis for our recommendations and generally concurred with our treatment of the issues relating to its programs. USDA said that the science base for its nutrition efforts is the Dietary Guidelines for Americans, which includes the Food Guide Pyramid. 
USDA also said that its approach is to address total diet by promoting overall balanced nutrition and the principles of the Dietary Guidelines for Americans, which includes, but is not limited to, increasing fruit and vegetable consumption and other dietary improvements that support all of the Healthy People 2010 nutrition objectives and the 5 A Day targets. USDA said it has not considered the Healthy People 2010 nutrition objectives as goals that must be directly incorporated into its strategic plan, but that it could consider the merits of including components directly supporting the Healthy People 2010 objectives and 5 A Day in its strategic and performance plans. We believe the Healthy People 2010 objectives—which USDA was a major participant in developing—present a national agenda for improving the health of Americans and that USDA should incorporate these measurable targets and time frames into the food assistance programs. We further believe that the 5 A Day partnership presents a reasonable framework for addressing the objectives related to increasing fruit and vegetable consumption. With regard to the Food Guide Pyramid graphic, USDA noted that consumer research showed that the graphic communicates the messages of proportionality, moderation, and variety among the food groups. USDA stated that the Pyramid graphic does not convey all nutrition messages and that the Pyramid booklet provides more detailed information on how to make appropriate fruit and vegetable selections. Nonetheless, USDA agreed with the merits of our recommendation that, as it considers revising the Pyramid graphic and related materials, it will explore appropriate ways to effectively communicate information on the need for a variety of fruits and vegetables, especially deeply colored ones. 
USDA also pointed out that its efforts to promote fruits and vegetables extend beyond food assistance and nutrition education to include agricultural, economic, and behavioral research; agricultural extension; and market development and support. We added this information to the report. HHS stated that our draft report was too definitive in its statements about the relationship between fruit and vegetable consumption and the reduction of diseases, and that the scientific evidence does not support the statements we made concerning the reduction of disease rates and the dollar savings that would result from increased intakes of fruits and vegetables alone. HHS was particularly concerned about the information we present in appendix IV—which describes some of the evidence that links fruit and vegetable consumption to reducing the risk for certain diseases. We qualified the language in appendix IV to clarify the strength of the linkages between consuming fruits and vegetables and specific diseases and added citations for our sources throughout the report. With regard to the cost data, we deleted the reference to USDA’s estimate of total diet-related costs for heart disease, cancer, stroke, and diabetes because it was not limited to costs solely for low fruit and vegetable consumption. 
HHS further stated, “There is no comprehensive recent detailed review by a recognized authoritative body from which such a summary of the evidence could be based that would reflect the totality of recent evidence and that has undergone appropriate clearance.” Our sources for information included September 2001 and November 2000 National Institutes of Health reports summarizing the evidence on the relationship between fruits and vegetables and disease prevention; The Surgeon General’s Call to Action to Prevent and Decrease Overweight and Obesity: 2001; articles published in the Journal of the American Medical Association, the New England Journal of Medicine, and the Annals of Internal Medicine; and documents from CDC; the National Institutes of Health; the National Cancer Institute; and other HHS offices. The reports, documents, and information from HHS agencies were represented to us as reflecting current research; HHS did not identify evidence that we should cite that might be contradictory to any links between fruit and vegetable consumption and prevention of diseases cited in the report. HHS further stated that dietary messages for the consumer can be confusing, particularly when the public receives conflicting reports or isolated parts of the total diet message. While our report focuses on fruits and vegetables, it does so in the context of a healthy diet. With regard to our recommendation—to include in agencies’ annual performance plans, strategies and targets for supporting the Healthy People 2010 objectives for fruits and vegetables—HHS noted that performance measures based on relevant Healthy People 2010 objectives are already included in the strategic plans of many HHS agencies. However, the annual performance plans of the National Institutes of Health and CDC do not address the objectives on fruits and vegetables. We recommend that they do so. USDA and HHS also provided technical comments that we incorporated as appropriate. 
As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this report. At that time, we will send copies to the Secretaries of Agriculture, Defense, and Health and Human Services; the Director, Office of Management and Budget; and other interested parties. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at the following address: http://www.gao.gov. If you have any questions about this report, please contact me or Erin Lansburgh at (202) 512-3841. Key contributors to this report are listed in appendix VIII. The largest federal activities related to fruit and vegetable consumption are the purchase of fruits and vegetables and nutrition education. The U.S. Department of Agriculture (USDA) estimated that it obligated $6.7 billion for these activities in fiscal year 2001. In arriving at this figure, USDA estimated that 20 percent of the total food stamp and school meal expenditures were for the purchase of fruits and vegetables. Funding information for the Department of Health and Human Services (HHS) includes funding for the Centers for Disease Control and Prevention (CDC) and the National Institutes of Health (NIH) only, and reflects 5 A Day activities carried out by the agencies, as well as CDC grants to states for 5 A Day efforts and other diet-related efforts. Department of Defense (DOD) funding goes primarily to purchase fruits and vegetables for military personnel and to support 5 A Day initiatives in each military service branch. 
USDA, HHS, and DOD nutrition education and/or intervention, human nutrition research, and “other” activities include activities related to fruits and vegetables, as well as those focused on general nutrition or other diet-related issues, because agencies generally do not track nutrition education/intervention funding related to fruits and vegetables separately from funding related to other foods. Funding information provided by USDA, HHS, and DOD is presented below in 2001 dollars in tables 4, 5, and 6, respectively. To identify the health-related benefits associated with the consumption of the recommended servings of fruits and vegetables, we reviewed research studies and/or obtained expert views from medical and nutritional scientists at USDA’s Center for Nutrition Policy and Promotion; HHS’s National Institutes of Health, including the National Cancer Institute and the National Heart, Lung, and Blood Institute, and CDC; the National Academy of Sciences; and academic research institutions, including Harvard University, the Mayo Clinic, Cornell University, New York University, and the University of California at Davis. We also considered federal dietary guidance and the two health claims for fruits and vegetables authorized by the Food and Drug Administration. We also analyzed published data on the costs to the nation, including deaths, associated with poor diets, including diets low in fruits and vegetables, from these agencies and experts. 
To assess whether the general public has improved its consumption of fruits and vegetables under key federal nutrition policy, guidance, and programs, we interviewed officials with and analyzed documents from (1) HHS’s Office of the Secretary, including the Surgeon General, the Food and Drug Administration and its Center for Food Safety and Nutrition; (2) USDA’s Agricultural Marketing Service; Center for Nutrition Policy and Promotion; Economic Research Service; and Cooperative State Research, Education and Extension Service; (3) DOD’s Office of the Secretary and representatives with nutrition responsibilities from each of the armed services; (4) industry groups, including United Fresh Fruit and Vegetable Association; and (5) consumer and health associations, including the Center for Science in the Public Interest; the American Public Health Association; the American Cancer Society; and the American Heart Association. We also interviewed former USDA and HHS officials with responsibilities for nutrition policies and programs to obtain historical information on certain programs. To assess whether key federal food assistance programs have achieved improvements in the fruit and vegetable consumption of program participants, we analyzed data on consumption for program participants and similar nonparticipants; documents related to the requirements for providing fruits and vegetables in food assistance programs; and internal and external program evaluations of food assistance and nutrition programs. We discussed this information with the officials previously identified, California’s Department of Health Services, the Food Research and Action Center, and the National WIC Association. 
To identify federal actions that experts recommend for increasing the consumption of fruits and vegetables, as well as the implications of those actions, we analyzed documents and interviewed officials from the aforementioned federal and state agencies, universities, and consumer and health associations. Documents included reports to Congress and the Secretaries of Agriculture and Health and Human Services, published articles, internal and external program evaluations, and position papers. To determine funding information for federal programs that may promote fruit and vegetable consumption, we developed a data collection instrument to identify federal obligations for efforts to promote fruit and vegetable consumption for a 5-year period. We requested from USDA, HHS, and DOD information on purchases of fruits and vegetables, human nutrition research, nutrition education, and other activities. In some instances, agency officials estimated funding information when precise information was not available. With regard to information on the numbers of servings of fruits and vegetables, we used consumption data that USDA calculates by surveying a sample of Americans in its “Continuing Survey of Food Intakes by Individuals” (CSFII). The most recent CSFII data are based on 1994-96 surveys. USDA also estimates food available for consumption by adjusting annual food supply data for spoilage, waste, and other losses accumulated throughout the marketing system. USDA’s Economic Research Service reports these data in terms of Food Guide Pyramid servings. While these data are more current, food supply data may overestimate actual consumption, according to USDA. Therefore, unless otherwise stated, we used CSFII data throughout this report. Finally, because we are not a scientific body, we did not conduct an independent study of the health benefits of various foods; rather, we reviewed existing literature and are reporting information contained in that literature. 
Therefore, nothing in this report would constitute an authoritative statement that could be used, under section 403(r)(2) of the Federal Food, Drug and Cosmetic Act, to support a claim of a health benefit of any food; nor would anything in this report constitute valid support for a petition under section 403(r)(4) of the act to allow such a claim to be made. Our work was conducted in accordance with generally accepted government auditing standards from August 2001 through June 2002. The federal government influences the consumption of fruits and vegetables in many ways besides through food assistance and nutrition programs. Any government program or regulation that affects either consumers’ preferences for consuming fruits and vegetables or producers’ ability or willingness to supply fruits and vegetables to the market can influence U.S. consumption, although these effects may be small. Such programs and regulations include trade-restriction and export-promotion programs, environmental regulations, and agricultural programs. However, the effects on farm-level prices of such programs or regulations would have to be fairly substantial to have a large impact on consumption, because farm-level prices generally account for about one-third of the retail prices for fruits and vegetables. Trade restrictions, in the form of tariffs, on some fruits and vegetables result in higher prices that could reduce U.S. consumption of those fruits and vegetables. Although tariffs on most fruits and vegetables are low— less than 10 percent of the price—tariffs of 20 percent or more are sometimes applied to some imported fruits and vegetables, such as certain types of melons, asparagus, and broccoli. In addition, when domestic marketing orders are in place, some imports, including tomatoes, potatoes, and grapes, are subject to minimum quality requirements. 
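The pass-through point above, that farm-level price changes translate only partially into retail price changes because the farm share is about one-third of the retail price, can be illustrated with a small calculation. The one-third farm share comes from the report; the 15-percent farm-price change below is an invented figure used purely for illustration.

```python
# Hypothetical illustration of farm-to-retail price pass-through for
# fruits and vegetables. The ~one-third farm share is from the report;
# the farm-price changes are invented example inputs.

FARM_SHARE = 1 / 3  # farm-level price as a share of the retail price

def retail_price_change(farm_price_change: float) -> float:
    """Approximate proportional retail price change when only the
    farm-level component of the retail price changes."""
    return farm_price_change * FARM_SHARE

# A 15% rise in the farm-level price raises the retail price by only
# about 5%, all else equal, which is why farm-level effects must be
# fairly substantial to move consumption noticeably.
print(round(retail_price_change(0.15), 3))  # 0.05
```

This is only the arithmetic the report implies, not a demand model; actual consumption effects would also depend on how consumers respond to retail price changes.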
Some foreign suppliers and others claim that these requirements keep out lower-priced imports to maintain higher prices for domestic producers, which can reduce consumption. Proponents of the standards claim that the standards ensure high quality, which encourages consumption. The U.S. government also promotes exports of U.S.-grown fruits and vegetables to increase the demand for these products in other countries. Increasing exports through such promotions may divert fruits and vegetables from domestic markets, raising their price and lowering fruit and vegetable consumption by U.S. consumers. Environmental regulations regarding the use of pesticides, as well as the protection of water and air quality, can affect fruit and vegetable consumption in a variety of ways. For example, complying with regulations on pesticides may increase farm costs, which can reduce the quantities of fruits and vegetables supplied to the market, thereby increasing prices and lowering consumption. On the other hand, pesticide regulation may reduce consumers’ concerns about the safety of pesticides and other chemicals used on fruits and vegetables. To the extent that these concerns decrease, consumption may increase, particularly the consumption of those fruits and vegetables often eaten fresh, such as apples and broccoli. Similarly, consumers’ perceptions that lax environmental controls in other countries make imported produce less safe may affect the consumption of fruits and vegetables. Several agricultural programs administered by USDA can affect U.S. consumption of fruits and vegetables. These programs include (1) marketing orders, which advertise and promote certain crops and, in some instances, limit the quantity or specify the quality that can be marketed; (2) commodity programs, in which USDA provides farmers with price and income supports; and (3) crop insurance, through which USDA indemnifies farmers, in part or in whole, against the loss of certain crops. Marketing Orders. 
Marketing orders are agreements among producers of a particular commodity on actions designed to provide an “orderly market” that would reduce fluctuations in farm and retail prices and assure consumers of a steady supply of quality products. In general, the promotional activities of marketing orders can increase consumer demand and, therefore, the consumption of some fruits and vegetables. However, a few marketing orders restrict the quantity of a particular fruit or vegetable that can be marketed, particularly during periods of oversupply. In general, these quantity restrictions could be expected to raise prices compared with free market levels and, thereby, reduce consumption. Also, some marketing orders impose quality restrictions. While some economists have suggested that these orders might also be used to restrict quantity—by increasing the quality requirements during periods of strong supply—other economists have noted that such action might not affect supply because major purchasers (wholesalers and retailers) set quality requirements higher than those imposed by federal marketing orders. Commodity Programs. The provisions of commodity programs that provide producers of other commodities, such as grains and cotton, with income and price supports can affect fruit and vegetable consumption. For example, during the past several years, grain and cotton farmers who received direct payments from USDA under production flexibility contracts were restricted from increasing their acreage devoted to fruits and vegetables if they wanted to remain eligible for these payments. This restriction may have reduced the supply of certain fruits and vegetables, thereby raising prices and lowering the consumption of those fruits and vegetables. 
In addition, because consumers have to choose how to spend their food dollars— on fruits and vegetables or on other foods—anything that influences the consumption of other foods by influencing their prices, including government income and price support programs, can also affect fruit and vegetable consumption. Crop Insurance. By providing subsidized crop insurance for certain crops, USDA reduces the risk to farmers of growing those crops— which would generally lead to greater supplies and lower prices for those crops. Not all fruits and vegetables are covered under federal crop insurance. To the extent that the availability of crop insurance affects the supply of certain fruits and vegetables, it may also affect their prices and, hence, their consumption. Other federal activities may also affect the prices of fruits and vegetables and their consumption. Restrictions on legally importing seasonal workers when domestic workers are not available can reduce the amount of fruits and vegetables that farmers can harvest, which can result in higher prices, which, in turn, can reduce consumption. Laws that subsidize the cost of the water that some farmers use for irrigation can lower those farmers’ costs of growing fruits and vegetables. To the extent that the lower cost results in lower prices to consumers, consumption may increase. Federal efforts to ensure food safety can increase consumer confidence in the safety of fruits and vegetables and, perhaps, increase the quantities consumed. The following describes some of the evidence that links fruit and vegetable consumption to reducing the risk for heart disease and cancer. It also discusses the evidence suggesting links to reducing the risk for stroke, diabetes, obesity, and diverticulosis. 
Studying the relationship between diet and chronic diseases is challenging for many reasons, including the difficulty in accounting for all potential risk factors and the fact that chronic diseases may develop over a long period of time. Heart Disease. Heart disease is the leading cause of death in the United States, killing about 725,000 people each year, according to CDC. A healthy lifestyle, including a healthy diet, has great potential to reduce disease and death associated with coronary heart disease—the manifestation of heart disease that afflicts the heart’s blood vessels. A diet low in saturated fat and cholesterol and rich in fruits, vegetables, and grains has been found to be associated with lower rates of coronary heart disease. According to an NIH report, diets high in fruits and vegetables are associated with a 20 to 40 percent reduction in the occurrence of coronary heart disease. An array of substances in fruits and vegetables, including antioxidants, folate, fiber, potassium, flavonoids, and other phytochemicals, may be responsible for the decreased risk. Recent studies have added to the growing evidence that diets high in fruits and vegetables reduce important risk factors associated with coronary heart disease, in particular hypertension and high plasma lipid levels. For example, a recent report combining data from women in the Nurses’ Health Study with men in the Health Professionals’ Follow-Up Study showed that men who ate an average of 10 servings and women who ate an average of 9 servings per day of fruits and vegetables had a 20-percent lower risk of coronary heart disease than men and women who ate an average of 2.5 to 3 servings a day. The lowest risks were observed for the men and women with the highest consumption of green leafy vegetables and vitamin-C-rich fruits and vegetables, such as strawberries, oranges and orange juice, Brussels sprouts, and red cabbage. 
That study found a 4 percent lower risk of coronary heart disease for each serving-per-day increase in fruits and vegetables. Another study looked at the effect of a diet high in fruits and vegetables and low in fat—the Dietary Approaches to Stop Hypertension (DASH) diet—on plasma lipid levels and found reductions in plasma levels of total cholesterol, LDL and HDL, in all races and both sexes compared with the control diet. Furthermore, the DASH diet is also effective in lowering blood pressure. The beneficial effect of fruits and vegetables on coronary heart disease risk is also likely due, in part, to their high fiber and antioxidant activity. Cancer. Cancer is the second leading cause of death in the United States, according to CDC. Over 550,000 cancer deaths are expected in 2002, and estimates are that about one-third of those deaths will be related to poor nutrition, a preventable cause of death. Indeed, reviews of more than 200 studies by the American Institute for Cancer Research and others indicate that the link between the consumption of fruits and vegetables and some cancers is consistent and strong. People who consume 5 or more servings daily have about one-half the cancer risk of those who consume 2 or fewer servings, according to an NIH report. Although there are still many unresolved questions regarding the association between cancer risk and the consumption of fruits and vegetables, ample scientific evidence indicates that the frequent consumption of a variety of fruits and vegetables protects against some cancers, particularly cancers of the mouth, pharynx, esophagus, stomach, colon, and rectum. The evidence also suggests reductions in the risk for cancers of the breast, pancreas, larynx, and bladder. Stroke. Stroke is the third leading cause of death in the United States. CDC reports that about 600,000 Americans have a stroke each year, of whom about 160,000 die. 
Studies have shown that the consumption of fruits and vegetables may decrease the risk of stroke through their effect on reducing hypertension (high blood pressure), an important risk factor for stroke. These studies show that the lowest risks for stroke are associated with high consumption of cruciferous vegetables (e.g., broccoli and cabbage), green leafy vegetables, citrus fruits, and vitamin-C-rich fruits and vegetables. For example, a recent analysis of 14 years of data from the Nurses’ Health Study and 8 years of data from the Health Professionals’ Follow-Up Study disclosed that each additional daily serving of fruits or vegetables was associated with a 4 to 7 percent reduction in the risk of stroke. A 2001 study confirmed the blood-pressure-reducing effect of the DASH diet (high in fruits, vegetables, and low-fat dairy products and low in saturated fat and cholesterol) and found further decreases in blood pressure when the DASH diet was combined with low sodium intake. In addition, a 2002 study of a representative sample of the U.S. population added support for the association between fruit and vegetable consumption and lower risk of stroke incidence and mortality. Diabetes. Diabetes is the sixth leading cause of death in Americans and is associated with a range of other serious, chronic ailments, including coronary heart disease, stroke, hypertension, blindness, kidney disease, and amputation. CDC reported that, during the last 10 years, diabetes increased 49 percent among adults, and that over 800,000 new cases are diagnosed and over 200,000 deaths result from diabetes-related complications each year. An analysis of 20-year follow-up data from nearly 10,000 men and women who participated in a 1970s study showed that individuals who developed diabetes had a lower average consumption of fruits and vegetables. Specifically, the study found an association between consuming 5 or more servings of fruits and vegetables daily and a lower incidence of diabetes. 
Furthermore, women who consumed 5 or more servings of fruits and vegetables per day were 39 percent less likely to develop diabetes compared with women who consumed little or no fruits and vegetables. There are a number of possible mechanisms by which fruit and vegetable consumption could affect diabetes, and additional studies will be needed to conclusively determine the relationship between fruit and vegetable consumption and diabetes. For example, fiber and magnesium in fruits and vegetables have positive effects on the primary manifestations of diabetes—the control of glucose and peripheral insulin sensitivity. The potential benefits in preventing diabetes also may stem from antioxidant vitamins and phytochemicals found in high levels in fruits and vegetables. Obesity. The link between obesity and the consumption of fruits and vegetables is receiving considerable attention from scientists, especially as the prevalence of obesity has increased. Over 60 percent of American adults are overweight, and about 13 percent of children and adolescents are seriously overweight. Overweight and obesity are important risk factors for a number of diseases, including heart disease, cancer, stroke, and diabetes. The Surgeon General’s 2001 report on obesity estimated the total medical cost associated with overweight- and obesity-related diseases at $117 billion in 2000. Although the link is not direct, fruits and vegetables may affect obesity through their relatively low-calorie level, high water content, palatability, and fiber content. The inclusion of fruits and vegetables in the diet has the potential to affect each of those factors. A study looking at the short-term effects of diet on calorie intake found that adding fruits and vegetables to lunch or dinner meals lowered the calories in the meal but did not affect palatability or feelings of fullness and hunger. 
Importantly, consuming meals with the added fruits and vegetables resulted in a 30 percent reduction in total caloric intake for the day. That study, and others like it, suggests that consuming foods of low-energy density, such as vegetables and some fruits, may be a useful strategy for weight loss and control. Adding fruits and vegetables to the diet was also explored as a weight loss strategy in a recent study of obese parents, whose normal-weight children are at risk for becoming obese. In that study, some families were encouraged to increase fruit and vegetable consumption while others were encouraged to decrease high-fat/high-sugar foods. The families that increased fruit and vegetable consumption had greater weight reduction. These data support the positive benefits of including fruits and vegetables in weight loss diets and suggest that an effective approach to weight loss might focus on increasing the consumption of healthy foods rather than emphasizing dietary restriction. Diverticulosis. Diverticulosis occurs when small out-pouches called diverticula develop in the large intestine (colon), a condition that affects an estimated one-half of Americans age 60 to 80, and almost everyone over age 80, according to the National Institutes of Health. An estimated 10 to 25 percent of individuals with diverticulosis develop diverticulitis—an infection or inflammation of these out-pouches—that can result in tearing, blockages, or bleeding if left untreated. High-fiber diets—especially those high in insoluble cellulose fiber—have been found to reduce the risk of diverticulosis and diverticulitis. Because fruits and vegetables are excellent sources of cellulose fiber—accounting for over 30 percent of the insoluble fiber in fruits and 50 percent or more in vegetables—an increase in the consumption of fruits and vegetables may be particularly important in helping prevent diverticulosis and its complications. 
USDA classifies fruits in two groups: (1) citrus, melons, and berries and (2) other fruits. USDA classifies vegetables in three groups: (1) dark-green leafy and deep-yellow vegetables; (2) starchy vegetables and dry beans, peas, and lentils; and (3) other vegetables. All five packages for women and children in the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) provide foods for four of the food groups—grain, fruit, dairy, and protein; one package also provides a partial serving of the fifth food group—vegetable. The following are GAO’s comments on the Department of Health and Human Services’ letter dated June 28, 2002.

1. While performance measures based on relevant Healthy People 2010 objectives may be in the strategic plans of many HHS agencies, the objectives for fruit and vegetable consumption are not in the performance plans for the National Institutes of Health and the Centers for Disease Control and Prevention (CDC). Our recommendation addresses the need for strategies and targets specifically for the National Institutes of Health and CDC.

2. HHS agrees that continued emphasis on fruit and vegetable consumption is important to the total diet message but states that recommendations for increasing consumption should be made in the context of a healthy diet and physical activity. Our report acknowledges the importance of a healthy diet and physical activity; however, because we were asked to examine fruit and vegetable consumption, our study and recommendations focus on that aspect of a healthy diet.

3. HHS stated that the draft report was too definitive in its statements about the relationship between fruit and vegetable consumption and the reduction of disease, and that including references would substantially strengthen the presentation. 
The sources for our information include September 2001 and November 2000 NIH reports summarizing the evidence on the relationship between fruits and vegetables and disease prevention, The Surgeon General’s Call to Action to Prevent and Decrease Overweight and Obesity: 2001, articles published in the Journal of the American Medical Association and the Annals of Internal Medicine, and documents from CDC, NIH, National Cancer Institute, and other HHS offices. In many instances, our statements are more conservatively couched than in the source documents. We have also added—and repeated—references throughout the report to clarify the sources of our information. The reports, documents, and information we received from HHS agencies were represented as reflecting current research. HHS did not identify evidence that might be contradictory to any links between fruit and vegetable consumption and prevention of diseases cited in our report. With regard to the cost data, we deleted the reference to USDA’s estimate of total diet-related costs for heart disease, cancer, stroke, and diabetes because it was not limited to costs solely for low fruit and vegetable consumption.

4. While HHS states that appendix IV does not reflect the totality of evidence, HHS did not identify evidence that we should cite that might be contradictory to any links between fruit and vegetable consumption and the prevention of diseases cited in the appendix. HHS further states that there is no “comprehensive recent detailed review by a recognized authoritative body from which such a summary of the evidence could be based…” Our sources included, among others, the November 2000 NIH/NCI report—5 A Day for Better Health Program Evaluation Report (NIH Pub. No. 01-4904)—and the September 2001 NIH/NCI report—5 A Day for Better Health Program Monograph (NIH Pub. No. 01-5019)—both of which review the relevant research demonstrating the linkages between fruit and vegetable consumption and disease prevention. 
We modified the appendix to clarify the strength of the linkages and added specific citations for the sources of our information. We also made revisions to the appendix bullets on stroke and heart disease, based on technical comments from HHS.

5. HHS describes the role and responsibility of the Food and Drug Administration for approving health claims that industry can make on food labels. To address HHS’ concerns that we may be perceived as a scientific body making authoritative statements that can be used by industry to seek approval for health claims, we have added a statement in the front of the report and in our scope and methodology appendix. That statement clarifies that “nothing in this report would constitute an authoritative statement that could be used, under section 403(r)(2) of the Federal Food, Drug and Cosmetic Act, to support a claim of a health benefit of any food; nor would anything in this report constitute valid support for a petition under section 403(r)(4) of the act to allow such a claim to be made.”

6. Our report cites the agreements that CDC, NCI, and USDA agencies have initiated to expand 5 A Day and that the memorandum of understanding establishes a framework for cooperation among the agencies to promote their commitment to encourage all Americans to eat 5 to 9 servings of fruits and vegetables daily. We believe that, if carried out, these commitments could provide a framework for helping Americans achieve the Healthy People 2010 nutrition objectives for fruits and vegetables. In addition, based on technical comments from HHS, we added information to the report regarding CDC’s surveillance of fruit and vegetable consumption and its concern for the need to improve the capacity of state programs that promote healthy behaviors to reduce the risk of chronic disease.

In addition to those named above, Beverly A. Peterson, Terrance N. Horner, Jr., Nancy Bowser, Jay Cherlow, and Cynthia Norris made key contributions to this report. 
Fruits and vegetables are a critical source of nutrients and other substances that help protect against chronic diseases. Yet fewer than one in four Americans consumes the 5 to 9 daily servings of fruits and vegetables recommended by the federal Dietary Guidelines for Americans. Fruit and vegetable consumption by the general public as a whole has increased by about half a serving under key federal nutrition policy, guidance, and educational programs, as shown by the national consumption data compiled by federal agencies. But key federal food assistance programs have had mixed effects on the fruit and vegetable consumption of program participants, as shown by national consumption data. However, increasing fruit and vegetable consumption is not a primary focus of these programs, which are intended to reduce hunger and support agriculture. A number of actions the federal government could take to encourage more Americans to consume the recommended daily servings have been identified. These include expanding nutrition education efforts, such as the 5 A Day Program; modifying the Special Supplemental Nutrition Program for Women, Infants, and Children to allow participants to choose from a greater variety of fruits and vegetables; expanding the use of the Department of Defense Fresh Fruit and Vegetable Project in schools; and expanding farmers' market programs for food assistance participants. These options could require additional resources or redirecting resources from other programs.
A primary goal of the National Drug Control Strategy is to reduce the amount of illegal drugs entering the United States. South America is a major source of drugs, particularly cocaine, shipped through the transit zone to the United States. In 2000, ONDCP estimated that 31 percent of cocaine shipped from South America to the United States transited the Caribbean Corridor, and 66 percent came through the Mexico-Central America Corridor (which includes the Eastern Pacific). The remaining 3 percent went directly from South America to the continental United States (see fig. 1). According to the National Drug Control Strategy 2000 Annual Report, drug interdiction in the transit zone is intended to disrupt the flow of drugs, increase risks to traffickers, force traffickers to use less efficient routes and methods of delivery, and prevent significant quantities of drugs from reaching the United States. Drug interdiction operations may also produce information that can be used by domestic law enforcement agencies against trafficking organizations. According to the 1999 National Interdiction Command and Control Plan, a completed drug interdiction normally consists of six phases, some of which may occur simultaneously:

(1) Provision of intelligence information to drug interdiction agencies indicating that a drug-smuggling activity is planned or underway.
(2) Initial detection of a potential smuggling aircraft or vessel.
(3) Monitoring, which consists of tracking a target aircraft or vessel (maintaining information on its position, course, and speed) and moving to intercept it.
(4) Identifying drug-smuggling traffic and distinguishing it from legitimate traffic.
(5) Handoff, or shifting of primary responsibility between forces, such as from DOD to the Coast Guard.
(6) Apprehending (detaining, arresting, or seizing) suspects, drugs, or vehicles or causing the suspects to jettison their drugs or to turn back from their mission. 
In this report, we use the term “drug interdiction” to refer to activities in any or all of these six phases. In a hypothetical example, drug interdiction agencies receive intelligence that a drug-smuggling aircraft will be leaving South America en route to the United States in the next few days. A DOD radar facility subsequently detects a small, low-flying plane on a known trafficking route. Alerted by this information and other intelligence, a military aircraft uses its radar system to track the plane. A Customs aircraft then approaches the suspect plane to make a visual identification. The Customs pilot observes the suspect plane dropping what appears to be a load of drugs to a waiting smuggling vessel in the waters below. The location of the vessel is given to the Coast Guard so that it can apprehend the drug-smuggling suspects and seize the drugs. The plane is tracked by the Customs pilot, and foreign law enforcement forces are alerted so that they can apprehend the drug-smuggling suspects when the plane lands at a foreign airfield. The interdiction roles of DOD, the Coast Guard, and Customs overlap regarding the types of activities they perform and the geographic areas they cover. Because of this, the agencies must cooperate and work together in order for interdiction to be successful. Five coordinating organizations also guide and support their activities. DOD serves as the lead federal agency for detecting and monitoring air and maritime transit of illegal drugs into the United States. DOD uses equipment such as Navy ships and aircraft, Air Force aircraft, and radar for this purpose (see fig. 2). The Coast Guard and Customs also provide aircraft and ships for detection and monitoring, but DOD coordinates and integrates their efforts. By statute, DOD personnel may not directly participate in a search, seizure, arrest, or other similar activity, unless authorized by law. As a result, DOD relies on U.S. 
or foreign law enforcement agencies to exercise civilian law enforcement powers to carry out that part of the interdiction effort. Within the transit zone, the Coast Guard is the lead agency for the apprehension of maritime drug traffickers. In addition, Coast Guard law enforcement detachments are required by statute to travel on board designated Navy ships for drug interdiction missions to perform law enforcement functions. During boarding operations, the Navy ships come under the operational control of the Coast Guard detachments. These detachments perform the actual search and seizure of a suspect vessel, and make any arrests, since the U.S. military is prohibited from doing so. (See fig. 3 for examples of Coast Guard assets.) Within the transit zone, Customs is co-lead with the Coast Guard for the apprehension of drug trafficking aircraft. The agency also assists the Coast Guard with apprehension of maritime drug traffickers. For example, a Customs aircraft equipped with surface search radar can detect and track a maritime vessel, then work with Customs boats and Coast Guard ships to apprehend the suspect drug traffickers. (See fig. 4 for examples of Customs assets.) Coordinating organizations help guide and support the three agencies’ drug interdiction efforts in the transit zone. Their transit zone roles are briefly discussed in table 1. Representatives from DOD, the Coast Guard, and Customs advise the U.S. Interdiction Coordinator, and staff JIATF-East, JIATF-West, and AMICC. Appendix II contains more detailed information on these organizations. DOD, the Coast Guard, and Customs do not track data on funds obligated and assets used specifically for transit zone drug interdiction. For the purposes of this review, we asked agency officials to attempt to isolate funds they obligated and assets they used for drug interdiction in the transit zone. 
DOD, the Coast Guard, and Customs attempted to produce such estimates, but because of substantial differences among the agencies’ methods and other limitations, these estimates were not reliable and will not be presented in this report. As noted above, the three agencies do not track funds obligated specifically for transit zone drug interdiction. They provided us with estimates of those funds, but these estimates had a number of limitations. In the case of DOD, the agency could not isolate its funds obligated for transit zone drug interdiction for several reasons. First, DOD could not always distinguish funds obligated for detection and monitoring from those used for noninterdiction counterdrug activities. DOD provided us with a list of funds obligated for individual DOD programs for fiscal years 1998 through 2000, broken out by country or geographic area and by type of counterdrug activity (such as detection and monitoring). However, there were instances where the individual funded program had multiple purposes. For example, funds DOD obligated to several programs in the Caribbean were used for both detection and monitoring and for noninterdiction counterdrug activities. Including such cases resulted in an overestimate of the funds obligated exclusively for detection and monitoring, and thus for transit zone drug interdiction. Second, the funds obligated did not include active-duty personnel costs, resulting in an underestimate of total funds obligated for transit zone drug interdiction. Third, funds obligated for detection and monitoring in the Eastern Pacific area could not be included in our analysis because DOD did not track funds obligated specifically for that area, resulting in an underestimate of total funds obligated for transit zone drug interdiction. As with DOD, the Coast Guard could not identify funds obligated for transit zone drug interdiction. The agency’s estimate did not cover the entire time period requested and was not specific to the transit zone. 
The Coast Guard provided us with estimates of funds obligated for fiscal years 1999 and 2000. According to Coast Guard budget officials, data were unavailable for 1998 because ONDCP did not require agencies to report their funds obligated for drug interdiction until 1999. In addition, the Coast Guard’s estimates of funds it obligated for drug interdiction were based on the hours spent on drug interdiction missions combined with a cost factor. Because the agency’s tracking system does not distinguish hours spent in the transit zone from hours spent in U.S. territorial waters, the data the Coast Guard provided to us included funds obligated for drug interdiction activities in both areas. Likewise, Customs could not isolate funds obligated for transit zone drug interdiction, in this case resulting in an underestimate of these funds. Three factors contributed to this underestimation. First, the data were not representative of all of Customs’ transit zone air interdiction efforts. Customs’ estimates of funds it obligated for transit zone interdiction were based on the recorded hours each aircraft spent on interdiction activities in the transit zone, multiplied by the average hourly operating cost for that type of aircraft. Customs provided data for specific flights made during fiscal years 1998 through 2000 where pilots indicated that their mission was in the transit zone. However, pilots were not required to record that information in Customs’ data system. For example, a Customs aircraft could take off from Miami and fly to a drug interdiction mission in the transit zone. Unless the pilot specified the location of the plane, those hours would be logged to the Miami Air Branch rather than to any specific geographic area. Customs was not able to estimate, for those flights in which the geographic area had not been recorded, the amount of time those pilots spent in the transit zone. 
Second, as with DOD, the Customs data did not include personnel costs because Customs does not track personnel costs by mission type. Third, Customs did not provide data on funds obligated for its marine unit’s transit zone drug interdiction activities because, according to a Customs official, marine unit data were unreliable prior to October 2000. The marine unit did not have a centralized reporting system before its merger with the air unit in 1999, and the agency’s data collection system was not modified to incorporate data from its marine assets until October 2000. Agency budget officials told us that although they do not track funds obligated for transit zone drug interdiction, they track funds obligated for drug interdiction in other ways that are more consistent with their responsibilities in the transit zone and elsewhere. For example, ONDCP requires that DOD, the Coast Guard, and Customs estimate funds obligated for each goal of the National Drug Control Strategy, including the goal of protecting America’s air, land, and sea frontiers from the drug threat. Budget officials from the Coast Guard and Customs told us that, in pursuit of this goal of the Strategy, their drug interdiction missions often involved activities that took place in both the transit zone and in U.S. territory or U.S. territorial waters. Because of this, these agencies focus on tracking budget data that are not specific to the transit zone. Coast Guard and Customs officials told us that tracking budget data specifically by transit zone would not enhance their capabilities to manage their overall drug interdiction responsibilities. Because DOD, the Coast Guard, and Customs do not track data on assets used (flight hours and ship days) explicitly for transit zone drug interdiction, the three agencies attempted to estimate this information in response to our request. As with their estimates of funds obligated, the asset estimates also had a number of limitations. 
Specifically, DOD provided data for fiscal year 2000, collected from JIATF-East and JIATF-West, on the amount of time DOD assets spent on detection and monitoring activities in the transit zone. We requested data for fiscal years 1998 and 1999 on DOD’s flight hours and ship days directly from JIATF-East and -West. In May 1999, JIATF-East changed the way it collected information on flight hours and ship days, when it began differentiating between total time spent on detection and monitoring missions (including time en route to the mission area) and the amount of time the asset was actually on-site. This change limits direct comparison of asset use data across the 3 fiscal years. The Coast Guard’s difficulties in estimating asset time used for drug interdiction in the transit zone were similar to those it faced in estimating funds obligated. The agency provided us with asset data for fiscal years 1998 through 2000 but could not isolate its transit zone drug interdiction time because plane and ship crews do not track their time that way. The Coast Guard records flight and ship hours by type of mission (such as drug interdiction, migrant interdiction, or fisheries enforcement), but not by zone. As with its data on funds obligated, the asset data provided by Customs underestimates the time it used for transit zone drug interdiction. We received flight hour data from Customs for fiscal years 1998 through 2000. According to Customs officials, the asset data represent a substantial undercount of actual drug interdiction time in the transit zone for two reasons. First, pilots are not required to record the location of their missions in Customs’ data system. Customs, therefore, only provided us with flight hour data in which that data field was filled in. As a result, one Customs official estimated that the data we received were missing “well over” 25 percent of the agency’s transit zone flight hours. 
Second, Customs could not provide us with reliable data on its transit zone drug interdiction ship days. Before the merger of the air and marine units in 1999 and modifications to the agency’s data collection system in October 2000, the marine unit did not have a centralized reporting system. The measures of results that DOD, the Coast Guard, and Customs track to demonstrate their effectiveness in transit zone drug interdiction varied during fiscal years 1998, 1999, and 2000. Each agency collected different kinds of measures, which varied in terms of whether they focused on detection and monitoring or on drug seizures and whether or not they focused specifically on the agency’s activities in the transit zone. DOD is developing measures of results that focus on its role in the detection and monitoring of drug trafficking, and are specific to the transit zone. The Coast Guard tracks the amount of drugs seized, as well as the cocaine seizure rate, although neither measure is specific to the transit zone. Customs tracked transit-zone specific measures, including drugs seized as a result of Customs assistance, up until fiscal year 1999, and then began to track results of its detection and monitoring efforts more generally, not just in the transit zone. DOD, the Coast Guard, and Customs tracked different measures of results to assess their effectiveness in transit zone drug interdiction efforts. DOD and Customs focused on the results of detection and monitoring efforts and the Coast Guard focused more on seizure-based information, which is consistent with their roles. Table 2 presents examples of DOD’s, the Coast Guard’s, and Customs’ measures of results during fiscal years 1998 through 2000. DOD began tracking measures of results in fiscal year 2000, as a step towards developing formal measures of effectiveness. 
DOD is not allowed to make drug seizures, and thus has begun to develop measures of results that focus on its role in detection and monitoring. DOD’s measures of results are specific to the transit zone, and include, among other things, the amount of cocaine seized in the transit zone where ships, planes, or radar under DOD’s control were the initial detection assets, as well as the proportion of cocaine seized out of the total estimated amount of cocaine flow in the transit zone where ships, planes, or radar under DOD’s control were the initial detection assets. DOD also collected more detailed information on these results, broken down by specific types of ships, planes, and radar under the control of JIATF-East and JIATF-West, in order to examine the effectiveness of each type of asset. All results data are classified. During fiscal years 1998 through 2000, the Coast Guard tracked: (1) the amount of cocaine seized, (2) the amount of marijuana seized, and (3) the cocaine seizure rate. The amount of cocaine or marijuana seized is not isolated to the transit zone. Tracking the amount of drugs seized alone as a measure of effectiveness has limited utility. That is because increased seizures may be a function of either the increased effectiveness of interdiction agencies or increased drug flow into the United States. The Coast Guard has attempted to address this issue by tracking the cocaine seizure rate. The cocaine seizure rate is the amount of cocaine seized as a percentage of the total estimated cocaine flow into the United States. None of these measures are specific to the transit zone. In fiscal year 1998, Customs kept track of three transit-zone specific measures of results, including the amount of cocaine and marijuana seized in the transit zone and the track rate (whether suspect air targets were successfully tracked). 
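The cocaine seizure rate described above is a simple ratio of the amount seized to the total estimated flow. A minimal sketch of the calculation (the quantities below are assumed values for illustration, not actual Coast Guard or interagency figures):

```python
def seizure_rate(pounds_seized: float, estimated_flow_pounds: float) -> float:
    """Amount seized as a percentage of the total estimated flow.

    Unlike raw seizure totals, the rate does not rise simply because
    more drugs are moving toward the United States.
    """
    if estimated_flow_pounds <= 0:
        raise ValueError("estimated flow must be positive")
    return 100.0 * pounds_seized / estimated_flow_pounds

# Illustration with assumed numbers: 120,000 lbs seized out of an
# estimated 1,000,000 lbs of flow.
print(f"{seizure_rate(120_000, 1_000_000):.1f}%")  # -> 12.0%
```

Because the denominator is itself an estimate, the rate is only as reliable as the underlying flow estimate.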
According to agency officials and Customs reports, transit zone cocaine and marijuana seizures were those made by foreign law enforcement agencies with assistance from Customs air or marine assets. In fiscal year 1999, Customs dropped the track rate but continued to track transit zone cocaine and marijuana seizures. As of fiscal year 2000, Customs no longer tracked measures of results specific to the transit zone. Customs officials said that the agency’s primary responsibility in the transit zone is to provide detection and monitoring and support to agencies, such as the Coast Guard and foreign law enforcement, that are charged with apprehending drug smugglers. Customs officials also said that its air and marine assets travel across the source, transit, and arrival zones and are not isolated to the transit zone. Due to these factors, Customs has discontinued reporting transit-zone specific measures, including seizures, and is developing new measures of effectiveness of its air and marine assets in detecting drug smugglers. The new measures include the number of incidents in which cocaine that was intended to enter the borders of the United States is dropped by smuggler aircraft before entering the country and the number of times drug-smuggling aircraft enter the United States from outside of its borders. These measures are reported in Customs’ reports as baselines for future data collection. None of the new measures isolates the results of Customs’ activities in the transit zone. Both the Coast Guard and Customs have counted the same cocaine seizures in their individual agency databases and subsequently reported the same seizures as contributing to the measures of results used to track their effectiveness in drug interdiction. 
And although DOD does not seize drugs, it has begun to track whether DOD ships, planes, and radar participated in detection and monitoring activities that resulted in cocaine seizures in the transit zone, and many of these cocaine seizures are the same ones reported by the Coast Guard or Customs. Agency officials we spoke with told us that they believe it is appropriate for each agency to get credit for its involvement in seizing cocaine, since without the participation of any one agency, the seizure may not have occurred. Agencies have also established mechanisms designed to ensure that the seizures in which they participate are being reported accurately. We identified two interagency databases—the Federal-wide Drug Seizure System (FDSS) and the Consolidated Counterdrug Database (CCDB)—which were designed, among other things, to improve the accuracy of cocaine seizure data when multiple agencies participate in the seizures. While designed to improve accuracy, neither the agencies’ database controls nor the two interagency databases were designed to prevent multiple agencies from each counting the same seizure in their specific agency databases. In fiscal years 1998, 1999, and 2000, both the Coast Guard and Customs counted in their agency annual cocaine seizure statistics some of the same transit zone seizures. These seizures ultimately contributed to the totals reported by the two agencies as measures of their effectiveness in drug interdiction. The Coast Guard counted towards its overall agency cocaine seizure totals cocaine seizures made by Coast Guard personnel, or those made by Coast Guard personnel working with Customs, other federal law enforcement agencies, or foreign law enforcement agencies. 
Coast Guard officials told us that the Coast Guard only claims credit for participating in seizures made by other agencies (including foreign law enforcement agencies) when the Coast Guard was the lead agency in the drug interdiction operation, or when its participation was “substantial.” In fiscal year 2000, the Coast Guard’s database shows that it made 58 cocaine seizures, totaling 132,000 pounds. Of the 58 cocaine seizures, the Coast Guard data show that in 38 instances multiple agencies participated in the seizure. Of these 38, 13 instances involved Customs, 16 instances involved DOD (through the use of Coast Guard law enforcement detachments working on board U.S. Navy ships), and 9 instances involved local, other federal or foreign agencies, but not Customs or DOD. In reporting overall agency cocaine seizures, the Coast Guard included all 58 seizures as part of the total the agency seized during fiscal year 2000 (and thus contributing to the Coast Guard’s seizure rate during that year), but the reports did not indicate whether or not the seizures were made with the assistance of other agencies. Customs reported transit zone cocaine seizures in which it participated in two ways. First, in fiscal years 1998 and 1999, it reported the amount of cocaine seized by foreign law enforcement agencies as a result of Customs assistance in the transit zone. In fiscal years 1998 and 1999, Customs data showed that it assisted in 20 transit zone cocaine seizures, totaling about 19,000 pounds of cocaine. These totals included 6 cocaine seizures (totaling about 8,000 pounds) that were also included in the Coast Guard’s statistics because the Coast Guard had also participated in the seizure. As discussed in the previous section of this report, Customs discontinued reporting transit zone seizures as a measure of effectiveness after fiscal year 1999. Second, Customs also reported the overall amount of cocaine seized by the agency in fiscal years 1998 through 2000. 
This measure includes instances where Customs participated in transit zone seizures with other federal agencies. According to Customs officials and our review of Customs seizure data, Customs personnel participated in transit zone cocaine seizures in different ways, including (1) working on multiagency drug-smuggling investigations which produced intelligence that resulted in cocaine seizures by other federal agencies; (2) detecting, monitoring, or tracking suspect drug-smuggling aircraft or vessels that were ultimately apprehended by other federal agencies; or (3) discovering cocaine during searches of vessels seized by other federal agencies and escorted to U.S. ports. In reporting overall cocaine seizures in fiscal years 1998, 1999, and 2000, Customs did not report separately the number of seizures resulting from these different types of participation. Our review of 26 transit zone cocaine seizures that were described in federal government press releases during fiscal years 1998, 1999, and 2000 showed that both the Coast Guard and Customs counted the same 16 of the 26 seizures in their respective seizure databases and both subsequently reported these 16 seizures as contributing to each agency’s measure of results used to track effectiveness in drug interdiction. A review of the Coast Guard’s and Customs’ seizure databases, and two interagency seizure databases, showed that both agencies participated in some capacity in each of these 16 seizures. The Coast Guard’s role in these seizures included seizing cocaine after pursuing, apprehending, and boarding drug-smuggling vessels and locating and retrieving bales of cocaine jettisoned by smugglers during a pursuit. Customs’ role in these seizures included participating in drug-smuggling investigations that resulted in a number of the seizures; detecting, monitoring, and tracking drug-smuggling aircraft or vessels; and searching vessels that had been apprehended by the Coast Guard. 
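The double counting described above, in which a jointly made seizure appears in each participating agency's totals, can be sketched with sets of case IDs. The IDs and counts below are invented for the example and do not correspond to actual seizure records:

```python
# Hypothetical seizure case IDs; "JOINT-*" marks seizures in which both
# agencies participated and which each agency records in its own database.
coast_guard_cases = {"CG-01", "CG-02", "JOINT-01", "JOINT-02"}
customs_cases = {"CS-01", "JOINT-01", "JOINT-02"}

counted_by_both = coast_guard_cases & customs_cases   # overlap
unique_seizures = coast_guard_cases | customs_cases   # distinct seizures

# Summing the two agencies' totals overstates the number of distinct
# seizures by the size of the overlap.
print(len(coast_guard_cases) + len(customs_cases))  # -> 7
print(len(unique_seizures))                         # -> 5
print(sorted(counted_by_both))                      # -> ['JOINT-01', 'JOINT-02']
```

This is why per-agency totals cannot simply be added to obtain a governmentwide figure; the interagency databases discussed below exist in part to provide the unduplicated count.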
Although DOD is not authorized to seize drugs, it had begun to track in fiscal year 2000 the amount of cocaine seized as a result of detection and monitoring by its ships, planes, and radar under the control of JIATF-East or JIATF-West, in an attempt to determine how well DOD detection and monitoring assets contributed to the drug interdiction effort. These totals include only those cocaine seizures where JIATF-East or JIATF-West assets were determined to be the first assets to detect the drug-smuggling aircraft or vessel. If the actual cocaine seizures were made by the Coast Guard, Customs, or other U.S. or foreign law enforcement agencies, these seizures would also be counted in those agencies’ statistics. Agency officials with whom we spoke recognized that a number of agencies may be reporting the same cocaine seizures. They told us that it was appropriate for each agency to count a seizure in which it participated, for a number of reasons. First, agency time and effort had been expended on the seizure. Second, there was a central clearinghouse—the FDSS—where federal agencies were to report cocaine seizure data. Those data, rather than individual agency data, could be used to determine the overall amount of cocaine seized by federal agencies in the United States. Third, interagency cooperation would be hindered if only one agency could receive “credit” for a specific seizure that involved several participating agencies. In many cases, the seizure would not have occurred if any one of the participating agencies had been absent. DOD, the Coast Guard, and Customs publicized cocaine seizures made during fiscal years 1998, 1999, and 2000 in 21 press releases that were available on the agencies’ web sites, and in all but 1 of the press releases, multiple agencies were credited as having participated in the seizures. 
A review of the Coast Guard’s and Customs’ seizure databases, and two interagency seizure databases, showed that there was general consistency between what was stated in the press releases and what was reported in the seizure databases, in terms of the size of the seizures, and the agencies listed as participating in the seizures. Also, in more than two-thirds of the press releases, the seizure was highlighted as an example of successful interagency cooperation. The Coast Guard and Customs appeared to have established controls in their agency seizure databases with the goal of recording accurately the size of the cocaine seizures that were made and preventing multiple counting of cocaine seizures within the same agency. Both the Coast Guard’s and Customs’ controls included the assignment of unique case identification numbers to seizures, supervisory or headquarters review of seizure amounts, and reconciliation of agency data with interagency drug seizure databases. DOD, in tracking cocaine seizures that resulted from detection and monitoring by assets under the control of JIATF-East and JIATF-West, had also established a number of controls designed to ensure accurate reporting. DOD’s controls included tracking assets that were involved in cocaine seizures by a unique case identification number and reconciling data with an interagency database. These procedures have been designed so that each agency has an accurate count of the cocaine seizures in which it participated. More information about agency database controls appears in appendix IV. The FDSS was established in 1989 with the goal of eliminating the multiple counting of drug seizures, so that policymakers could determine the overall amount of drugs seized by federal agencies. The database is managed by DEA. 
Prior to the establishment of the system, there was no central clearinghouse for seizure reporting, and the only way to obtain a total for the amount of drugs seized by federal agencies was to add up the amount of drugs reported as seized by each agency. Because of the overlap between various agencies’ records, this resulted in an overstated amount. Although the FDSS has controls to prevent the same seizures from being counted more than once, FDSS was not designed to prevent individual agencies from reporting the same seizures in their own databases. The El Paso Intelligence Center (EPIC) serves as the central clearinghouse for the reporting of drug seizure data to the FDSS. Five federal agencies, including the Coast Guard and Customs, currently report drug seizures to the FDSS. When a representative from one of these five agencies calls in to report a seizure, EPIC issues a unique identification number, and various information on the seizure—such as weight and type of drug, date and time of the seizure, location of the seizure, reporting agency, and participating agencies—is recorded in a computerized log. EPIC has instituted a number of procedures to prevent the same seizure from being reported to FDSS more than once by the same, or different, agencies. These include automated and manual checking of records to identify potential multiple reports, requirements that changes to seizure information be made by the same agency official who made the first report, and automated tracking of changes made by EPIC personnel to the automated log. FDSS officials told us that FDSS procedures are not designed to prevent more than one agency from reporting any one seizure in its own database. Likewise, our review of a nonrepresentative sample of cocaine seizures made by the Coast Guard and Customs showed that when both agencies participated in a seizure, some seizures that were reported to EPIC were also counted in each agency’s individual reports. 
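The clearinghouse control just described can be loosely sketched as a central log that keeps one record per seizure no matter how many agencies report it, while each agency remains free to count the seizure in its own database. The matching key, field names, and ID format below are simplifications invented for the example, not EPIC's actual procedures:

```python
class Clearinghouse:
    """Toy FDSS/EPIC-style central seizure log (illustrative only)."""

    def __init__(self):
        self.log = {}    # seizure id -> record
        self._seen = {}  # (date, location, pounds) -> seizure id

    def report(self, agency, date, location, pounds):
        key = (date, location, pounds)
        if key in self._seen:
            # A matching seizure is already logged: record the caller as
            # a participant instead of creating a second record.
            sid = self._seen[key]
            self.log[sid]["participants"].add(agency)
            return sid
        sid = f"FDSS-{len(self.log) + 1:04d}"  # unique identification number
        self.log[sid] = {"pounds": pounds, "participants": {agency}}
        self._seen[key] = sid
        return sid

    def total_pounds(self):
        # One record per seizure, so this total is free of double counting.
        return sum(r["pounds"] for r in self.log.values())

hub = Clearinghouse()
a = hub.report("Coast Guard", "1999-05-01", "Caribbean", 8_000)
b = hub.report("Customs", "1999-05-01", "Caribbean", 8_000)
print(a == b, hub.total_pounds())  # -> True 8000
```

The sketch shows the key design point: deduplication happens only at the clearinghouse, so nothing stops each agency's own database from also recording the 8,000-pound seizure.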
The CCDB contains information on nearly all air and maritime cocaine seizures made in the transit zone and is used by DOD, the Coast Guard, and Customs as a check on the accuracy of their agencies’ cocaine seizure data. Unlike the FDSS, the CCDB includes information on transit zone cocaine seizures made by foreign law enforcement agencies, as well as by U.S. law enforcement agencies. The CCDB also includes information on cocaine-smuggling events that were observed by drug interdiction agencies, but did not result in seizures (such as suspected drug-smuggling vessels that were pursued, but got away), and other cocaine-smuggling events that were believed to have occurred based on reliable intelligence reports but where no confirmation of their occurrence was received. The CCDB data are used as a source for estimates of cocaine flow through the transit zone. The CCDB manager said that the accuracy of the database comes in large part from the opportunity for interagency discussion and review of information in the database. The CCDB, like the FDSS, is not designed to prevent individual agencies from each counting a specific seizure as its own. The CCDB is managed by USIC. DOD, the Coast Guard, and Customs, as well as other agencies, submit information on cocaine seizures to the CCDB manager. Representatives from agencies involved in transit zone interdiction, including DOD, Customs, and the Coast Guard, meet quarterly to discuss each seizure that has been made in the previous quarter. Topics discussed include which agency and which asset first detected the drug-smuggling aircraft or vessel, the location of the seizure, and other participating agencies and assets. 
When discrepancies exist in the information that has been reported to the CCDB manager on the size of cocaine seizures, or regarding the agencies and assets that participated in the seizure, the agency representatives discuss the discrepancies and, if needed, vote on what information they believe is most accurate. Following the meeting, the CCDB database manager sends the revised data back to the agency for an additional review. Representatives from ONDCP and DOD charged with preparing semiannual cocaine flow estimates regularly attend the conference. In April 2001, we attended a quarterly CCDB meeting and observed that these representatives were concerned with maintaining the standards for entering data into the database, and thus appear to serve as an additional check on the overall accuracy of the data. Although the CCDB may be used by individual agencies to validate information in an agency’s database, the database is not designed to prevent multiple agencies from counting the same seizure in their own databases. Our review of a nonrepresentative sample of cocaine seizures made by the Coast Guard and Customs showed that when both agencies participated in a seizure, and information about the seizure appeared in the CCDB, some seizures continued to be counted in each agency’s individual reports. DOD, the Coast Guard, and Customs each play a role in interdicting drugs in the transit zone. It is difficult to determine the funds obligated and the flight hours and ship days used by the agencies for this effort, and the results of these efforts, because the agencies tend not to track data specifically by the transit zone. Instead, agency officials said they track data in ways that are consistent with their more general responsibilities in the transit zone and elsewhere and indicated that tracking data specifically by the transit zone would not enhance their capabilities to manage their drug interdiction responsibilities. 
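The quarterly reconciliation step can be sketched as a simple majority vote over the values agencies reported for a disputed field. The vote rule and the figures below are hypothetical; the actual CCDB review involves discussion before any vote is taken:

```python
from collections import Counter

def reconcile(reported_values):
    """Pick the value most agencies agree on, loosely mirroring the
    quarterly CCDB review in which representatives vote on discrepancies.

    reported_values maps agency name -> the value that agency reported.
    """
    counts = Counter(reported_values.values())
    value, _ = counts.most_common(1)[0]
    return value

# Hypothetical conflicting reports of one seizure's weight (pounds).
reports = {"DOD": 4_100, "Coast Guard": 4_000, "Customs": 4_000}
print(reconcile(reports))  # -> 4000
```

As with the clearinghouse, this resolves what the shared record says about a seizure; it does not change what each agency records in its own database.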
It is not uncommon for cocaine seizures to involve the efforts of more than one of these agencies. In an effort to ensure the accuracy of seizure data collected, agencies we reviewed had established controls in their own seizure databases, reported seizures to a central clearinghouse—the FDSS—and participated in quarterly CCDB meetings where they discussed the details of specific transit zone seizures. Managers of the FDSS and CCDB had also established controls to ensure the accuracy of the data reported to them by the agencies. We believe that the interagency databases can provide policymakers with useful information about the results of the overall effort by U.S. and foreign agencies to interdict cocaine in the transit zone. However, the agency database controls and the interagency databases are not designed to prevent the same cocaine seizures from being reported by more than one agency. We requested comments on a draft of this report from the Secretaries of Defense and Transportation; the Commissioner of the U.S. Customs Service; the Administrator of the U.S. Drug Enforcement Administration; and the Director of the Office of National Drug Control Policy. The agencies concurred with the report. They also provided technical comments, which have been incorporated in this report where appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 10 days from its issue date. At that time, we will send copies of the report to the Senate and House Judiciary Committees, the Senate Caucus on International Narcotics Control, the Secretaries of Defense and Transportation, the Commissioner of the U.S. Customs Service, the Administrator of the U.S. Drug Enforcement Administration, the Director of the Office of National Drug Control Policy, and the Director of the Office of Management and Budget. We will also make copies available to others upon request. 
Major contributors to this report are listed in appendix V. If you or your staff have any questions concerning this report, please contact me at (202) 512-8777. To describe the roles of the Department of Defense (DOD), the U.S. Coast Guard (Coast Guard), and the U.S. Customs Service (Customs) in transit zone drug interdiction, we interviewed officials of those three agencies and officials at selected counterdrug coordinating organizations, including the Office of National Drug Control Policy (ONDCP) and the U.S. Interdiction Coordinator (USIC). We also visited the Joint Interagency Task Force East (JIATF-East), the Joint Interagency Task Force West (JIATF-West), Customs’ Air and Marine Interdiction Coordination Center (AMICC), and Coast Guard and Customs field offices in the Miami, Florida, area. We reviewed documents guiding the transit zone interdiction effort, such as the 1999 National Interdiction Command and Control Plan, the National Drug Control Strategy 2000 Annual Report, and agency authorizing legislation. We also reviewed our published reports and testimonies (see Related GAO Products). To determine the extent to which we could identify the funds obligated and assets used for transit zone drug interdiction activities of DOD, the Coast Guard, and Customs, we interviewed agency and ONDCP budget officials, reviewed agency budget documents, and reviewed data on flight hours and ship days from agency data systems. To obtain data on funds obligated for transit zone drug interdiction, we requested these data for fiscal years 1998 through 2000 from agency budget officials using a structured interview format. The officials told us that they do not track data on funds obligated specifically for transit zone drug interdiction. We then asked these officials to attempt to estimate the transit-zone specific obligation data, which the officials did. 
We also requested data on flight hours and ship days used for transit zone drug interdiction activities in fiscal years 1998 through 2000 from officials at DOD, the Coast Guard, Customs, JIATF-East, and JIATF-West. We reviewed the resulting agency data on funds obligated, and the data on flight hours and ship days from agency data systems for fiscal years 1998 through 2000 (1998 data on funds obligated were not available for the Coast Guard), and identified several limitations. We interviewed budget officials from DOD, the Coast Guard, and Customs regarding the limitations of their estimates of funds obligated for transit zone drug interdiction. We also interviewed officials from DOD, the Coast Guard, Customs, JIATF-East, and JIATF-West regarding the limitations of their data on flight hours and ship days. In addition, we interviewed ONDCP budget officials, reviewed a study commissioned by ONDCP on agency methods for calculating their counterdrug budgets, and interviewed the study’s authors. To identify what results DOD, the Coast Guard, and Customs track to demonstrate their effectiveness in transit zone drug interdiction, we interviewed agency headquarters officials and obtained available results data for fiscal years 1998 through 2000. Along with the information provided by agency officials, we also reviewed annual performance reports required by the Government Performance and Results Act of 1993 for DOD, the Coast Guard, and Customs, for fiscal years 1998 through 2000. To determine whether multiple agencies are reporting the same cocaine seizures, we interviewed managers of agency seizure databases at DOD, the Coast Guard, and Customs to identify how agencies record and report cocaine seizures in which they participate. We reviewed user’s guides, training manuals, and written policies and guidance for the agency seizure databases (these documents are listed in app. III). 
We also obtained cocaine seizure data for fiscal years 1998 through 2000 from the Coast Guard and Customs and cocaine seizure data for fiscal year 2000 from DOD. To identify procedures and data systems in place to ensure the accuracy and completeness of cocaine seizure data when multiple agencies participate in seizures, we identified two interagency data systems where such seizures are recorded—the Federal-wide Drug Seizure System (FDSS) and the Consolidated Counterdrug Database (CCDB). We interviewed managers of the two systems and reviewed user’s guides, training manuals, and written policies and guidance (these documents are also listed in app. III). We also attended one of the quarterly CCDB conferences, at which representatives of the agencies involved in transit zone interdiction meet for 5 days to discuss the accuracy and completeness of data provided by the agencies to the CCDB manager during the previous quarter. In addition to reviewing related documentation and reports and interviewing agency officials (as discussed above), we further assessed the reliability of the data in the agency and interagency databases by comparing the records in the Coast Guard and Customs databases and the FDSS and CCDB for a nonrepresentative sample of seizures that were reported in agency press releases. We selected the universe of publicly available agency press releases in fiscal years 1998, 1999, and 2000 and compared the information in them with information in agency and interagency seizure databases. We located the press releases through an Internet search of DOD’s (headquarters, Air Force, Navy, Southern and Pacific Commands, and JIATF-East and JIATF-West), the Coast Guard’s (headquarters and districts), and Customs’ (headquarters) web sites. We found 21 press releases referring to 27 cocaine seizures made during fiscal years 1998, 1999, and 2000. 
We requested from the database managers for the Coast Guard, Customs, the FDSS, and the CCDB available information on 26 of these seizures from their respective databases and any other supporting information that the managers may have maintained. Information was available on 25 of the 26 seizures from the Coast Guard database, 18 of the 26 seizures from the Customs database, 25 of the 26 seizures from the FDSS, and all 26 seizures from the CCDB. We compared the press releases with information provided by the agencies from their databases, in terms of the reported amount of cocaine seized, and the agencies listed as participating in the seizure. While we determined that the data were reliable enough for our purposes, our ability to generalize from our review of agency cocaine seizure press releases is limited by the fact that the seizures described in the press releases do not constitute a representative sample of all agency cocaine seizures. We conducted our work from January through October 2001 in accordance with generally accepted government auditing standards. The Office of National Drug Control Policy (ONDCP) was established by the Congress to set policies, goals, priorities, and objectives for national drug control; develop a national drug control strategy; and coordinate and oversee the implementation of the strategy, among other things. ONDCP is the President’s primary policy office for drug issues. ONDCP oversees and coordinates the drug control efforts of U.S. federal agencies engaged in implementing the strategy and managing programs, but does not manage drug control programs itself. ONDCP also has authority to review various agencies’ drug control budget requests, including the Department of Defense (DOD), the U.S. Coast Guard (Coast Guard), and the U.S. Customs Service (Customs), to ensure they are sufficient to implement the objectives of the national strategy, but it has no direct control over how agency budgetary resources are used. The U.S. 
Interdiction Coordinator (USIC) provides strategic advice and oversight for international interdiction efforts in the source and transit zones. The USIC is designated by the director of ONDCP. The current USIC is the Commandant of the Coast Guard, who is advised by representatives from DOD, the Coast Guard, Customs, the Drug Enforcement Administration (DEA), and the Central Intelligence Agency. The State Department position was vacant at the time of our review. The USIC reports to the Director of ONDCP regarding two areas: (1) what resources are needed to achieve the objectives of the National Drug Control Strategy in the future and (2) how interdiction assets are performing. However, the USIC does not possess authority to exercise operational control of employed assets or field operations. USIC also organizes conferences three or four times a year, attended by organizations that are involved with international interdiction, to discuss interdiction issues and the status of interdiction efforts. The conferences allow principal players from various parts of the drug interdiction effort (law enforcement, intelligence, and the military) to discuss drug interdiction issues. The Joint Interagency Task Force East (JIATF-East) is the primary center for detection, monitoring, identification, and handoff of suspect air and maritime drug trafficking events in part of the Pacific Ocean; the Gulf of Mexico, Caribbean Sea, Mexico, Central America and surrounding seas; the Atlantic Ocean; and the continental landmass extending to the southern end of South America. JIATF-East also focuses on support to foreign nations’ counterdrug initiatives and the detection, monitoring, and handoff of suspect drug targets to foreign law enforcement agencies. JIATF-East hands off control of operations to law enforcement agencies (such as the Coast Guard) at the arrest stage of an event. 
Detection and monitoring responsibilities within this area of responsibility extend to within 100 nautical miles from the continental United States for air targets, to continental U.S. territorial seas for maritime targets, and to the U.S. territorial seas of Puerto Rico and the U.S. Virgin Islands for both air and maritime targets, and the Bahamas. JIATF-East is under the command of DOD’s Southern Command and is staffed by representatives from DOD, the Coast Guard, Customs, the Federal Bureau of Investigation (FBI), and DEA. The Joint Interagency Task Force West (JIATF-West) focuses primarily on illegal drugs originating in southeast and southwest Asia, support of foreign nations’ and U.S. country teams’ counterdrug initiatives, and the detection, monitoring, and identification of suspect drug targets in their area of responsibility for subsequent handoff to U.S. or foreign law enforcement authorities. Its area of responsibility includes part of the Pacific Ocean. Detection and monitoring responsibilities within this area extend to the U.S. territorial seas for maritime targets and up to 100 nautical miles from the continental United States for air targets. Additionally, JIATF-West may provide counterdrug support outside of its area of responsibility for foreign nations (such as Mexico). JIATF-West is under the command of DOD’s Pacific Command and is also staffed by representatives from DOD, the Coast Guard, Customs, the FBI, and DEA. Customs’ Air and Marine Interdiction Coordination Center (AMICC) identifies aircraft coming to the U.S. border and coordinates the interception and apprehension of suspects. Its area of responsibility extends 100 nautical miles from the U.S. landmass (except for the territory of the Bahamas, which is within the area of responsibility assigned to JIATF-East). 
AMICC is the primary center responsible for the identification of aircraft tracked within the JIATF-West area of responsibility and the transit zone portion of the JIATF-East area of responsibility. AMICC uses the Federal Aviation Administration’s flight system and more than 70 radars to identify and track aircraft. In addition, AMICC supports U.S. drug interdiction operations from airfields in Mexico and Aruba and assists the Mexican government’s law enforcement. AMICC also provides support for drug interdiction activities in the Caribbean. AMICC is staffed by detection systems specialists from DOD, the Coast Guard, and Customs; intelligence research specialists; and communications specialists. Counterdrug Performance Results. Memorandum from the Office of the Department of Defense Coordinator for Drug Enforcement Policy and Support, October 10, 2000. Aviation Marine Operations Reporting System, Student Guide. U.S. Customs Service Air and Marine Interdiction Division. Air and Marine Operations Reporting System Users Guide. U.S. Customs Service Air and Marine Interdiction Division. SEACATS Search/Arrest/Seizure Procedures. U.S. Customs Service Office of Information and Technology and Office of Field Operations, March 2001. SEACATS Air/Marine/Foreign Incident Report (AMFIR) Short Form Seizure Case Initiation. U.S. Customs Service Office of Information and Technology and Office of Field Operations, February 2001. EPIC Procedures for Issuing Federal Drug Identification Numbers, from EPIC Watch Operations Training Book, El Paso Intelligence Center, Drug Enforcement Administration. Federal-wide Drug Seizure System Automated FDIN Log User Guide. Drug Enforcement Administration, May 1991. Federal-wide Drug Seizure System Data Element Description and Validation Criteria for FDSSLOG File and FDSS Master File. Drug Enforcement Administration. Federal-Wide Drug Seizure System Federal Drug Identification Number (FDIN). El Paso Intelligence Center, July 2000. 
FDIN Threshold Weights and Equivalents. Drug Enforcement Administration, January 1, 2001. Consolidated Counterdrug Database (CCDB) User’s Guide. U.S. Interdiction Coordinator’s Office, February 2001. This appendix contains additional information on the controls in the U.S. Coast Guard’s (Coast Guard) and U.S. Customs Service’s (Customs) seizure databases designed to ensure the accuracy of cocaine seizures reported by these agencies. The appendix also contains additional information on the databases that the Department of Defense (DOD) uses to track cocaine seized in the transit zone as a result of detection and monitoring by ships, planes, and radar under the control of the Joint Interagency Task Force East (JIATF-East) or the Joint Interagency Task Force West (JIATF-West). The Coast Guard had instituted the following controls to ensure that it was reporting an accurate total for the cocaine seizures that it made. These included: (1) tracking cocaine seized in the transit zone by a unique case identification number, (2) reviewing each seizure at several levels of the Coast Guard command structure, and (3) reconciling the Coast Guard data with data reported to two interagency databases. Coast Guard cocaine seizures are to be reported by the Coast Guard district in which the seizure occurs. If the seizure is above a specified weight threshold, the district reports the approximate amount, the location of the seizure, and which Coast Guard unit made the seizure to the El Paso Intelligence Center (EPIC), which serves as the central clearinghouse for reporting of drug seizure information to the Federal-wide Drug Seizure System (FDSS). EPIC assigns a unique identification number that the Coast Guard uses to track the seizure. The weight of drugs seized will be an approximate number until the drugs arrive on shore and are turned over to Customs or, in some cases, to DEA or a foreign law enforcement agency. 
Coast Guard district staff contact the agency with control of the drugs to find out the final weight of the seizure and report any revisions to EPIC using the already assigned identification number. In addition to reporting to EPIC, Coast Guard districts report cocaine seizures up the chain of command to a Coast Guard headquarters database manager. The manager receives information from a variety of sources (e.g., electronic mail message traffic from Coast Guard ships, district reports, and area reports). When discrepancies exist between the various sources, the manager checks with each source to determine the most accurate information on the seizure. The database manager also checks Coast Guard cocaine seizure data against two interagency databases. Each quarter the manager receives a list of the Coast Guard seizures that were reported to EPIC. The weight of cocaine seizures listed by EPIC may vary from the weight reported by the districts to Coast Guard headquarters, because a final weighing of the drugs after the cocaine was turned over to Customs may have resulted in a different amount. The manager, as well as personnel from Coast Guard’s field units, also attends the quarterly Consolidated Counterdrug Database (CCDB) conference in which all seizures in the transit zone are discussed. He said that there was an instance when he had received information from EPIC about a number of seizures made by the Coast Guard that did not appear in the Coast Guard’s database. At the CCDB conference, representatives from the Coast Guard’s field units had further information on the seizures that helped clarify whether they should or should not be included in the Coast Guard’s seizure database. The Coast Guard cocaine seizure data was stored in spreadsheet format at Coast Guard headquarters. 
The spreadsheet was electronically linked to written documentation that supported each seizure, such as electronic mail messages from Coast Guard ships and daily operational summaries from Coast Guard districts, so that a paper trail existed to support changes made to seizure information in the database. According to the Coast Guard database manager, this information, as well as the hard copies supporting any changes made to it, was deleted about 6 months after receipt. Customs had instituted the following controls in its seizure database: (1) assignment of a unique agency case identification number to each seizure where Customs assisted with the seizure, seized drugs, or took custody of drugs seized by other agencies; (2) reconciliation with the CCDB; (3) supervisory review of seizure reports, and periodic review of seizure information by headquarters personnel; and (4) automated tracking of any changes made to the seizure database. Customs’ transit zone cocaine seizures were recorded in two different ways. First, if Customs assisted a foreign agency with a cocaine seizure, Customs air or marine enforcement officers were to report information about their participation in the seizure to a Customs database. Customs field and headquarters personnel subsequently obtained information on the seizure from the seizing agency and input the information into the database, with a special code signifying that Customs was an assisting agency, rather than a seizing agency. Reported seizure amounts are reconciled with the CCDB data at the quarterly CCDB conferences. Second, if Customs took custody of a cocaine seizure turned over to it by another agency, Customs personnel tracked the seizure in its tracking system. 
In both cases, a unique identification number would be associated with the seizure, so that all information on the seizure that resided in separate Customs data systems (such as the flight hours spent tracking a suspect aircraft) would be linked to the seizure record. Customs required review of each seizure report by a supervisor. In addition, headquarters personnel periodically review seizure reports and check for duplicate records and seizure amounts that appear anomalous. Customs officials told us they believed these controls had prevented instances of multiple counting of seizures by two separate branches of the agency. Customs’ seizure data system is constructed so that updates, deletions, and revisions of seizure records are automatically tracked. Thus, deleted records do not disappear from the system. In fiscal year 2000, DOD began to collect data from JIATF-East and JIATF-West on the amount of cocaine seized in the transit zone as a result of detection and monitoring by ships, planes, and radar under the control of the JIATFs. The JIATFs are each required to provide to DOD a report on how much cocaine was seized as the result of detection by assets under their control, broken out by type of asset (e.g., detection by DOD ground-based radar, Customs surveillance aircraft, or Coast Guard helicopters). The JIATFs had each instituted the following controls to ensure that the data they reported on cocaine seizures were accurate: (1) the tracking of assets and drug-smuggling events by specific identification numbers and (2) reconciling the internal JIATF data with data in the CCDB. Although these controls may help ensure more accurate reporting, there remains the potential for multiple counting of seizures in those instances where a drug-smuggling aircraft or vessel crosses over both JIATFs’ areas of responsibility. 
In those cases, both JIATFs may report that an asset under their control was the initial detection asset for a particular cocaine seizure, thus potentially inflating the overall amount of cocaine seized that is reported to DOD. A DOD official informed us that DOD has provided guidance to the two JIATFs that the CCDB is to be used as one source for the report. However, DOD has left it up to each JIATF to determine how the CCDB will be used, and what other specific data sources and calculations will be used to provide data for the report. Analysts for JIATF-East and JIATF-West informed us that the data they report to DOD on cocaine seizures are derived from a combination of two sources: (1) data maintained internally by each JIATF on the activity of each asset under its control and (2) data from the CCDB regarding the type of aircraft, ship, or radar that was the initial detection asset for each seizure. Each JIATF maintains its own data on the activity of the assets under its control. The data come from such sources as daily planning documents, watch logs, and daily operational briefing documents. Each suspect cocaine-smuggling event that takes place within each JIATF’s area of responsibility (whether or not the event resulted in a cocaine seizure) is identified by a unique JIATF-East or JIATF-West case identification number and each asset involved in the incident is identified by a specific call sign. The JIATF analysts examine the various data sources and determine which asset first detected the smuggling aircraft or vessel. It is up to the JIATF-East and JIATF-West analysts to reconcile their asset data with data from the CCDB in order to determine whether a specific ship, plane, or radar was the initial detection asset for a specific cocaine seizure. The CCDB contains a field for the detection asset that initially reported the drug-smuggling target. 
Data are entered into this field after the quarterly CCDB meeting, where each seizure (and other drug-smuggling events that did not result in seizures) is discussed in detail by representatives from the agencies involved in drug interdiction in the transit zone. The representatives must agree on which asset was the initial detection asset. The CCDB database manager informed us that this specific field only describes the type of asset that made the initial detection (e.g., a Coast Guard ship or a Customs surveillance aircraft), but not whether the asset was under the command of either JIATF-East or JIATF-West. Thus, it is up to the judgment of the JIATF analysts how to report the incident. A JIATF-West analyst told us that there might be situations where both JIATF-East and JIATF-West could each claim that an asset under its command made the initial detection that resulted in a subsequent cocaine seizure. According to the analyst, this could happen because some cocaine-smuggling events in the Eastern Pacific, particularly those involving maritime vessels, take place over a series of days or weeks, and may cross over the areas of responsibility of the two JIATFs. JIATF-East may make the initial detection, but then lose the target. JIATF-West may re-acquire the target a few days later, leading to the seizure of a load of cocaine. According to the analyst, it is unclear which command should be credited with the initial detection (and the subsequent seizure), and both JIATF-East and JIATF-West may report the detection (and the seizure) to DOD. Thus, both JIATFs would be reporting the same seizure to DOD, and DOD seizure totals would be inflated. The JIATF-West analyst did not provide evidence that such reporting had occurred. However, a DOD official agreed that, given the nature of the DOD reporting requirements, such multiple reporting could take place. 
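The double-counting risk described above can be illustrated with a short sketch. The records, agency names, and identifier values below are hypothetical and are not drawn from any agency system; the idea is simply that grouping reports by a shared unique seizure identifier, analogous to the FDIN that EPIC assigns, yields an unduplicated total even when two commands report the same seizure.

```python
def unduplicated_total(records):
    """Sum seized weight once per unique seizure identifier."""
    seen = {}
    for rec in records:
        # Keep the largest reported weight per identifier, since weights
        # may be revised after the drugs are weighed on shore.
        fdin = rec["fdin"]
        seen[fdin] = max(seen.get(fdin, 0), rec["kilograms"])
    return sum(seen.values())

# Hypothetical reports: two commands claim the same seizure (same fdin).
reports = [
    {"agency": "JIATF-East", "fdin": "A-001", "kilograms": 500},
    {"agency": "JIATF-West", "fdin": "A-001", "kilograms": 500},
    {"agency": "Coast Guard", "fdin": "B-002", "kilograms": 300},
]

raw_total = sum(r["kilograms"] for r in reports)  # 1,300 kg, inflated
print(raw_total, unduplicated_total(reports))     # 1300 800
```

Summing raw reports credits the shared seizure twice (1,300 kg); grouping by identifier first gives the correct 800 kg, which is the kind of control the FDSS and CCDB reconciliation meetings are meant to provide.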
In addition to the above, Tom Jessor, Jessica Lucas, Hiroshi Ishikawa, Carolyn Ikeda, David Alexander, Michael Curro, Christine Davis, Allen Fleener, Jared Hermalin, Barbara Johnson, and Judy Pagano made key contributions to this report. Drug War: Observations on the U.S. International Drug Control Strategy (GAO/T-NSIAD-95-182, June 27, 1995). Drug War: Observations on U.S. International Drug Control Efforts (GAO/T-NSIAD-95-194, Aug. 1, 1995). Drug Control: U.S. Interdiction Efforts in the Caribbean Decline (GAO/NSIAD-96-119, Apr. 17, 1996). Drug Control: Observations on U.S. Interdiction in the Caribbean (GAO/T-NSIAD-96-171, May 23, 1996). Customs Service: Drug Interdiction Efforts (GAO/GGD-96-189BR, Sept. 26, 1996). Drug Control: Long-Standing Problems Hinder U.S. International Efforts (GAO/NSIAD-97-75, Feb. 27, 1997). Drug Control: Observations on Elements of the Federal Drug Control Strategy (GAO/GGD-97-42, Mar. 14, 1997). Drug Control: Reauthorization of the Office of National Drug Control Policy (GAO/T-GGD-97-97, May 1, 1997). Drug Control: Update on U.S. Interdiction Efforts in the Caribbean and Eastern Pacific (GAO/NSIAD-98-30, Oct. 15, 1997). Drug Control: Status of U.S. International Counternarcotics Activities (GAO/T-NSIAD-98-116, Mar. 12, 1998). Drug Control: An Overview of U.S. Counterdrug Intelligence Activities (GAO/NSIAD-98-142, June 25, 1998). Customs Service: Aviation Program Missions, Resources, and Performance Measures (GAO/GGD-98-186, Sept. 9, 1998). Drug Control: Observations on U.S. Counterdrug Activities (GAO/T-NSIAD-98-249, Sept. 16, 1998). DOD Counterdrug Activities: Reported Costs Do Not Reflect Extent of DOD’s Support (GAO/NSIAD-98-231, Sept. 23, 1998). Coast Guard: Key Budget Issues for Fiscal Years 1999 and 2000 (GAO/T-RCED-99-83, Feb. 11, 1999). Drug Control: ONDCP Efforts to Manage the National Drug Control Budget (GAO/GGD-99-80, May 14, 1999). 
Drug Control: Assets DOD Contributes to Reducing the Illegal Drug Supply Have Declined (GAO/NSIAD-00-9, Dec. 21, 1999). Drug Control: DOD Allocates Fewer Assets to Drug Control Efforts (GAO/T-NSIAD-00-77, Jan. 27, 2000). Drug Control: U.S. Efforts in Latin America and the Caribbean (GAO/NSIAD-00-90R, Feb. 18, 2000). Drug Control: International Counterdrug Sites Being Developed (GAO-01-63BR, Dec. 28, 2000). 
The GS classification system is a mechanism for organizing work, notably for the purposes of determining pay, based on a position’s duties, responsibilities, and qualification requirements, among other things. The GS system was created by the Classification Act of 1949 and was later codified in Title 5 of the U.S. Code. The origins of the GS system can be traced to the 1880s, when merit replaced political patronage as the method of filling federal jobs. Classification was seen as a necessary first step in the equitable treatment of applicants and employees. A guiding principle of the GS classification system is that employees should earn equal pay for substantially equal work. The classification system aligns positions with rates of base pay by establishing a standardized schedule, which OPM administers. The classification system is the foundation of many other human capital management policies, as shown in figure 1. Our analysis of subject matter specialists’ comments, related literature, and interviews with OPM officials identified a number of important characteristics for a modern, effective classification system, which we consolidated into eight key attributes. While each attribute is important individually, the inherent tensions between some will challenge OPM, policymakers, and stakeholders to find the optimal balance points so that all of the attributes will contribute to an effective system when assembled collectively. The weight that policymakers and stakeholders assign to each attribute—and the trade-offs made among competing attributes—is important in evaluating alternative classification designs like those found in demonstration projects or the Partnership for Public Service’s recent model. Moreover, while the attributes listed below were frequently cited by subject matter specialists and the literature we examined, there was no consensus on the priority or relative weight of these attributes. 
Subject matter specialists agreed that any changes to the classification system should align with the guiding principle of equal pay for work of substantially equal value. The eight attributes of a modern, effective classification system are as follows: Internal equity. All employees with comparable qualifications and responsibilities for their respective occupations are assigned the same grade level. External equity. All employees with comparable qualifications and responsibilities are assigned grade levels and corresponding pay ranges comparable to the nonfederal sector. Transparency. A comprehensible and predictable system that employees, management, and taxpayers can understand. Flexibility. The ease and ability to modify the system to meet agency- specific needs and mission requirements, including modifying rates of pay for certain occupations to attract a qualified workforce, within the framework of a uniform government-wide system. Adaptability. The ease and ability to conduct a periodic, fundamental review of the entire classification system that enables the system to evolve as the workforce and workplace changes. Simplicity. A system that enables interagency mobility and comparisons with a rational number of occupations and clear career ladders with meaningful differences in skills and performance, as well as a system that can be cost-effectively maintained and managed. Rank-in-position. A classification of positions based on mission needs and then hiring individuals with those qualifications. Rank-in-person. A classification of employees based on their unique skills and abilities. The values policymakers and stakeholders emphasize could have large implications for pay, the ability to recruit and retain mission critical employees, and other aspects of personnel management. This is one reason why—despite past proposals—changes to the current system have been few, as finding the optimal mix of attributes that is acceptable to all stakeholders is difficult. 
For example, on the one hand, a rank-in-person system classifies individuals based on individual qualifications such as performance, education, and seniority. This approach is used by the military and the Senior Executive Service. On the other hand, a rank-in-position system classifies positions based on factors such as the duties, responsibilities, and qualifications the position requires and is widely used across the federal government. The extent to which the design and implementation of the GS classification system balances the attributes of a modern, effective classification system varies. There are two main design features of the GS system: (1) a set of standardized occupations, and (2) statutorily defined grade levels and steps. We found that, in concept, these features incorporate several of the key attributes, including internal and external equity, transparency, simplicity, and rank-in-position. However, as agencies implement the GS system, the attributes of transparency, internal equity, simplicity, flexibility, and adaptability are reduced. This occurs, in part, because, as discussed earlier in this report, some attributes are at odds with one another, so fully achieving one attribute comes at the expense of another. OPM publishes and defines a set of occupational standards that describe and differentiate all of the different types of work performed across the government, which agencies then use to develop position descriptions. Providing standard government-wide occupational standards is an example of how transparency and internal equity are built into the system. For example, the occupational standard for an information technology specialist clearly describes the routine duties, tasks, and experience required for the position. This information is published for all of the 420 occupations defined in the GS system, so all agencies are using the same, consistent standards when writing position descriptions. 
At the same time, any occupation with the same education, experience, and other requirements should be assigned the same grade level and the same base pay range, contributing to internal equity. The GS system defines occupations narrowly, meaning that different occupational definitions may exist even for closely related occupations, like electrical engineers and electronics engineers. These two occupations both require similar backgrounds in understanding the theories of advanced mathematics, economics, and computer science. However, in application, electrical engineers tend to concentrate on the electrical systems of physical infrastructure, among other areas, while electronics engineers tend to concentrate on the electrical systems of devices such as satellites and communication systems. The precisely defined occupational standards can also enable comparisons to those occupations in the private sector, providing some level of external equity. However, in practice, having numerous, narrowly defined occupational standards may actually inhibit the system's ability to optimize these attributes, for reasons including the following: Classifying occupations and developing position descriptions in the GS system requires officials to maintain an understanding of the potential responsibilities of the individual position and of the nuances between similar occupational definitions. Without this understanding, having numerous occupations from which to choose may inhibit transparency and internal equity. For example, one subject matter specialist said that the requirements of a particular position may be met by the qualifications of more than one occupational definition. As a result, officials may not classify positions consistently, comparable employees may not be treated equitably, and the system may seem unpredictable. Having many—more than 400—occupations can limit the simplicity of the system.
For example, since individual occupations may have their own career ladders or a set number of grades for potential advancement, it can be challenging for agencies to move employees according to their core skills within and across agencies to address evolving needs. Likewise, qualified employees may be limited in their ability to advance in their general fields, as related but distinctly defined occupations may require specific experience in that occupation. Interdisciplinary occupations—those that involve duties and responsibilities closely related to more than one professional occupational standard, such as those in certain scientific research fields like natural resources management and biological sciences—provide some flexibility to agencies, as they allow agencies to combine the work of multiple occupations. This is because the position could be classified in two or more occupational series, and an employee with education and experience in either of two or more professions may be considered equally qualified to do the work. The final classification of the position is determined by the qualifications of the employee. However, interdisciplinary occupations decrease the simplicity of the system. This is because employees, management, and taxpayers may not be able to easily understand how one occupation differs from another, especially when position descriptions have overlapping responsibilities. Finally, a system composed of numerous occupational series can be cumbersome to systematically review, limiting the system's adaptability. This is because reviewing and revising occupational series can be a time-consuming effort, due in part to the analysis required to understand the differences between, and potential effects of, closely related occupations; ensure government-wide applicability; and gain consensus.
The second key design feature of the GS system is the 15 statutorily defined grade levels intended to distinguish the degrees of difficulty within an occupation, which are designed to simplify the system and provide internal equity. Agency officials assign a grade level to a position after analyzing the duties and responsibilities according to the factor evaluation system. This allows for easy comparisons of employees in the same occupation and grade level but in different agencies, providing simplicity and internal equity to the system, and it may help employees move across agencies. Within the 15 grades there are 10 steps—time-based increases that determine a GS employee's rate of pay—providing transparency to the system by creating a clear, predictable process. This design feature emphasizes the GS system as a rank-in-position system that focuses on the position and the time spent in that position over the specific characteristics or performance of the incumbent, as a rank-in-person system would do. However, in practice the 15 grades and the 10 steps may actually inhibit the system's ability to optimize these attributes for reasons such as the following: The 15 grades require officials to make meaningful distinctions between such things as the nature and extent of the skills necessary for the work at each level, which may be more difficult in some white-collar occupations where those differences may not be clear cut. For example, officials must be able to determine how the work of a GS-12 accountant is different from that of a GS-13 accountant. But making clear distinctions between these grade levels may be nuanced, as the basis for them hinges on, for example, how agency officials determine the degree of complexity of the work or the most important duties of the position.
As a result, having 15 grade levels may make the system seem less transparent, as distinctions between the levels may not be precisely measured by the elements of the factor evaluation criteria. In such cases, agencies risk having two employees performing substantially equal work but receiving unequal pay, which decreases the degree to which the system can ensure internal equity. Further, having so many grades defined by statute makes it hard to review and revise the grades, thereby limiting the adaptability of the GS system. As the nature of work and the workforce changes, the system is constrained, since some revisions to the system would require legislative action. As we concluded in our 2003 report, for example, “…today’s knowledge-based organizations’ jobs require a much broader array of tasks that may cross over the narrow and rigid boundaries of job classifications. The federal job classification process not only delays the hiring process, but more importantly, the resulting job classifications and related pay might not match the actual duties of the job. This mismatch can hamper efforts to fill the positions with the right employees.” To address some of the issues we found in our 2003 report, among other things we recommended that OPM “study how to simplify, streamline, and reform the classification process.” In response, OPM published a report that outlined several strategic principles to modernize the civil service system while preserving the merit system principles. Over the years, agencies, either through the use of demonstration projects or congressionally authorized alternative personnel systems, have sought exceptions to the GS system to mitigate some of its limitations. Understanding the benefits and challenges of the design features tested in these alternatives can assist in understanding and evaluating options to improve the GS system.
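The rank-in-position mechanics described above—factor evaluation points determining a grade, and time-based steps within a grade determining pay—can be sketched as a simple lookup. All point thresholds and pay figures below are hypothetical placeholders for illustration, not OPM's published factor evaluation or pay tables.

```python
# Illustrative sketch of GS-style classification and pay lookup.
# Point ranges and pay rates are hypothetical, not OPM's actual tables.

# Hypothetical factor-evaluation point thresholds -> grade (descending)
GRADE_THRESHOLDS = [
    (3155, 13),
    (2755, 12),
    (2355, 11),
    (1855, 9),
    (1355, 7),
    (855, 5),
]

# Hypothetical step-1 base pay by grade, with a fixed per-step increase
BASE_PAY = {5: 31_000, 7: 39_000, 9: 47_000, 11: 57_000, 12: 69_000, 13: 82_000}
STEP_INCREMENT_PCT = 0.03  # ~3% per step, illustrative


def grade_for_points(points: int) -> int:
    """Map total factor-evaluation points to a grade (rank-in-position)."""
    for threshold, grade in GRADE_THRESHOLDS:
        if points >= threshold:
            return grade
    raise ValueError(f"points below lowest graded threshold: {points}")


def base_pay(grade: int, step: int) -> float:
    """Steps are time-based, not performance-based, in this model."""
    if not 1 <= step <= 10:
        raise ValueError("steps run 1 through 10")
    return round(BASE_PAY[grade] * (1 + STEP_INCREMENT_PCT) ** (step - 1), 2)


# Two positions with equal points land in the same grade and pay range,
# regardless of agency -- the internal-equity property of the design.
assert grade_for_points(2400) == grade_for_points(2500) == 11
```

The sketch also illustrates the adaptability constraint discussed above: because the grade structure is fixed (by statute, in the real system), accommodating new kinds of work means fitting it into the existing thresholds rather than revising them.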
By using lessons learned from the alternative systems and results from prior studies of the GS system to examine ways to make the GS system more consistent with the attributes of a modern, effective classification system, OPM could better position itself to help ensure that the system is keeping pace with the government’s evolving requirements. Further, stakeholders like the CHCO Council, unions, and others can provide insight into the design and implementation of these alternatives. According to the subject matter specialists and OPM and agency evaluations of alternative personnel systems, most of the alternative systems either retained GS occupations and grades but with higher pay rates to address difficulties in recruiting or retaining well-qualified employees, or implemented a broad-banded approach to pay and classification. Broadband systems can provide fewer occupations and fewer grade levels, which align with fewer but broader ranges of pay for an occupation. Broader bands that combine groups of GS-equivalent occupations into larger occupational families may have broadly defined occupational definitions, increasing agency flexibility to use employees according to their skills and competencies—thereby embodying an element of rank-in-person. For example, in 1996, the Department of Defense authorized the Civilian Acquisition Workforce Personnel demonstration project to provide employees and management flexibility concerning work assignments. Employees in acquisition occupations with similar characteristics were grouped together into three career paths and assigned broader bands that provided both a broader range of pay and a broader spectrum of duties. For example, employees were assigned to projects, tasks, or functions, but did not have their position descriptions changed.
On the other hand, broader occupational definitions can provide less transparency into the specific skills required of a position and can make it more challenging to monitor internal equity—as employees in a single broad band may vary widely in their qualifications and responsibilities. In addition, broad-banded systems can combine multiple GS grade equivalents into a smaller range of bands, often between three and five. Just as the bands represent a broader range of grade levels, broad-banded systems often align with broader ranges of pay than the GS system. Broad ranges of grades and pay can enable greater external equity by giving agencies more latitude in matching market pay rates. In addition, having fewer broad bands can increase the simplicity of the system because broad bands do not require such a precise analysis of the degree of difficulty of an occupation. While fewer occupational bands are designed to create a simpler system, as implemented this may also decrease transparency, because two employees in the same occupation may have a variety of different responsibilities, thus limiting cross-agency and government-wide comparisons. While broad banding increases agency flexibility to use employees according to their skills and competencies, in practice it may limit the transparency of the system, because employees and others (e.g., Congress and taxpayers) may be less certain of career-path options. Agencies with numerous, distinctly different occupations may not be able to combine occupations into a single occupational band, thereby limiting the simplicity of the system. The proportion of federal white-collar employees covered by alternative personnel systems increased from 6 percent in 1988 to 21 percent in 2013, as shown in figure 3. Some of the movement away from the GS system is a result of the implementation of several alternative personnel systems.
For example, the Financial Institutions Reform, Recovery and Enforcement Act of 1989 (FIRREA) granted certain federal financial regulatory agencies, which had been taken out of the GS classification system, the flexibility to establish their own compensation systems. FIRREA granted this flexibility in recognition that the GS system could impede these agencies’ ability to recruit and retain employees critical to meeting their organizational missions. Additionally, Congress directed most of the financial regulatory agencies to seek to maintain pay comparability and to consult with each other to limit the degree to which agencies are competing with each other for employees. During this time, six agencies (the Departments of Commerce, Defense, Energy, Transportation, and Treasury, and the National Science Foundation) saw the proportion of employees in alternative personnel systems increase by at least 10 percentage points. In 2013, more than 140,000 employees, or about 9 percent of the entire federal white-collar workforce, worked in alternative personnel systems at one of these six agencies. Despite the increase in the use of alternative personnel systems in selected agencies, we found that most agencies had a mix of both—some employees were in the GS system and some were in alternative systems. The literature we reviewed on alternative personnel systems suggests that agencies’ alternative systems were designed with their own purpose and goals, and agencies moved to alternative classification systems to attempt to offer market-based pay rates, pay-for-performance, and certain other flexibilities in an attempt to be more competitive in a labor market.
While 22 of the 24 Chief Financial Officers Act agencies had a majority of their employees in the GS system in 2013, we found that occupational families requiring employees with advanced degrees, and in particular occupations in science, technology, engineering, and math (STEM) fields and other technical areas, were more likely to be in alternative systems than other types of occupational families. Among occupational families, the 6 with the largest increase from GS to an alternative system were mostly concentrated in STEM occupations, as shown in figure 4. The Chief Financial Officers Act agencies are the executive branch agencies listed at 31 U.S.C. § 901(b). The demonstration project covering veterinarians at the U.S. Department of Agriculture, Food Safety and Inspection Service was terminated in February 2014. In 1996, an alternative personnel system was applied to Federal Aviation Administration air traffic controllers at the Department of Transportation. While these occupations are not STEM-related, the alternative personnel system implemented accounts for the increase in 1996. We estimated that in 2013, employees in alternative personnel systems were paid about 10 percent more, on average, than GS employees in identical occupations when controlling for factors such as tenure, location, and education in the 90 occupations we considered. There was a significant range among the occupations in the difference in pay between those in the GS and those in alternative systems—going both ways. For example, an employee in the medical officer occupational family in an alternative personnel system earned about 18 percent more than a similar employee working in the GS system. In a few cases, we found that employees working in the GS system were paid more than those working in alternative personnel systems after controlling for characteristics.
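A controlled pay comparison of the kind described above is typically estimated by comparing log pay between the two systems while holding employee characteristics fixed. The minimal sketch below uses invented, exactly matched records (tenure and education only, for brevity); because the two groups are identical on the controls, the average within-pair difference in log pay identifies the system premium, the same quantity a log-pay regression with controls would estimate. All data and field names are illustrative, not EHRI's.

```python
import math

# Synthetic matched records (illustrative): for each (tenure, education)
# profile there is one GS employee and one otherwise-identical employee
# in an alternative personnel system paid a 10% premium.
profiles = [(t, e) for t in (2, 5, 10, 20) for e in (12, 16, 18)]

def gs_pay(tenure, educ):
    # Hypothetical pay function: pay rises with tenure and education.
    return 40_000 * math.exp(0.02 * tenure + 0.05 * (educ - 12))

records = []
for tenure, educ in profiles:
    base = gs_pay(tenure, educ)
    records.append({"alt": 0, "tenure": tenure, "educ": educ, "pay": base})
    records.append({"alt": 1, "tenure": tenure, "educ": educ, "pay": base * 1.10})

# Average within-pair difference in log pay = the controlled premium.
diffs = []
for tenure, educ in profiles:
    pair = {r["alt"]: r["pay"] for r in records
            if r["tenure"] == tenure and r["educ"] == educ}
    diffs.append(math.log(pair[1]) - math.log(pair[0]))

premium_pct = (math.exp(sum(diffs) / len(diffs)) - 1) * 100
print(f"estimated premium: {premium_pct:.1f}%")  # prints 10.0%
```

In real data the groups are not exactly matched, so the estimate comes from a regression with controls rather than paired differences, and, as the report notes, the premium varies widely by occupation in both directions.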
Evaluating the current GS system and identifying whether changes to it are warranted should be informed by how effectively the system is currently administered. OPM is required by law to create and update occupational standards and oversee agencies’ implementation of the GS system. To carry out these responsibilities, OPM provides guidance to agencies through handbooks and policy manuals, training, individual meetings with agency officials, quarterly classification forums, and a dedicated e-mail address for specific questions and advice. The guidance to agencies includes, for example, Introduction to the Position Classification Standards, which provides basic definitions and a description of the principles and policies necessary to apply the classification standards, and The Classifier’s Handbook, which describes how to develop position descriptions and how to determine the occupational series and grade of positions. OPM also offers classification training through its HR Solutions and HR University. These classes train human resource specialists on basic and advanced classification techniques and are offered in a number of different media: in-person and self-paced online courses. OPM officials said they also assist agencies with classification issues through OPM’s quarterly classification forum and direct inquiries. Officials said common topics discussed at the forums relate to updates to occupational standards and challenges in classifying positions. OPM also uses these forums to update managers and human resource specialists about ongoing classification system-related projects. For example, at the March 2014 classifiers’ forum, which we observed, OPM officials discussed the establishment of a new occupational series and solicited feedback from the forum participants.
Participants also discussed difficulties they faced in using outdated standards to classify positions in a rapidly changing environment, such as when a position requires knowledge of a new technology (e.g., the use of social media in a public affairs position). OPM is responsible for establishing new—and revising existing—occupational standards after consulting with agencies; however, OPM does not know the extent to which it is meeting the needs of agencies with regard to updating occupational standards. From 2003 to 2014, OPM established 14 new occupational standards (new occupations in the federal government) and revised almost 20 percent of the occupational standards. However, there has been no published review or update of 124 occupations, roughly 30 percent of the total GS system occupations, since 1990. For example, the air traffic controller occupational standard has not been updated since June 1978, and the food safety inspector occupational standard has not been updated since June 1971. OPM officials said that they first address occupations identified in presidential memorandums. Three of the reviews from 2003 to 2014 were in response to a presidential memorandum. For example, in 2013, OPM established a formal records management occupation to define the roles, responsibilities, and skill sets for agency records management specialists to comply with the Presidential Memorandum on Managing Government Records. However, OPM does not systematically track and prioritize the remaining occupational standards. OPM officials told us the other occupational standards that they either created or updated were in response to working with agencies or other stakeholders to determine government-wide or specific agency needs, and to analysis of occupational trends. OPM officials were unable to provide us with documentation of these prioritization criteria. Further, OPM officials could not provide a near- or long-term prioritization for reviewing occupations.
OPM officials said that they do not track all of the agency requests they receive because, in some cases, an agency requested an occupational review from OPM, but after speaking with the agency and evaluating the need for a review, OPM officials determined that no review was necessary. OPM officials said that at times they conduct a study to establish an occupation but find that the work is appropriately addressed by an existing occupation and that a new occupation is not warranted. For example, OPM officials said they studied whether a new occupation was needed for positions related to implementing the GPRA Modernization Act of 2010. However, OPM officials said they determined that the position responsibilities fell under the current duties of management and program analysis occupations. Because OPM does not systematically track all of the agency requests it receives and its subsequent decisions on whether or not to review occupational standards, it is unclear whether OPM’s decisions are meeting agency needs. In our previous work looking at strategies that help agencies meet their missions, CHCOs said that if position descriptions and job announcements are based on outdated standards, they are less likely to reflect the specific skills needed, making it challenging for agencies to recruit and hire the right individuals. OPM officials also said that updating the standard for a specific occupation is a resource-intensive process that often takes 6 months to a year to complete. Officials also said reviewing an occupational family, which includes a number of individual occupations, can take multiple years to complete. Further, officials said that some occupations are more dynamic than others and may need to be reviewed more frequently. For example, an emerging occupation or one in the information technology field may need to be updated more frequently as the nature of the work is clarified or changes.
OPM does not know if it is keeping pace with agencies’ needs to meet the evolving nature of government work. Without a more strategic approach for systematically tracking and prioritizing updates to occupational standards, especially for more dynamic and emerging occupations, OPM does not have reasonable assurance that it is fulfilling its responsibilities to establish new or revise existing occupational standards based on the highest priorities. OPM has not reviewed agency classification programs since the 1980s. Therefore, OPM is not in the best position to know how well or how consistently agencies are complying with classification standards, policy, or guidance. OPM is required by law to review “from time to time” a number of positions in each federal agency to determine whether the agency is correctly placing positions in classes and grades according to OPM-published standards. While the law does not indicate a minimum number of positions and occupations that OPM should review, or specific time frames for review, OPM said it is not currently conducting this oversight at any federal agency. OPM is also authorized to revoke or suspend an agency’s classification authority if OPM finds that an agency is not following classification guidance; however, it has not done so in more than two decades. OPM officials said the agency stopped conducting oversight reviews in the 1980s because OPM determined that the reviews were ineffective at overseeing agency compliance with the occupational standards. Specifically, officials said the reviews were time consuming and agencies did not agree with how OPM selected the position descriptions to review. OPM officials said agencies frequently contested the results of the reviews, leading to another time- and resource-intensive review process for both OPM and the agencies.
Further, OPM officials said revoking an agency’s classification authority requires OPM to provide classification support to the agency, another time- and resource-intensive process. OPM officials said that in 2014 they had 6 full-time classification policy specialists tasked with maintaining the classification standards, compared to 16 in 2001 and many more in the 1980s. OPM officials said that lower staffing levels limit the agency’s ability to perform oversight. However, OPM, like all agencies, must make tradeoffs between competing demands with its limited resources during an era of constrained resources. According to OPM officials, the number of employees with classification experience has declined government-wide. In our 2013 high-risk update, we found that an OPM-led working group identified the human resources specialist series—which includes classifiers—as a mission-critical skills gap. The decrease in classifiers within the human resources specialist series can be traced back to the mid-1990s. OPM officials said they rely on agencies’ internal oversight programs to ensure proper application of the classification policies. However, OPM officials told us they do not review agency oversight efforts to ensure consistency, nor do they know which agencies, if any, have robust internal oversight mechanisms. Agencies are responsible for classifying positions consistent with OPM occupational standards and guidance. According to OPM officials, oversight functions for classification vary by agency. OPM officials told us that employees have the right to appeal classification decisions regarding their position if they believe that their position has not been correctly classified by an agency. OPM officials said the appeals published in the Digest of Significant Classification Decisions and Opinions provide interpretative guidance to agencies to assist them in applying standards.
According to OPM officials, employees commonly appealed the assigned grade level of a position when the job duties described in a position description were not consistent with classification standards. OPM officials said that they may require agencies to conduct classification consistency reviews as a result of a classification appeals decision. While reviewing classification appeals can give OPM a sense of an agency’s ability to classify individual positions, it does not address OPM’s responsibility to oversee the classification process. Without a strategic approach to oversight, OPM has limited assurance that agencies are correctly classifying positions according to the standards. This may be especially important as the number of occupations and agencies moving to alternative systems continues to increase. The GS system was designed to uphold the key merit system principle of equal pay for work of substantially equal value and other important goals. However, our work and that of other organizations have shown how the GS system has not kept pace with the government’s evolving requirements. Indeed, federal agencies have taken on additional roles and responsibilities, the missions they face have become increasingly complex, and the employees they need must possess a range of expertise and skills. While there is no one right answer or single way to design a classification system, the eight attributes of a modern, effective classification system that we identified—internal equity, external equity, transparency, flexibility, adaptability, simplicity, rank-in-position, and rank-in-person—provide policymakers and stakeholders the criteria to assess the many proposed options and alternatives. Collectively, they provide a useful framework for informing discussions of whether refinements to the current system or wholesale reforms are needed.
Indeed, the value placed on each of the attributes and how they are optimized will largely drive the design of any approach to classification. Going forward, OPM could improve its management and oversight of the GS system. OPM, like all agencies, must consider cost-effective ways to fulfill its responsibilities in an era of constrained resources. Using a more strategic approach to track and prioritize reviews of occupational standards—one that perhaps better reflects more rapidly evolving occupations—could help OPM better meet agencies’ evolving needs and the changing nature of government work. Further, a strategic approach to oversight could help OPM better fulfill its responsibility to ensure agencies are correctly implementing the classification process. To improve the classification system and to strengthen OPM’s management and oversight, we recommend that the Director of OPM take the following three actions:

Working through the CHCO Council, and in conjunction with key stakeholders such as the Office of Management and Budget, unions, and others, use prior studies and lessons learned from demonstration projects and alternative systems to examine ways to make the GS system’s design and implementation more consistent with the attributes of a modern, effective classification system. To the extent warranted, develop a legislative proposal for congressional consideration.

To develop cost-effective mechanisms to oversee agency implementation of the classification system as required by law:

Develop a strategy to systematically track and prioritize updates to occupational standards.

Develop a strategy that will enable OPM to more effectively and routinely monitor agencies’ implementation of classification standards.

We provided a draft of this product to the Director of OPM for comment. In written comments, which are reprinted in appendix III, OPM partially concurred with two of the three recommendations and did not concur with one.
OPM also provided technical comments on our draft report, which we incorporated as appropriate. OPM stated that it partially concurred with our recommendation to work with key stakeholders to use prior studies and lessons learned to examine ways to make the GS system more consistent with the attributes of a modern, effective classification system. OPM agreed that the system needs reform, and it noted several efforts to assist agencies with classification issues, including its interagency classification policy forum and partnering with agencies to address challenges related to specific occupational areas. While these examples of assisting agencies to better implement the GS system on a case-by-case basis are helpful, they do not fully address the fundamental challenges facing the GS system, which we and others have said is not meeting the needs of federal agencies. For example, as noted in this report, at the March 2014 interagency classification forum that we observed, OPM provided status reports on classification projects such as its study on pay equity and closing critical skills gaps. OPM also discussed its new procedures for collecting agency comments during occupational reviews. OPM stated that the studies and lessons learned of alternative personnel systems and demonstration projects focused on pay rather than classification. However, as we noted in the report, classification and pay are closely related, and we continue to believe that the lessons learned from these efforts should be used to examine ways to make the GS system more consistent with the attributes of a modern, effective classification system.
We are encouraged by OPM’s plan to leverage partnerships with key stakeholders to inform future strategies and action plans, and we continue to recommend that OPM use these efforts to examine ways to make the design and implementation of the GS system more in line with the attributes of a modern, effective classification system. OPM stated that it did not concur with our recommendation to develop a strategy to systematically track and prioritize updates to occupational standards. Specifically, OPM noted that occupational standards are updated in response to a systematic, prioritized process informed by working with agencies and other stakeholders and by analysis of occupational trends. However, OPM officials were unable to provide us with documentation of these efforts. As noted in our report, OPM has not published a review or update of 124 occupations, roughly 30 percent of the total number of occupations in the GS system, since 1990. Further, OPM officials could not provide a near- or long-term prioritization schedule for reviewing occupations. As a result, OPM cannot demonstrate whether it is keeping pace with agencies’ needs, nor does it have reasonable assurance that it is fulfilling its responsibilities to establish new, or revise existing, occupational standards based on the highest priorities. We continue to believe that OPM should take action to fully address this recommendation. OPM stated that it partially concurred with our recommendation to develop a strategy to more effectively and routinely monitor agencies’ implementation of classification standards. OPM stated that it will continue to leverage the classification appeals program to provide interpretative guidance to agencies to assist them in classifying positions. OPM also stated it will direct consistency reviews as appropriate; however, as we note in the report, OPM does not review agencies’ internal oversight efforts.
We are encouraged to see that OPM stated it will look for opportunities to further expand its monitoring and oversight activities. However, OPM did not state whether it would develop a strategy to assist it in doing so, as we recommended. We continue to believe that OPM should develop a strategy to fully address the recommendation, and we will continue to monitor OPM’s efforts in that regard. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Committee on Oversight and Government Reform. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2757 or goldenkoffr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Given the changes in the federal workforce and the ongoing attention to federal civil service skills gaps, there has been a growing interest in reexamining the federal classification system to ensure agencies are equipped with the tools to maintain or acquire the skills and talent needed. Our objectives were to assess (1) the attributes of a modern, effective classification system and how the General Schedule (GS) classification system compares with those attributes; (2) the trends in agencies and occupations covered under the GS system and the pay difference for selected alternative systems; and (3) the Office of Personnel Management’s (OPM) administration and oversight of the GS system. To assess the attributes of a modern, effective classification system, we held discussion groups with subject matter specialists and conducted a literature review.
Our discussion groups included over 25 subject matter specialists. We selected these subject matter specialists because they represented various perspectives on and experiences with federal classification. Specifically, we conducted sessions covering the following areas: (1) the public policy arena, with representatives such as the Partnership for Public Service, Project on Government Oversight, American Enterprise Institute, Booz Allen Hamilton, the Federal Salary Council, Human Resources Research Organization, and American Society for Public Administration; (2) federal employee organizations, with representatives such as the American Federation of Government Employees, National Treasury Employees Union, International Federation of Professional and Technical Engineers, and the National Federation of Federal Employees; (3) academia, with representatives from American University and Rutgers University; (4) an official from the Federal Managers Association; (5) former employees of OPM, including high-ranking officials and prior OPM directors; and (6) officials who were formerly in personnel positions at federal agencies that have employees on alternative personnel systems, such as the Departments of Defense, Energy, and Homeland Security. We collaboratively analyzed major themes through repeated review and discussion of detailed notes of the discussion groups to identify the attributes of a modern, effective classification system along with other themes for consideration. We provided the subject matter experts and OPM officials the opportunity to comment on the attributes, and modified the attributes or definitions as appropriate. For additional perspectives on the attributes of a modern, effective system, we reviewed relevant literature on the GS system published from 2000 to 2014 from OPM, academic journals, and public policy organizations. 
In addition, we reviewed relevant literature on selected alternative personnel systems, applicable federal laws and statutes pertaining to classification, and OPM's classification guidance, such as The Classifier's Handbook and The Introduction to the Position Classification Standards. Because OPM is not responsible for the oversight of alternative personnel systems, it does not have a listing of all such systems. Therefore, the universe of alternative personnel systems is unknown, and our analysis did not attempt to catalogue all of them. When we refer to alternative personnel systems in this section, we refer to systems that are broader than the alternative pay plans that we analyze in objective 2. We reviewed literature on several alternative personnel systems and demonstration projects, such as the Department of Commerce's Alternative Personnel System at the National Institute of Standards and Technology; the Department of Defense's Science and Technology Reinvention Laboratory demonstration project, Civilian Acquisition Workforce Personnel demonstration project, and Naval Demonstration project at China Lake; and the Department of Energy's National Nuclear Security Administration. We used this information, along with information from our discussion groups and the literature review, to compare the design features of the GS classification system and a notional alternative personnel system, and to determine the extent to which the GS system balances the attributes of a modern, effective classification system. To assess the trends in agencies and occupations covered under the GS system and the pay difference for alternative systems, we analyzed personnel data from OPM's Enterprise Human Resources Integration (EHRI) Statistical Data Mart for fiscal years 1988 through 2013. 
Our analysis included nonsenior executive, white-collar occupations in the 24 Chief Financial Officers (CFO) Act agencies, which represent the major departments, such as the Department of Defense, and most of the executive branch workforce. Our trend analysis begins with fiscal year 1988 because it was the first year for which data were available, and ends with 2013 because it was the most recent complete fiscal year of data available during our review. To determine trends in the proportion of the federal nonexecutive, white-collar workforce covered by alternative personnel systems, we analyzed for each fiscal year the proportion of those employees—government-wide and within CFO Act agencies and occupational families—covered under general schedule pay plans (GS, GL, GM) compared to all other pay plans. To analyze the pay differences between employees on the GS system and employees on alternative personnel systems, we performed a multivariate regression analysis on EHRI data for fiscal year 2013. Consistent with standard practice in studies of the determinants of earnings, we attempted to explain the differences by regressing the logarithm of annual adjusted pay on characteristics of federal workers. In the regression, we controlled for employees' years of federal experience, age, education, type of service (competitive or excepted), type of appointment (permanent or nonpermanent), veterans' preference, schedule (full- or part-time), and pay basis (hourly or annual). We also included a set of indicator variables for agency of employment, occupational series, and geography. By including these indicator variables, we controlled for the possibility that agencies might have higher pay rates and occupations might earn different rates of pay regardless of whether they were in the GS or an alternative system. 
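A minimal sketch of this kind of log-pay regression is shown below on synthetic data. The variable names, sample size, controls, and the simulated 10 percent premium are all invented for illustration; they are not GAO's actual EHRI fields, model specification, or estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Hypothetical worker characteristics (illustrative only)
experience = rng.uniform(0, 30, n)       # years of federal experience
agency = rng.integers(0, 5, n)           # agency of employment (categorical)
occupation = rng.integers(0, 10, n)      # occupational series (categorical)
alt_system = rng.integers(0, 2, n)       # 1 = alternative personnel system, 0 = GS

# Simulate log annual pay with a true 10 percent alternative-system premium
log_pay = (10.8 + 0.02 * experience
           + 0.05 * agency + 0.03 * occupation
           + np.log(1.10) * alt_system
           + rng.normal(0, 0.05, n))

def one_hot(codes):
    """Indicator (dummy) variables, dropping the first level to avoid collinearity."""
    m = np.zeros((len(codes), codes.max() + 1))
    m[np.arange(len(codes)), codes] = 1.0
    return m[:, 1:]

# Design matrix: intercept, continuous control, system flag, and indicators
X = np.column_stack([np.ones(n), experience, alt_system,
                     one_hot(agency), one_hot(occupation)])
beta, *_ = np.linalg.lstsq(X, log_pay, rcond=None)

# The coefficient on alt_system is the log-point differential;
# exponentiating converts it to a percentage pay difference.
premium = np.exp(beta[2]) - 1
print(f"estimated alternative-system pay premium: {premium:.1%}")
```

Because agency and occupation enter as indicator variables, the coefficient on the alternative-system flag isolates the within-agency, within-occupation pay difference, which is the logic behind the controls described above.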
We estimated this model for occupations with at least 2.5 percent employee representation in both systems, and which contained at least 0.125 percent of the federal government workforce, or about 90 of more than 400 possible occupations. We assessed the reliability of the EHRI data through electronic testing to identify missing data, out-of-range values, and logical inconsistencies. We also interviewed OPM officials about our use of the data and reviewed our prior work assessing the reliability of these data. On the basis of this assessment, we believe the EHRI data we used are sufficiently reliable for the purpose of this report. To assess how OPM administers the GS classification system and oversees agency implementation of the classification standards, we reviewed relevant statutes, agency policies and guidance, and interviewed OPM officials. Specifically, we reviewed Title 5, Chapter 51 of the U.S. Code, which establishes the role of OPM and the agencies in oversight of the GS classification system, among other things. We also reviewed OPM's guidance to agencies on how to classify positions and determine the proper grade, title, and category in which to place the positions. This guidance was included in documents such as The Introduction to the Position Classification Standards, The Classifier's Handbook, Handbook of Occupational Groups and Families, Qualification Standards, and the Digests of Significant Classification Decisions and Opinions. In addition, we conducted interviews with relevant OPM officials in the offices of Merit System Accountability and Compliance, and Employee Services to determine the actions they have taken to oversee agencies' implementation of the classification system, and we compared these actions to legislation outlining OPM's responsibilities. 
To understand the administration and oversight issues agencies encounter with regard to the classification system, we observed OPM's quarterly classification forum with human capital specialists and attended the Classification Refresher Training. We conducted this performance audit from May 2013 to July 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 5 U.S.C. §§ 5101-5115.

Appendix II: Interactive Graphic Information

Text on Graphic: General Schedule Classification System. The GS classification system assigns jobs to specific occupational groups, series, and rates of pay. The GS system also influences other human capital management practices and policies beyond the classification process.

Rollover Text on Graphic: The GS classification system contains grade levels, which determine base pay rate. This, in turn, establishes a cap on the amount of recruitment, retention, and relocation incentives. Recruitment. An agency may pay a recruitment incentive to a newly-hired employee if it has determined the position will be difficult to fill without an incentive. A recruitment-incentive payment generally may not exceed 25 percent of the employee's base pay rate. Retention. An agency may pay a current employee a retention incentive if it determines that an employee possesses unusually high or unique qualifications, or if its special need for the employee's services makes it essential to retain that employee. In such cases, the employee would be likely to leave the agency without an incentive. Retention incentives may also apply to a group or category of employees. 
Total retention bonuses generally may not exceed 25 percent of the base pay for an individual employee or 10 percent for a group or category of employees. Relocation. An agency may pay a relocation incentive to a current employee who must relocate—permanently or temporarily—to accept a position if it determines that position will be difficult to fill without the incentive. Total relocation-incentive payments generally may not exceed 25 percent of the employee's annual base pay. A position's classification includes a grade level, which—together with the step level—corresponds to base pay rate. These base pay rates: determine the amount of some payments (e.g., within-grade, quality-step, and salary increases resulting from promotion); define the dollar value of certain personnel actions (e.g., overtime and severance pay); are the base from which percentages are calculated for a variety of allowances (e.g., recruitment, retention, and relocation incentives); and are factors in formulas used to calculate various pay entitlements (e.g., retirement benefits). A position's classification influences the training and developmental opportunities available to an employee (e.g., classroom training, rotation to another agency or program). Training needs are the difference between the competencies required to perform the job and an employee's current capability. Managers use the position classification to establish performance expectations. The GS classification system informs the standards used to evaluate an employee's performance via position descriptions, which identify the employee's responsibilities and the manager's expectations. Performance rewards and corrective actions are also linked to the GS system. For example, superior performance can lead to a quality-step increase, while poor performance can result in a demotion to a lower grade. Also, base pay rates serve to cap performance-based awards, which may not exceed 20 percent of base pay. 
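The incentive caps described above are simple percentages of base pay; a short illustrative calculation (the salary figure is hypothetical, and the percentages are the general caps described above) shows how the grade- and step-determined base pay rate propagates into each cap:

```python
# Illustrative calculation of incentive caps keyed to base pay.
# The base pay figure is hypothetical; the percentages mirror the
# general caps described in the text.
base_pay = 75_000

caps = {
    "recruitment incentive (25% of base pay)": 0.25 * base_pay,
    "retention incentive, individual (25%)": 0.25 * base_pay,
    "retention incentive, group (10%)": 0.10 * base_pay,
    "relocation incentive (25%)": 0.25 * base_pay,
    "performance-based award (20%)": 0.20 * base_pay,
}
for name, cap in caps.items():
    print(f"{name}: ${cap:,.0f}")
```

This is why a position's classification, by fixing the base pay rate, indirectly bounds every one of these payments.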
The GS classification system establishes a road map for employees and determines how far they may advance in the same position as long as their performance is satisfactory. A promoted employee must be placed at the step of the higher grade that represents at least a two-step increase from the employee's previous grade. The system automatically delivers the two-step increase and the accompanying pay raise. The GS system sets promotion expectations for both the employee and the supervisor. Federal regulations require an employee to spend 52 weeks in a grade before becoming eligible for advancement resulting in a higher grade or higher rate of basic pay.

Rollover Text on Graphic: The GS classification system informs future planning and budgeting for salaries and expenses. Because a position's grade corresponds to an already established base pay rate, a budget manager can forecast the number and types of positions that can be filled or terminated should the budget environment change. The GS classification system is a work-management tool that allows agencies to align their work with the human resources needed to perform it. Agencies use the GS system to identify the mix of occupations and skills needed to fulfill their mission goals, conduct analyses to identify and close skills gaps, develop strategies to address human capital needs, and ensure that the agency is appropriately structured. Since 1949, the GS classification system has also been used for additional human resources management practices. For example, the GS system has served as an unofficial means of equating military rank with civilian grade, screening for certain employee benefits, and deciding who gets prime office space. The Federal Deposit Insurance Corporation uses GS classification standards to establish occupational groups and grades for its positions, but sets pay based on market surveys and negotiations with unions. 
The Central Intelligence Agency classifies most of its positions using the GS structure and a modified evaluation system to determine grades. Robert Goldenkoff, (202) 512-2757 or goldenkoffr@gao.gov. In addition to the individual named above, Chelsa Gurkin, Assistant Director; Trina Lewis, Assistant Director; Robyn Trotter, Analyst-in-Charge; Ulyana Panchishin; Jeffrey Schmerling; Ben Bolitzer; Jehan Chase; Sara Daleski; Karin Fangman; Steven Putansu; Robert Robinson; Rebecca Shea; and Stewart Small made major contributions to this report.

Almost since its inception in 1949, questions have been raised about the ability of the GS system—the federal government's classification system for defining and organizing federal positions—to keep pace with the evolving nature of government work. GAO was asked to review the GS classification system. This report examined: (1) the attributes of a modern, effective classification system and how the GS system compares with the modern systems' attributes; (2) trends in agencies and occupations covered by the GS system and the pay difference for selected alternative systems; and (3) OPM's administration and oversight of the GS system. GAO analyzed personnel data from 1988 to 2013, conducted a literature review, compared legislation to OPM procedures, and interviewed subject matter specialists and OPM officials, selected to represent public policy groups, government employee unions, and academia, among others. GAO's analysis of subject matter specialists' comments, related literature, and interviews with Office of Personnel Management (OPM) officials identified a number of important characteristics for a modern, effective classification system, which GAO consolidated into eight key attributes (see table below). 
GAO's analysis shows that in concept the current General Schedule (GS) classification system's design incorporates several key attributes, including internal and external equity, transparency, simplicity, and rank-in-position. However, as OPM implemented the system, the attributes of transparency, internal equity, simplicity, flexibility, and adaptability were reduced. This occurs, in part, because some attributes are at odds with one another, so fully achieving one attribute comes at the expense of another. Thus, OPM, working with its stakeholders, is challenged to determine how best to optimize each attribute. While the GS system's standardized set of 420 occupations, grouped into 23 occupational families, and statutorily defined 15-grade-level structure incorporate several key attributes, the system falls short in implementation. For example, the occupational standard for an information technology specialist clearly describes the routine duties, tasks, and experience required for the position. This kind of information is published for all 420 occupations, so all agencies are using the same, consistent standards when classifying positions—embodying the attributes of transparency and internal equity. However, in implementation, having numerous, narrowly defined occupational standards inhibits the system's ability to optimize these attributes. Specifically, classifying occupations and developing position descriptions in the GS system requires officials to maintain an understanding of the individual position and the nuances between similar occupations. Without this understanding, the transparency and internal equity of the system may be inhibited, as agency officials may not be classifying positions consistently, comparable employees may not be treated equitably, and the system may seem unpredictable. 
Several studies have concluded that the GS system was not meeting the needs of the modern federal workforce or supporting agency missions, and some studies suggested reductions in the number of occupational series and grade levels to help simplify the system. In addition, over the years agencies have sought exceptions to the GS system to mitigate some of its limitations either through demonstration projects or congressionally-authorized alternative personnel systems—often featuring a broadband approach that provided fewer, broader occupational groups and grade levels. By using lessons learned and the results from prior studies to examine ways to make the GS system more consistent with the attributes of a modern, effective classification system, OPM could better position itself to help ensure that the system is keeping pace with the government's evolving requirements. The proportion of federal employees covered under alternative personnel systems increased from 6 percent to 21 percent of the white-collar workforce from 1988 to 2013. Occupational families (i.e., groups of occupations based upon work performed) in the science, technology, engineering, and math (STEM) fields are more prevalent in alternative systems. Of the GS system's 23 occupational families, the 6 with the largest increase from GS to an alternative system were mostly concentrated in STEM occupations (See figure below). GAO estimated that, in 2013, employees in alternative systems were paid about 10 percent more, on average, than GS employees in identical occupations when controlling for factors such as tenure, location, and education in the 90 occupations GAO considered. OPM is responsible for establishing new—and revising existing—occupational standards after consulting with agencies. From 2003 to 2014, OPM established 14 new occupational standards and revised almost 20 percent of the occupational standards. However, there was no published review or update of 124 occupations since 1990. 
OPM officials said they first review occupations identified in presidential memorandums as needing review; however, OPM does not systematically track and prioritize the remaining occupational standards for review. Therefore, OPM has limited assurance that it is updating the highest-priority occupations. Further, OPM is required by law to oversee agencies' implementation of the GS system. However, OPM officials said OPM has not reviewed any agency's classification program since the 1980s because OPM leadership at the time concluded the reviews were ineffective and time consuming. As a result, OPM has limited assurance that agencies are correctly classifying positions according to standards. GAO recommends that the Director of OPM (1) work with stakeholders to examine ways to modernize the classification system, (2) develop a strategy to track and prioritize occupations for review and updates, and (3) develop cost-effective methods to ensure agencies are classifying positions correctly. OPM partially concurred with the first and third recommendations but did not concur with the second recommendation. OPM stated it already tracks and prioritizes occupations for updates. However, OPM did not provide documentation of its actions. GAO maintains that OPM should implement this action. 
The loss of control of sealed radiological sources can arise from their abandonment, misplacement, or theft. In such cases, there is a risk of either the inadvertent or intentional malevolent human exposure to radioactive materials in these sources. Figure 1 shows a graphic representation of the ways in which the loss of control of sealed radiological sources can occur. Since September 11, 2001, international and U.S. agencies have taken additional steps to increase the safety and security of radioactive materials, particularly sealed radiological sources. Between 2002 and 2003, the International Atomic Energy Agency (IAEA) held various meetings and conferences to discuss how the agency’s Code of Conduct on the Safety and Security of Radioactive Sources might be revised in light of new security concerns. One result of these gatherings was the development of a categorization scheme for sealed radiological sources in terms of the potential risks associated with their malevolent uses. The first three of the five source categories identified by IAEA, which are considered to pose the most significant risk to individuals, society, and the environment, are listed in an annex to the Code of Conduct. The Code of Conduct recommends that IAEA member states establish a national registry that tracks, at a minimum, the first two source categories. Table 1 contains a listing of the radionuclides and their curie levels that are presented in the IAEA Code of Conduct. In May 2003, a DOE/NRC interagency working group—which was formed to address security concerns over the radioactive materials that could be used in a radiological dispersal device—issued a report that, among other things, recommended that actions be taken to develop a national threat policy based on vulnerability assessments, a national source tracking system, and an integrated national strategy for disposing of unsecured sealed radiological sources. 
Following this DOE/NRC report, NRC adopted the nonlegally binding IAEA Code of Conduct as a basis for (1) determining which licensees may need additional protective measures for the sealed radiological sources in their possession and (2) defining the scope of a national source tracking system. NRC found that the curie thresholds for radionuclides in the sources identified by the DOE/NRC interagency working group were similar enough to the Code of Conduct categories to warrant adoption of the IAEA source categorization scheme to better align domestic and international efforts to increase the safety and security of sealed radiological sources. NRC and DOE have since engaged in separate efforts to (1) assess the vulnerability of facilities that contain sealed radiological sources within their jurisdictions, (2) promulgate new security measures, and (3) begin systematically tracking some of these sources. According to NRC officials, NRC has been working with the Agreement States since January 2002, and with licensees since September 2002, using a risk-informed approach to enhance the regulatory requirements applicable to high-risk radioactive material. In June 2003 and January 2004, NRC issued its first set of protective measures to large irradiators and to device manufacturers and distributors, respectively. In January 2004, NRC and the Agreement States began to consider the need for additional protective measures for other licensees. This process has involved several iterations of vulnerability assessments of licensee sites that have devices or use applications containing IAEA categories 1 and 2 sources, such as teletherapy, gamma knife, well-logging devices, and self-shielded irradiators. On September 6, 2005, NRC announced that over approximately the next 90 days, affected licensees will receive orders from the agency spelling out increased controls for certain radioactive materials. 
Over the same period, individual Agreement States will issue their licensees legally binding requirements essentially identical to NRC’s orders. Materials covered by these requirements will be consistent with the IAEA Code of Conduct. Regarding source tracking, in November 2003, NRC, with the assistance of the Agreement States, identified and initially surveyed approximately 2,600 entities licensed to possess IAEA categories 1 and 2 sources. The resulting interim inventory will supplement other information NRC intends to use in developing a national source tracking system. Regarding DOE efforts, DOE officials told us that various department offices have been involved in developing, reviewing, and issuing domestic and international guidance related to the security of sealed radiological sources. Moreover, DOE has established its own source tracking system—that is, the Radioactive Source Registry and Tracking System—which, among other things, includes the unwanted sealed radiological sources that DOE has recovered from licensees. In addition to securing and tracking sealed radiological sources, IAEA and NRC support the disposal of unwanted sources and other radioactive waste. IAEA contends that although waste may be safely stored for decades, as long as institutional controls are maintained, progress must be made toward permanent disposal. According to the Director General for Energy and Transport, European Commission, “the sources at greatest risk of being lost from regulatory control are disused (unwanted) sources held in local storage at the user’s premises waiting for final disposal or return to manufacturer.” In response to an international joint convention addressing spent nuclear fuel and radioactive waste management, IAEA set forth the elements of an effective national legal and organizational structure that would provide for the safe and secure management of radioactive waste by appropriate national authorities. 
One of the key indicators of such a structure is that “the amount of waste in storage awaiting disposal should depend only upon operational considerations…and should not include a backlog due to an inability (technical, financial, organizational, etc.) to reduce the backlog.” NRC also supports the disposal of low-level radioactive waste but has placed no time limits on storage, as long as the radioactive material is safe and secure. NRC contends that it is acceptable to allow some licensees to store a backlog of sources in instances where a disposal option for this waste is not available to them. In August 2005, the President signed into law the Energy Policy Act of 2005, which, among other things, addressed the safe disposal of GTCC waste and nuclear facility and materials security. The act requires DOE to prepare plans for the continued recovery of sealed radiological sources and to report on its efforts to develop a GTCC waste disposal site. Other provisions call for NRC to issue regulations establishing a mandatory tracking system for radiation sources in the United States and to chair a task force on radiation source protection and security. The task force, composed of NRC, DOE, and other federal agencies, in consultation with other groups, is to evaluate and provide recommendations relating to the security of radiation sources in the United States from potential terrorist threats, including acts of sabotage or theft or the use of radiation sources in a radiological dispersal device. DOE has placed increased emphasis on its source recovery project and has begun to assess disposal options for GTCC waste. DOE has realigned its source recovery project within NNSA to more effectively respond to both domestic and international threats posed by unwanted sealed radiological sources. 
Further, DOE has accelerated its recovery efforts, surpassing an earlier source recovery goal, and has made progress in resolving a storage space shortage at its facilities that has slowed the recovery of certain unwanted sealed radiological sources. Finally, DOE has begun preparing an environmental impact statement to assess possible disposal options for GTCC waste. However, difficulties in estimating current GTCC waste storage and future waste volumes, especially from sealed radiological sources, will complicate this effort. Further, DOE has not yet determined when a permanent GTCC waste disposal facility will be available. To better respond to the security threats posed by unwanted sealed radiological sources both within the United States and abroad, in October 2003, DOE realigned management responsibilities for its source recovery project from the Office of Environmental Management to NNSA. This realignment was, in part, a response to a recommendation to the Secretary of Energy that we made in our April 2003 report that the priority given to its Off-Site Source Recovery Project be commensurate with the threat posed by some unwanted sealed radiological sources. Subsequently, NNSA established the Nuclear and Radiological Threat Reduction Task Force, under the Office of Defense Nuclear Nonproliferation, to unite all of the department’s radiological threat reduction efforts. One of the principal missions of this task force is to identify; secure; and store, on an interim basis, radioactive materials that could be used as a radiological weapon. In May 2004, DOE announced the creation of the Global Threat Reduction Initiative, which further elevated the importance of the task force and DOE’s recovery of sealed radiological sources. This initiative was later institutionalized in the Office of Global Radiological Threat Reduction, with a domestic component, the U.S. 
Radiological Threat Reduction Program, and an international component, the International Radiological Threat Reduction Program. The Off-Site Source Recovery Project was subsumed under the U.S. Radiological Threat Reduction Program, but the program retained Los Alamos National Laboratory personnel to continue the source recovery effort. DOE accelerated the recovery of unwanted sealed radiological sources beginning in late 2002. As we reported in April 2003, DOE’s ability to meet planned recovery activities was largely facilitated by supplemental congressional funding and by the urging of NRC to accelerate recovery efforts in light of the events of September 11, 2001. In August 2002, the Congress provided an additional $10 million to DOE’s Off-Site Source Recovery Project to recover 5,000 unwanted sealed radiological sources over the following 18 months. Between October 1, 2002, and March 31, 2004, DOE recovered 5,529 of these sources, exceeding its recovery goal and more than doubling the number of sources previously recovered since 1996. As of June 7, 2005, DOE had recovered 10,806 of these sources. According to the source recovery project leader, the bulk of the remaining excess and unwanted sealed radiological sources in the United States should be recovered in the next 2 years. Table 2 contains a summary of DOE-recovered sealed radiological sources, by radionuclide, as of June 7, 2005. DOE has maintained its source recovery project efforts through annual and supplemental appropriations. In our April 2003 report, we recommended that the Secretary of Energy ensure that adequate resources be devoted to covering the costs of recovering and storing unwanted sealed radiological sources as quickly as possible. In a September 2004 congressional hearing, the director of DOE’s Office of Global Radiological Threat Reduction testified that the department had increased funding for the source recovery project and had committed funds for continuing these efforts. 
The director stated that the fiscal year 2004 program budget was $1.96 million, not including about $3.49 million that was added to the budget to respond in part to unexpected requests from NRC to recover sources of security concern. In fiscal year 2005, the source recovery project budget was increased to $5.6 million; for fiscal year 2006, DOE has requested $12.8 million, in part, to better fund the expanded scope of the U.S. Radiological Threat Reduction Program. The source recovery project leader has estimated an average recovery cost of $3,000 per source, on the basis of the initial 10,000 sources recovered, not including commercial disposal costs for certain sources. DOE plans to continue recovering unwanted sealed radiological sources, at least until a GTCC waste disposal site is available. In our April 2003 report, we recommended that the Secretary of Energy develop a plan to ensure the continued recovery and storage of unwanted sealed radiological sources until a GTCC waste disposal site is available. We reported that DOE used several sources of information and made three key assumptions when projecting the anticipated need to recover 14,309 sources between fiscal years 1999 and 2010. The assumptions were that (1) a permanent disposal site for the sources would be available by fiscal year 2007; (2) the Off-Site Source Recovery Project would continue to recover sources from certain holders of sources during a transition period from fiscal years 2007 through 2010; and (3) after fiscal year 2010, all unwanted sealed radiological sources would be shipped by their owners to a disposal site, and the Off- Site Source Recovery Project would cease operations. However, according to the manager of DOE’s U.S. 
Radiological Threat Reduction Program, these assumptions are no longer used by the department because the lack of a firm date for when a GTCC waste disposal site will be available means that DOE cannot determine when it will cease recovering unwanted sealed radiological sources from licensees. The Energy Policy Act of 2005 requires DOE to submit a plan to the Congress that ensures the continued recovery and storage of unwanted sealed radiological sources that pose a security threat until a permanent GTCC waste disposal facility is available. Further, this DOE manager told us that source recovery project personnel may still be needed to help some licensees to meet the packaging requirements of any future GTCC waste disposal facility. DOE has taken actions to address the storage space shortage that has prevented the recovery of certain types of unwanted sealed radiological sources. We reported in April 2003 that DOE had inadequate storage capacity to meet the higher security needs for recovered sealed radiological sources containing plutonium-239, and lacked a means for temporarily storing sources containing strontium-90 and cesium-137. We recommended, among other things, that the Secretary of Energy take immediate action to provide storage space for these sources at a secure DOE facility. According to the director of DOE’s Office of Global Radiological Threat Reduction, as of September 2004, DOE had developed sufficient storage space at the Los Alamos National Laboratory and the Nevada Test Site to recover more than 260 plutonium-239 sealed radiological sources registered by licensees for collection. 
According to the source recovery project team leader, DOE’s plan has been to recover over 100 remaining plutonium-239 registered sources, representing approximately 60 drums of waste; ship them to the Nevada Test Site; and then incrementally transfer them to the Los Alamos National Laboratory as space is made available from the shipment of the existing stored plutonium-239 sources to the Waste Isolation Pilot Plant (WIPP) in New Mexico. WIPP will only accept sources that are shipped from Los Alamos. Implementation of this plan, however, has been delayed pending final approvals to ship these sources between locations. Additional progress has been made in addressing the storage issues that relate to unwanted strontium-90, cesium-137, and some cobalt-60 sealed radiological sources. According to the source recovery project team leader, DOE has recovered a strontium-90 radioisotopic thermoelectric generator that was owned by the department and used as a remote power supply and disposed of the generator at the Nevada Test Site. DOE also has recovered six of these devices that were commercially owned and is storing them at the Los Alamos National Laboratory, pending approval for disposal as waste. Regarding the cesium-137 sealed radiological sources, the source recovery project has recycled 5 large cesium-137 irradiators to commercial firms. DOE has also contracted to recover the remaining 14 registered irradiators by the end of fiscal year 2005. Moreover, the team leader told us that the source recovery project plans to collect 221 cobalt-60 sources from a university this summer and to dispose of them at the Nevada Test Site as DOE-owned nuclear material. DOE has begun to take action to identify a suitable location for the disposal of GTCC waste, but producing useful estimates of the current storage and future generation of this waste will be difficult. 
We reported in April 2003 that DOE had not made progress toward providing for a permanent disposal facility for the nation’s GTCC waste, and that it was unlikely to provide such a facility by fiscal year 2007 because developing a disposal site for this waste was considered a low priority within the department. We recommended that the Secretary of Energy initiate a process to develop a permanent disposal facility for GTCC waste, including empowering an office to take on this responsibility. In September 2004, DOE took a first step in this direction by transferring responsibility for assessing disposal options for GTCC waste from its Office of Environment, Safety, and Health to its Office of Environmental Management. With this authority and the heightened need to take action, on May 11, 2005, the Office of Environmental Management published an advance notice in the Federal Register of its intent to prepare an environmental impact statement (EIS) for GTCC waste disposal. DOE now anticipates that the actual notice of intent to prepare the EIS will be issued in the fall of 2005, followed by public meetings to further define the scope of the EIS and to identify significant issues to be addressed. The DOE document manager for the EIS told us that after the notice of intent is issued, the process of preparing the EIS could take 2 years. The Energy Policy Act of 2005 requires that, within 1 year, DOE report to the Congress on the estimated costs and a proposed schedule to complete both the EIS and a record of decision for a permanent disposal facility for GTCC waste. Moreover, before DOE makes a final decision on the long-term disposal alternative or alternatives to be implemented, this act requires DOE to prepare a report to the Congress describing all alternatives under consideration, including recommendations for ensuring the safe disposal of GTCC waste, and then to await action by the Congress. 
Therefore, it is not possible for DOE to determine when a permanent disposal facility will be available for GTCC waste. In his September 2004 congressional testimony, the director of DOE’s Office of Global Radiological Threat Reduction stated that the EIS for GTCC waste disposal will include an analysis of waste inventories, long-term disposition alternatives, and resource requirements—as well as an assessment of legislative, regulatory, and licensing requirements. According to the director, the broad scope of the EIS should enable DOE to consider any new or existing site, facility, and disposal method for GTCC waste. Possible locations and disposal options include commercial, DOE, or other governmental facilities and private land. The disposal methods examined will range from deep geologic disposal to enhanced near-surface disposal, depending on the type of GTCC waste. In completing the EIS, DOE plans to inventory the GTCC waste in storage at licensee and DOE facilities as well as estimate the waste expected to be generated in the future. According to the DOE document manager for the EIS, the department will obtain information on nuclear utility and DOE GTCC waste that is currently in storage and will estimate future volumes over the next 30 to 50 years on the basis of a representative sample of some nuclear power plants that are being decommissioned, and from existing DOE databases. For nonutility licensees, the information on the storage and projected generation of GTCC waste will be more speculative. This official said that DOE has selected a contractor to update the estimates made in a 1994 DOE report that the department now considers outdated. DOE asked the contractor to begin with the methodology used in the 1994 report to estimate current GTCC waste storage and to project future generation of these wastes by nonutility licensees, rather than attempt to survey all NRC and Agreement State licensees that might possess these radioactive materials. 
Attempting to obtain information on nonutility licensee storage of GTCC waste that can be used to estimate future generation of GTCC waste from sealed radiological sources will be especially difficult. Of the three types of GTCC waste, the second largest volume behind activated metals is from sealed radiological sources. Uncertainties surround producing these estimates, such as (1) how to determine the quantities of unwanted sealed radiological sources in storage and (2) how much waste and what class of waste might be generated once these sources are packaged for disposal. One estimating problem is that there is currently no standard process by which licensees declare their sealed radiological sources as disused (unwanted). According to an NRC official, sealed radiological sources would not be considered waste, even if they are stored unused by a licensee, until the licensee has determined that they are no longer useful. In addition, sealed radiological sources that are no longer useful may be returned to the source manufacturer or allowed to decrease in radioactivity concentration while in storage so that they can be disposed as a lower level waste class. Because licensees typically do not declare their disused (unwanted) sources as waste until they are packaged and ready for shipment to a waste broker or disposal site, it will be difficult for DOE to project when this type of waste might need disposal in a GTCC waste disposal facility. Another uncertainty in estimating the future quantities of GTCC waste is that the volume of waste generated by a small sealed radiological source is determined by the size of its disposal container and not by the size of the source or number of sources in the container. Disused sources are typically placed in 30-gallon or 55-gallon disposal drums. The number of sources put into one drum and the packing materials used are affected by the acceptance criteria of the disposal site. 
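The decay-in-storage option mentioned above is governed by simple exponential decay. A minimal sketch in Python, assuming cesium-137's published half-life of about 30.17 years (the activity figures and target level below are illustrative, not figures from this report):

```python
import math

CS137_HALF_LIFE_YEARS = 30.17  # published half-life of cesium-137

def activity_after(initial_curies: float, years: float,
                   half_life: float = CS137_HALF_LIFE_YEARS) -> float:
    """Remaining activity (curies) after `years` in storage."""
    return initial_curies * 2 ** (-years / half_life)

def years_to_decay_to(initial_curies: float, target_curies: float,
                      half_life: float = CS137_HALF_LIFE_YEARS) -> float:
    """Storage time (years) for a source to decay to `target_curies`."""
    return half_life * math.log2(initial_curies / target_curies)

# Illustrative: a 5-curie cesium-137 source needs one full half-life
# (about 30 years) just to fall to 2.5 curies.
print(activity_after(5.0, 30.17))  # 2.5
```

For a long-lived radionuclide such as cesium-137, decaying across even one waste-class boundary can take decades, which helps explain why licensees may store disused sources for long periods before declaring them waste.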
Figure 2 shows a sequence of photographs depicting source recovery project personnel removing a 5-curie, plutonium-239/beryllium source and repackaging it into a 55-gallon drum especially designed to meet the acceptance criteria at WIPP. Source recovery project personnel told us that these drums cost between $5,000 and $6,000 each. The sealed radiological source held in pliers in the first photograph is clearly a fraction of the size of the 55-gallon disposal drum. Figures 3 through 5 show photographs that illustrate the scale of sealed radiological sources relative to their devices as well as how the sources or their devices are packaged into more traditional disposal drums. Yet another uncertainty in projecting the future volume of GTCC waste from sealed radiological sources is that different types of radionuclides can comprise the sources used in a device, and, depending on the radionuclide used, the age of the source, and how the source is packaged for disposal, the device can fall into different classes of waste. For example, as shown in table 4 in appendix II, six different radionuclides can be used as the source in an industrial radiography device. Further, the sources that can be used in this industrial radiography device can produce non-GTCC and GTCC waste, depending in part on how much radioactivity remains in the source when it is disposed of and how the source is packaged. For example, a 5-curie, cesium-137 sealed radiological source that is used in a device might fall into a GTCC waste class when packaged if little of the source is depleted; but once it becomes unwanted and then packaged in a 55-gallon disposal drum with nonradioactive filler material, it might fall into the non-GTCC waste class because its radioactivity, as averaged over the entire volume of the drum, would be lower. DOE has expanded its source recovery efforts to include all sealed radiological sources that could present a threat, a change that could increase project expenditures. 
DOE’s source recovery project now includes, among other activities, the recovery and commercial disposal of non-GTCC waste from unwanted sealed radiological sources that pose a health, safety, security, or environmental threat. The recovery and commercial disposal of more of these types of sealed radiological sources from licensees that cannot afford to dispose of them today, in addition to the recovery of higher radioactive sources, is likely to increase DOE project expenditures. Further, DOE may need to recover even more non-GTCC waste from unwanted sealed radiological sources in the future if licensees in many states lose access to the only commercial low-level radioactive waste disposal site where they can currently dispose of higher radioactive non-GTCC waste (classes B and C waste). This increased recovery of non-GTCC waste from sealed radiological sources will place greater demands on source recovery project expenditures because of impediments to DOE’s recouping recovery costs from licensees that could otherwise cover their source disposal costs if there were disposal availability. In the absence of access to commercial disposal, DOE anticipates the need to indefinitely store the recovered non-GTCC waste until a commercial disposal option becomes available. DOE’s current policy does not include using DOE sites to permanently dispose of this waste because, among other reasons, it does not want to undermine the authority the Congress gave to the states to provide disposal availability for non-GTCC waste. The expanded scope of the source recovery project now includes, among other activities, the collection and commercial disposal of non-GTCC waste from unwanted sealed radiological sources that pose a health, safety, security, or environmental threat. Responsibility for the safe management and disposal of these radioactive materials is normally held by those entities that NRC or the Agreement States license to possess and use these materials. 
However, in some cases, licensees are unable to (1) ensure the safe and secure use of these materials or (2) cover the disposal costs of their unwanted sealed radiological sources. For example, according to the source recovery project leader, at the request of NRC, DOE commercially disposed of its first significant quantities of non-GTCC waste during fiscal year 2004. Source recovery project personnel collected 443 unwanted sealed radiological sources (containing cesium-137, cobalt-60, or radium-226) from a bankrupt firm in Pennsylvania and commercially disposed of most of them at the Barnwell, South Carolina, disposal site. In commenting on a draft of this report, DOE provided examples of other non-GTCC waste from sealed radiological sources that it had recovered. Under the expanded scope of the source recovery project, DOE has developed a priority scheme for deciding which sources to recover and when to do so. According to the director of DOE’s Office of Global Radiological Threat Reduction, DOE has been working with the Department of Homeland Security and other agencies, in addition to NRC, to determine the sources that should receive the highest priority for recovery, including those that when disposed of would not be considered GTCC waste. In addition, the manager of DOE’s U.S. Radiological Threat Reduction Program told us that DOE and NRC are also in the process of revising the 1999 memorandum of understanding that defined the responsibilities of each agency with respect to the problem of unwanted and uncontrolled sealed radiological sources to better reflect current DOE recovery practices. The source recovery project leader provided us with an initial priority ranking scheme for recovering sources that is used by DOE, as well as some other factors that DOE considers. The initial ranking involves combining three factors into an overall risk ranking for each licensee site that contains sealed radiological sources. 
These factors include the level of security over the source at a licensee site, the total quantity of radioactive material present, and the quantity of radioactive material in any single sealed radiological source at a licensee site. Other factors that DOE considers when prioritizing sources for recovery include the opportunity to recover additional unwanted sealed radiological sources that source recovery personnel may discover during their visit to a licensee site. For example, the source recovery project leader told us that if team members come across vulnerable sealed radiological sources of lesser radioactivity at a location where they are recovering higher radioactive sources, they will collect them as well. DOE has already incurred additional expenses to recover and commercially dispose of non-GTCC waste from unwanted sealed radiological sources. It cost DOE approximately $581,000 to recover hundreds of these sources that had accumulated at a bankrupt firm in Pennsylvania and to commercially dispose of them. The Barnwell disposal site received 15 of the 16 55-gallon and 30-gallon drums of this non-GTCC waste and charged DOE a $1,650 per-cubic-foot disposal fee. For example, the disposal fee and container cost for just one 55-gallon disposal drum holding 130 of the recovered cesium-137 sealed radiological sources cost DOE about $21,000, not including labor, transport, and other costs. Additional DOE recovery of non-GTCC waste from licensees that currently need to store their sources and other waste because they do not want to or cannot pay these high disposal fees may be necessary in the future. According to the deputy director of DOE’s Office of Global Radiological Threat Reduction, because of the cost involved, encouraging those licensees that have sealed radiological sources to dispose of them properly has proven difficult, particularly with entities that only have a few sources. 
NRC can impose fines as high as three times the cost of commercial disposal on a licensee that fails to properly dispose of radioactive material. However, a senior NRC official has publicly acknowledged the difficulty that licensees with only a few unwanted sources have in finding a cost-effective means for disposing of them. DOE is currently impeded from recouping more of its recovery and storage costs for GTCC waste as well as any non-GTCC wastes that it may need to recover. Regarding GTCC waste, since DOE issued its 1987 report on how it planned to address its responsibilities under the Low-Level Radioactive Waste Policy Act of 1980, as amended, no specific action has been taken to identify a different method of funding the source recovery project, other than through the appropriations process. According to the manager of DOE’s U.S. Radiological Threat Reduction Program, DOE has been unable to establish a standard fee for recovering unwanted sealed radiological sources from licensees because existing cost recovery mechanisms require the department to know both the number of years that these sources will be stored and the cost of their disposal before setting a fee, which is not currently possible. Regarding non-GTCC waste, the sources recovered to date were primarily from a commercial firm that had gone bankrupt and did not have the necessary funds to cover the cost of disposing of its sources. DOE had to cover the recovery and commercial disposal costs because there was no other source of funding. One of the reporting requirements for the task force on radiation source protection and security, required under the Energy Policy Act of 2005, is to provide recommendations for appropriate regulatory and legislative changes for the establishment of, or modification to, a national system (including user fees and other methods) to provide for the proper disposal of sealed radiological sources under the act. 
In the future, DOE may have to recover more non-GTCC waste from sealed radiological sources if licensees are forced to store their unwanted sources because they have no access to a disposal site. As we reported in June 2004, if South Carolina follows through with plans to restrict access to the Barnwell disposal site to only the three member states of the Atlantic Compact by mid-2008, and if no disposal alternative for the more highly radioactive non-GTCC waste (classes B and C waste) is developed, licensees in 36 states that are presently allowed to use this site will need to store more of their unwanted radioactive materials. Although NRC does not place time limits on the storage of radioactive materials as long as they are safe and secure, greater quantities and longer periods of storage, particularly of unwanted sealed radiological sources, will likely increase safety and security risks. In January 2002, NRC sent a letter to DOE requesting that the source recovery project take actions to recover registered unwanted sealed radiological sources because the possession and storage of these sources with no GTCC waste disposal outlet represented a potential health and safety threat. Regarding non-GTCC waste from unwanted sealed radiological sources, the manager of DOE’s U.S. Radiological Threat Reduction Program told us that DOE will likely need to increase the recovery of these sources if licensees have no commercial disposal option for this waste. Domestic and international experts contend that the lack of disposal availability for unwanted sealed radiological sources can increase their risk of abandonment, misplacement, and theft. For example, the Health Physics Society stated that the lack of a GTCC and non-GTCC waste disposal option for unwanted sealed radiological sources that pose security and public health concerns will continue to increase the number of orphan sources. 
Further, IAEA has reported that disused (unwanted) sources represent the largest pool of vulnerable and potential orphan sources. If DOE were to begin recovering more non-GTCC waste from unwanted sealed radiological sources, even greater demands would be placed on DOE recovery project resources if DOE cannot recoup some of its recovery costs from licensees. While DOE is justified in covering the recovery and commercial disposal costs of the non-GTCC waste it has collected from licensees that could not afford to dispose of it themselves, the department may be able to recoup some of its costs in the future from licensees that could afford the cost of disposal if it were commercially available. It is difficult to estimate the budgetary impact on DOE if there were a need to increase the recovery of unwanted sealed radiological sources from licensees that have no access to a commercial disposal site for their higher radioactive non-GTCC waste. One reason for this situation is the lack of information on the number of sources in storage that might need DOE recovery. As we reported in August 2003, there is no national database on the quantities of sealed radiological sources in storage. Moreover, there is no national database that tracks the storage of any low-level radioactive waste. Given the lack of national data on how much waste is generated annually, the disposal data from low-level radioactive waste disposal operators can only provide an indication of the quantity of disused or unwanted sealed radiological sources and other waste that might need storage each year in the absence of disposal availability. Nevertheless, we found that between 2001 and 2004, the Barnwell disposal site disposed of, on average, 31,150 cubic feet of the higher radioactive non-GTCC waste (classes B and C waste), of which about 588 cubic feet, or about 2 percent of the total, was derived from disused sealed radiological sources. 
More than half of the sealed radiological source waste (about 56 percent) came from private industry, followed by government agencies (about 25 percent), colleges and universities (about 11 percent), and medical facilities (about 4 percent). If DOE recovered, took title of, and commercially disposed of all non-GTCC waste from sealed radiological sources that are sent to the Barnwell disposal site annually, it might cost DOE approximately $1 million a year just to cover the disposal cost at the current $1,700 per-cubic-foot disposal fee. However, until DOE has better information on the number of sources that may need to be recovered and future disposal costs, including recovery, packaging, transport, and other costs, it will be difficult to accurately estimate future costs of recovering non-GTCC waste. If licensees lose access to commercial disposal sites for their higher radioactive non-GTCC waste in the future, DOE will likely have to recover more of this waste from unwanted sealed radiological sources, which could heighten interest in using DOE sites for disposal of these wastes. The manager of DOE’s U.S. Radiological Threat Reduction Program told us that although DOE is not legally prohibited from permanently disposing of, at DOE sites, the recovered non-GTCC waste for which it has taken title, it would not want to do so. This DOE manager said that on the basis of current policy, DOE would indefinitely store any recovered non-GTCC waste from unwanted sealed radiological sources at its sites until commercial disposal is available or DOE receives other congressional guidance. The DOE manager provided three reasons to justify this current policy. First, DOE does not want to undermine the responsibility given by the Congress to the states to provide disposal availability for non-GTCC waste under the Low-Level Radioactive Waste Policy Act of 1980, as amended. Second, DOE is not allowed to compete with commercial waste companies for the disposal of non-GTCC waste. 
Finally, DOE does not want to dispose of the relatively small quantity of recovered non-GTCC waste at its sites because this might set a precedent for disposing of all non-GTCC waste that does not have a commercial disposal pathway. However, in lieu of storing this non-GTCC waste, this DOE manager suggested that DOE could, under emergency access provisions, approach the regulatory bodies that have jurisdiction over commercial disposal sites to obtain disposal access. Despite DOE’s current policy regarding what it would do in the future with recovered non-GTCC waste if there were no commercial disposal availability, there have been calls to consider using DOE sites for the disposal of this waste. Our June 2004 report discussed some issues that would need to be resolved to use DOE sites for this waste, including the feasibility of DOE’s accepting all non-GTCC waste, the responsibility for paying for the disposal of this waste, and the licensing and regulatory responsibilities covering its disposal. DOE lacks information that would assist in its efforts to identify and recover unwanted sealed radiological sources that pose a safety or security risk. Although DOE maintains an inventory of recovered sealed radiological sources and sources registered for future recovery, neither DOE nor any other government agency has centrally tracked the number of sources in the United States or the number of unwanted sources in storage at licensee sites across the country. Under the current regulatory structure, NRC and Agreement states only know the authorized uses and maximum quantities allowed for each licensee, not what they actually possess. As a result, DOE has no means of determining the actual number of sealed radiological sources that may require recovery in the future. NRC is currently developing a national source tracking system to, among other things, identify the possession and movement of some high-risk sealed radiological sources. 
However, as presently designed, this tracking system lacks information that DOE might find useful in planning and budgeting for the recovery of unwanted sealed radiological sources and their eventual disposal. The source recovery project maintains its own inventory of sealed radiological sources that have been recovered and are in storage, and those that licensees or NRC have asked DOE to recover. According to the source recovery project team leader, the accuracy of the information on a sealed radiological source in this inventory improves from when a licensee initially registers the source; to when source recovery personnel have follow-up conversations with the licensee to clarify the recovery request for the source; to when the source recovery project team actually visits the site to physically inspect the source, record its serial number, and package it for disposal. The source recovery project team leader told us that the information on sources initially registered is less accurate because the licensee may not know anything about their source, or a licensee might inadvertently provide incorrect information about the source, such as its radionuclide and radioactivity concentration. Once recovered, the information in the source recovery project inventory includes the type of radionuclide, serial number, size, radioactivity concentration, and method of packaging for storage or disposal. The source recovery project team leader told us that this inventory is designed to assist in administrative planning, scheduling and prioritizing recoveries, tracking shipments, and documenting storage or disposal locations. Information on the recovered sealed radiological sources in DOE’s possession is then integrated into DOE’s Radiological Source Registry and Tracking System. This departmentwide inventory system was established in November 2003, in response to a recommendation of the DOE/NRC Interagency Working Group on Radiological Dispersal Devices. 
The tracking system is managed by DOE’s Office of Plutonium, Uranium, and Special Materials Inventory and maintained at Sandia National Laboratories. DOE designed its system to help (1) monitor the safety and security of all DOE-owned sealed radiological sources that meet a certain threshold size and radioactivity concentration and (2) provide information on the potential threat they pose. In addition to descriptive information on the type of sealed radioactive source and its location within the DOE complex, this tracking system also records data on the source’s status— such as whether the source is in active use; is inaccessible and, thus, not being used; is in storage for potential future use; or is packaged and awaiting final disposal. Because neither DOE nor any other government agency has centrally tracked the number of sealed radiological sources in the United States at any given time or the number of unwanted sources held by NRC and Agreement States licensees, DOE has few available means of estimating the quantities of sources that may need recovery in the future. Under the current regulatory structure, NRC and the Agreement States only have information on the authorized uses and maximum quantities of radioactive materials licensees are allowed to possess, although each licensee is responsible for maintaining inventories of its individual sources. Further, the source recovery project inventory contains only information that licensees have voluntarily provided to DOE on their unwanted sealed radiological sources and more limited voluntary registration of sources that may require recovery in the future. The information on sealed radiological sources that NRC provides to DOE for scheduling recovery only captures those sources that NRC or Agreement States are aware of that need recovery and does not include sources that licensees may possess that are unwanted. 
Consequently, neither of these methods for obtaining information provides the kind of data that DOE can use to estimate future quantities of sealed radiological sources that may need recovery. According to the manager of DOE’s U.S. Radiological Threat Reduction Program, because the source recovery project has no information on the number of sources in current use or in storage, DOE is limited in its ability to provide useful estimates of the quantities of sealed radiological sources that DOE might need to recover in the future. NRC plans to develop a national source tracking system that will register certain sealed radiological sources possessed by licensees and/or DOE. In November 2003, NRC, in cooperation with the Agreement States, contacted 2,600 entities licensed to possess IAEA categories 1 and 2 sources in an effort to capture for the first time national data on the actual type, quantities, and current ownership of these sources. Over 99 percent of these licensees voluntarily reported information back to NRC, but only about one-half of them reported that they possessed these sources. NRC has already conducted a follow-up survey of a portion of these licensees, and other surveys are planned leading up to an implementation of the national source tracking system in 2007. Although licensees are requested to volunteer information for these interim surveys, NRC issued a proposed rule in July 2005 that would, among other things, require licensees to provide an inventory of their sealed radiological sources; annually verify and reconcile their actual inventory with the information registered in the system; and report certain transactions, such as the date of manufacture, transfer, or disposal of their sealed radiological sources. 
The Energy Policy Act of 2005 requires that NRC issue regulations, within 1 year, establishing this mandatory tracking system that shall be coordinated with systems established by the Department of Transportation to track the shipment of radiation sources. Such a tracking system must, among other things, provide for the reporting of required information through a secure Internet connection. As presently designed, NRC’s national source tracking system will inventory and monitor primarily IAEA categories 1 and 2 sources—the minimum required under the 2004 IAEA Code of Conduct—despite support from IAEA and DOE for tracking additional source categories and other information. In its July 2003 technical document detailing the methodology behind the IAEA source categorization scheme, IAEA suggested that member states consider the combined radioactivity of aggregated sealed radiological sources in one location for the purpose of categorizing these sources on the basis of their potential to cause harm to human health. Using this methodology, the accumulation of enough individual IAEA category 3 sources in close proximity to one another would yield concentrations of radioactive material equivalent to a single IAEA category 2 source. For example, storing 15 well-logging devices in close proximity (each well-logging device typically contains a 2-curie, cesium-137 source, which is an IAEA category 3 source) would be equivalent to having a 30-curie, cesium-137 source in this location, which is an IAEA category 2 source. Almost all of the unwanted sealed radiological sources recovered by DOE would fall into categories below IAEA categories 1 and 2 and, therefore, would not have been registered in the national source tracking system as presently designed. According to the manager of DOE’s U.S. 
Radiological Threat Reduction Program, over 90 percent of the sites where DOE has recovered sealed radiological sources had quantities of lesser radioactive sources that when aggregated were equivalent to an individual IAEA category 2 source and, thus, posed enough of a safety and security risk to warrant their recovery. This recovery has been justified despite the fact that the total curie level of all the recovered IAEA category 3 sources was only about 15 percent of the curie level of the relatively few recovered categories 1 and 2 sources, and without regard to whether the sources might or might not have been located in close proximity at each of the licensee sites. In a 2004 technical document, IAEA suggested that it would be beneficial from both a safety and security viewpoint for all disused or unwanted sealed radiological sources to be identified and to undergo proper disposition. According to IAEA, the quality of a country’s national registry of radioactive sources will be a prime indicator of the probability of there being vulnerable and orphan sources. History has shown that many accidents involving orphan sources come about because sources that are no longer in use are eventually forgotten, with subsequent loss of control years later. Table 3 shows a breakdown of the sealed radiological sources that DOE has recovered, by their IAEA source category, as of June 7, 2005. As shown in the table, about 98.5 percent of these sources fall below category 2 and, therefore, would not have been tracked in the proposed national source tracking system. In the proposed rule to implement a national source tracking system, NRC states that it does not plan to include IAEA category 3 sources in the registry at this time, but that it may consider doing so in the future because licensees possessing a large quantity of IAEA category 3 sources could present a security concern. 
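The aggregation arithmetic described above (15 well-logging devices, each holding a 2-curie cesium-137 source, together equivalent to a single 30-curie, category 2 source) can be sketched in a few lines. The activity-ratio thresholds follow the general IAEA A/D categorization scheme, but the D value used here is a purely illustrative assumption chosen so that the report's example works out; it is not an official IAEA figure.

```python
# Illustrative sketch of IAEA-style source categorization by activity
# ratio (A/D).  The D value below is a hypothetical placeholder chosen so
# the report's example works out; it is NOT the official IAEA value.
D_CS137_CURIES = 1.0  # assumed "dangerous quantity" for Cs-137 (illustrative)

def iaea_category(activity_curies, d_value):
    """Return the IAEA source category implied by the A/D ratio."""
    ratio = activity_curies / d_value
    if ratio >= 1000:
        return 1
    if ratio >= 10:
        return 2
    if ratio >= 1:
        return 3
    return 4  # categories 4 and 5 are not distinguished here

# A single well-logging source: 2 curies of Cs-137 -> category 3
single = iaea_category(2.0, D_CS137_CURIES)

# 15 such devices stored in close proximity aggregate to 30 curies,
# which is equivalent to a single category 2 source.
aggregate = iaea_category(15 * 2.0, D_CS137_CURIES)
```

Under these assumptions, `single` evaluates to 3 and `aggregate` to 2, matching the report's example of how individually lower-risk sources can aggregate to a category 2 quantity in one location.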
Although NRC contends that reliable tracking of the accumulation of IAEA category 3 sources will be difficult and might pose a potential burden on licensees, NRC is seeking comments on the inclusion of these sources in its tracking system. NRC stated in its notice of intent that one way to address the accumulation of sources of concern would be to lower the threshold for source tracking to include all IAEA category 3 sources, since a source level tracking system cannot include aggregation of sources because the sources may move in and out of the tracking system with the change of ownership. However, in commenting on a draft of this report, NRC stated that in lieu of the inclusion of category 3 sources in the proposed national source tracking system at this time, its new security orders for licensees possessing IAEA categories 1 and 2 sources do, where appropriate, address aggregation of any sources below these two categories, such that the net result could reach the category 2 threshold in a given physical location. Nevertheless, it does not appear that these new security orders would apply to licensees that do not possess IAEA categories 1 and 2 sources but still have large accumulations of IAEA category 3 or lesser source categories. The national source tracking system, as designed, also would not collect other information that DOE might find useful in budgeting and planning for source recovery and future disposal needs for GTCC waste. Recent IAEA technical guidance states that it is important to capture information on the frequency of use of the source in a national registry of sealed radiological sources—for example, whether the source is actually being used or whether it is being stored securely. DOE already inventories such information on sources in its possession in its Radioactive Source Registry and Tracking System. 
DOE initially requested that NRC collect information on licensees’ disposal plans in its interim survey, including whether the licensees were planning to have DOE recover their sources. NRC included this question in its first survey of licensees but has decided to drop it in subsequent surveys and in the design of the tracking system, because of the low response rate to this question and because its security regulations currently do not require licensees to report this information. However, NRC is contemplating adding a feature to the design of its anticipated national source tracking system that would capture information on the long-term storage of some sealed radiological sources, although it would be voluntary for licensees to provide this information. The Energy Policy Act of 2005 requires NRC to chair an interagency task force on radiation source protection and security. Within 1 year of its creation, the task force is to prepare a report to the Congress and the President providing recommendations for a list of additional radiation sources that should be required to be secured as well as any necessary modifications to the national source tracking system. In addition, the task force is charged with making recommendations in this report regarding the creation of, or modification to, procedures for improving, among other things, the security of stored sources, including periodic audits or investigations by NRC to ensure that these sources are properly secured and can be fully accounted for. DOE and NRC have important roles and responsibilities in ensuring the safety and security of sealed radiological sources. The recently enacted Energy Policy Act of 2005, among other things, adds new requirements for both agencies, including the creation of a task force on radiation source protection and security, chaired by NRC, and continued recovery by DOE of unwanted sources until it provides a disposal site for GTCC waste. 
The responsibilities for DOE may expand further if licensees in most states lose access to the only disposal site for their higher radioactive non-GTCC waste by mid-2008. Specifically:

- Loss of access would increase the quantities of non-GTCC waste in storage and could necessitate more recovery of this waste by DOE. This, in turn, might lead to increased costs for DOE’s source recovery efforts. However, how much additional funding would be necessary for this effort is difficult to ascertain for several reasons, including uncertainties regarding the quantity of non-GTCC waste that might need collection. DOE will incur these increased recovery and disposal costs unless other mechanisms are adopted to recoup them, especially from those licensees that could cover them if commercial disposal were available.
- The increasing quantities of non-GTCC waste that will not have a commercial disposal pathway could heighten interest in using DOE sites for the disposal of this waste.
- The lack of information to track the number and status of sealed radiological sources that may require recovery and disposal in the future limits DOE’s ability to effectively plan and budget for its recovery and disposal efforts and to monitor the performance of its source recovery project.

We recommend that the Secretary of Energy and the Chairman of the Nuclear Regulatory Commission, in collaboration with the Task Force on Radiation Source Protection and Security, evaluate and report on (1) the cost implications of a potential expansion of DOE’s recovery and disposal of non-GTCC waste from sealed radiological sources, (2) options for DOE to recoup these costs from licensees that may have no commercial waste disposal options, (3) the feasibility of disposing of this waste at DOE sites, and (4) how a national source tracking system can be designed and implemented to improve DOE’s ability to identify and track sealed radiological sources that may need DOE recovery and disposal. 
We provided a draft of this report to DOE and NRC for their review and comment. DOE’s written comments are reproduced in appendix III. DOE stated that it generally supports the recommendations contained in this report. More specifically, DOE commented that we had correctly reported the department’s position with respect to recouping recovery and disposal costs; however, the department expressed some concern that charging fees or recouping costs from licensees may inhibit them from registering sources, leaving these excess sources at risk. We acknowledge in the report that DOE should cover the recovery, storage, and disposal costs of unwanted sealed radiological sources that were previously owned by DOE. We also acknowledge the need for DOE to cover these costs in cases where sources posing a health, safety, security, or environmental threat are recovered from licensees that do not have the financial means to ensure their proper disposal. Nevertheless, given the possibility that, in most states, there may not be a commercial disposal option available to licensees for their higher radioactive non-GTCC waste after mid-2008, we continue to believe that DOE and NRC should evaluate approaches to recoup recovery and disposal costs from licensees that could otherwise afford to cover these costs if a commercial disposal option were available. DOE also stated that, in addition to the non-GTCC sealed source waste that we stated it recovered and disposed, it had also recovered other sources that fall into this waste class. We added a reference to these other sources in the report. Regarding using DOE sites for non-GTCC waste disposal, the department commented that we appropriately noted its current policy and statutory responsibilities that prohibit the use of department facilities for this purpose. DOE stated that it would continue to identify potential commercial treatments or disposal options for any additional non-GTCC waste that is recovered. 
Finally, DOE concurred with our assessment that the proposed national source tracking system should be improved to assist the department in identifying and recovering unwanted sources from outside the department that pose a potential safety and security risk. DOE stated that its Office of Security is working with other elements of the department and NRC in developing requirements to ensure that these unwanted sources are adequately tracked. NRC also provided written comments to a draft of this report, which are reproduced in appendix IV. NRC stated that overall our report was well written and balanced. While NRC did not specifically agree or disagree with our four recommendations, its letter raised seven issues regarding the proposed national source tracking system. 1. NRC stated that its tracking system would provide some information useful to DOE. We agree that the national source tracking system might provide some information useful to DOE in its recovery of IAEA categories 1 and 2 sources. However, since we found that only 1.5 percent of the sources recovered by DOE as of June 7, 2005, were in these two categories, it appears that the national source tracking system would yield little, if any, practical benefits to DOE. 2. NRC stated that requiring the reporting of certain information that our report asserts DOE would find useful, such as frequency of source use, could be extremely burdensome on licensees and NRC and would yield little, if any, practical benefits. NRC provided no support for this contention or for why it cannot overcome these burdens as it has done in justifying the reporting requirements proposed for licensees possessing IAEA categories 1 and 2 sources. In addition, NRC stated in its notice of proposed rulemaking for the national source tracking system that most licensees already have systems in which information on sources is maintained, and that NRC’s tracking system is designed to ease the reporting burden for these licensees. 
As to the comment on the practical benefit of tracking the use of high-risk radioactive materials, our report notes that the most vulnerable sources to abandonment, misplacement, and theft are those that are unwanted and in storage. Therefore, it seems reasonable to attempt to collect some information on frequency of source use, particularly if the storage of sources were to increase in the future in the absence of a commercial disposal option for the higher radioactive non-GTCC waste. 3. NRC commented that our report did not accurately characterize some issues involving IAEA category 3 sources, mainly regarding our claim that IAEA-TECDOC-1388 suggested that category 3 sources be tracked. NRC claimed that the IAEA document did not make this suggestion and provided some passages from the document to support its position. We believe that NRC’s comments in this regard reflect a narrow view of the guidance provided by IAEA. For example, in IAEA’s discussion of disused (unwanted) sources in this technical document, it clearly suggests a need to identify these sources and to gather information on their frequency of use. “Disused sources represent the largest pool of vulnerable and potential orphan sources. History has shown that many accidents involving orphan sources come about because sources that are no longer in use are eventually forgotten, with subsequent loss of control years later. To this end, it is beneficial from both a safety and security viewpoint for all disused sources to be identified and to undergo proper disposition…. Licensees are discouraged from proper disposal of disused sources by the cost involved, by the bureaucracy of doing so, or by the lack of an available disposal option…. It is clear that information needs to be gathered by those developing a national strategy regarding the status of at least all Category 1, 2 and 3 sources on the licensee’s inventory or national registry so that appropriate decisions can be made regarding them. 
Generally, this will involve asking the licensee or owner of the source about its frequency of use.” 4. In support of its decision not to track IAEA category 3 sources at this time, NRC drew attention to its other regulatory efforts, especially its new security orders for some licensees that possess IAEA categories 1 and 2 sources. NRC stated that, where appropriate, these security orders address aggregation of any sources (IAEA category 3 sources and below) such that the net result could reach the category 2 source threshold in a given physical location. Despite these security orders, NRC’s source tracking system would not include IAEA category 3 sources and below. However, NRC stated in its notice of proposed rulemaking for the national source tracking system that it is seeking comments on the inclusion of IAEA category 3 sources in the registry because licensees possessing large quantities of these sources could present a security concern. 5. NRC pointed out that, as we reported, the actions it is taking to track IAEA categories 1 and 2 sources are consistent with the IAEA Code of Conduct and the Energy Policy Act of 2005. However, NRC failed to mention, as we do in our report, that this legislation also directs NRC to chair an interagency task force to provide a report, within 1 year, to the Congress and the President with recommendations for, among other things, additional radiation sources that should be required to be secured as well as any modifications necessary to the national source tracking system. We believe that our report provides ample support for areas where NRC, in collaboration with DOE and other federal agencies, might consider modifying the design of the national source tracking system to better assist DOE in planning and budgeting for the recovery and disposal of unwanted sealed radiological sources. 6. 
NRC commented that it does not matter that almost all of the sources that DOE has recovered are below IAEA categories 1 and 2 because, according to NRC, the greatest risk from a source is its radioactivity level. The radioactivity of an individual source is clearly one measure of its potential safety and security risk. However, as our report notes, DOE’s recovery efforts, often at the request of NRC, are not solely dictated by the radioactivity of an individual source but, more broadly, by the health, safety, security, or environmental threat posed by the aggregated radioactivity of many unwanted sources that are typically in storage at licensee sites around the country. Our report also notes that unwanted sources in storage tend to be the most vulnerable to abandonment, misplacement, and theft despite requirements that licensees keep track of the radioactive materials they possess. DOE frequently finds lesser radioactive sources kept in quantities whose aggregated radioactivity would be equivalent to an IAEA category 2 source and would thus present a security concern. These lesser radioactive sources also may be more susceptible to inadvertent loss, which has already led in some cases to radiation exposure, high decontamination costs, and public panic. IAEA acknowledged in its Code of Conduct that its categorization of high-risk radiological sources is based on health effects and does not fully take into account the range of impacts that could result from accidents or malicious acts involving radioactive sources. 7. NRC stressed in its comments that DOE, through its representatives on NRC working groups and committees developing the national source tracking system, has had the opportunity to provide input on the design of the system and the potential usefulness of the system to assist its source recovery efforts. 
Regardless of DOE’s opportunities to provide input to NRC, DOE officials raised concerns to us during the course of our work about the usefulness of NRC’s source tracking system. Furthermore, in commenting on our draft report, DOE stated that there is a need for a more rigorous national-level tracking capability to assist the department in identifying and recovering unwanted sources. We incorporated technical changes in this report, where appropriate, on the basis of detailed comments provided by both agencies. We will send copies of this report to the appropriate congressional committees. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or at aloisee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

In our review, we examined (1) the status of the Department of Energy’s (DOE) efforts to recover unwanted sealed radiological sources and develop a disposal option for greater-than-class C (GTCC) waste, (2) DOE actions taken to recover and dispose of unwanted non-GTCC waste from sealed radiological sources, and (3) the extent to which DOE can identify and track unwanted sealed radiological sources for recovery and disposal. To better understand these issues, we met with officials at DOE, the National Nuclear Security Administration, and the Nuclear Regulatory Commission (NRC), and we visited the office of DOE’s source recovery project at Los Alamos National Laboratory and observed laboratory personnel recovering unwanted sealed radiological sources from a university. 
We also interviewed officials at nonfederal organizations, including the Health Physics Society, the Organization of Agreement States, and the Conference of Radiation Control Program Directors (CRCPD), as well as some recognized experts in the field. We also met with representatives from commercial entities that are licensed to possess high-risk radioactive sources and state regulatory officials in California and Ohio. More specifically, to examine the status of DOE efforts to recover unwanted sealed radiological sources and develop a disposal option for GTCC waste, we interviewed DOE officials from the U.S. Radiological Threat Reduction Program, Office of Environmental Management, and Office of Security. We reviewed applicable statutes, regulations, and agency guidance as well as relevant DOE and NRC studies, reports, documents, and agency plans. We obtained information from the source recovery project inventory database to determine the number and type of sources recovered as of June 7, 2005. To determine the reliability of these data, we first asked officials a series of data reliability questions that addressed areas such as data entry, data access, quality control procedures, and data accuracy and completeness. We also inspected data records, reviewed manuals and documents relating to DOE data collection and verification methods, and interviewed DOE officials. We asked follow-up questions as necessary. In consultation with a GAO expert in research methodology, we analyzed the officials’ responses for relevant weaknesses in data reliability that would make their data unusable for our analysis and reporting purposes. On the basis of these efforts, we determined that these data were sufficiently reliable for summarizing volumes of recovered sealed radiological sources. We also sought a better understanding of how sealed radiological sources are classified as waste. 
We developed a structured interview guide to collect information from commercial waste brokers that possess GTCC and non-GTCC waste from sealed radiological sources. This interview guide asked questions on areas such as the wastes these brokers often collect and the potential waste classes of common types of sealed radiological source devices. Because the practical difficulties of developing and administering a structured interview guide may introduce errors—resulting from how a particular question is interpreted, for example, or from differences in the sources of information available to respondents in answering a question—we included steps in the development and administration of the structured interview guide for the purpose of minimizing such errors. We pretested the instrument with three commercial waste brokers by telephone and modified it as appropriate to reflect questions and comments received during the pretests. To determine which commercial waste brokers to interview, we first used a list of commercial waste brokers compiled by CRCPD’s National Orphan Radioactive Material Disposition Program. This list contained 18 waste brokers that met the CRCPD criteria of being in good standing with CRCPD and serving more than 1 million customers, serving non-DOE customers, or serving more than one state. However, because this list is not comprehensive and there is no single source listing of commercial waste brokers, we also asked each broker we interviewed for the names of additional brokers who could provide useful information or insights into these issues. We continued this expert referral technique until the references we received became repetitive. In all, we used our structured interview guide to interview a nonprobability sample of 12 commercial waste brokers in various geographical locations. 
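The expert referral ("snowball") technique described above can be sketched as follows. This is a generic illustration of the stopping rule (continue until every referred name has already been seen), not GAO's actual procedure; the broker names and the referral graph are hypothetical.

```python
# Generic sketch of snowball sampling: interview each broker, collect
# referrals, and stop once every referred name has already been seen.
def snowball_sample(seed_brokers, get_referrals):
    interviewed = set()
    queue = list(seed_brokers)
    while queue:
        broker = queue.pop(0)
        if broker in interviewed:
            continue  # referral was repetitive; nothing new to pursue
        interviewed.add(broker)
        queue.extend(name for name in get_referrals(broker)
                     if name not in interviewed)
    return interviewed

# Hypothetical referral graph for illustration
referrals = {"Broker A": ["Broker B"],
             "Broker B": ["Broker A", "Broker C"],
             "Broker C": []}
sample = snowball_sample(["Broker A"], lambda b: referrals.get(b, []))
```

Starting from one seed broker, the loop reaches all three hypothetical brokers and terminates once Broker B's referral back to Broker A yields no new names.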
We then used the results of these structured interviews to create a table summarizing common sealed radiological source devices and their potential waste class (see app. II). We shared preliminary drafts of this table with experts at DOE and NRC and with leading scientists in the field of sealed radiological source security from nonfederal organizations, such as the Monterey Institute of International Studies and the Low-Level Waste Forum. We received and incorporated their comments as appropriate on the structure and contents of the table. On the basis of this process, we determined that these data were sufficiently reliable for the purposes of this report. To examine the actions DOE has taken to recover and dispose of unwanted sealed radiological sources, we interviewed source recovery project personnel and officials from the U.S. Radiological Threat Reduction Program. We also conducted interviews with representatives from nonfederal entities, including the Monterey Institute of International Studies, the Health Physics Society, CRCPD, the National Research Council, and the Council on Foreign Relations. We discussed with these agency officials and representatives the likelihood of DOE’s needing to recover more non-GTCC waste from unwanted sealed radiological sources in the future if the Barnwell, South Carolina, disposal site restricts access for licensees in 36 states by mid-2008 as planned. To obtain a better understanding of how much non-GTCC waste might be stored if licensees in these states are denied disposal access for this waste, we gathered information on the quantity of non-GTCC waste disposed of at the two commercial disposal sites that can accept classes B and C waste in Richland, Washington, and Barnwell, South Carolina, between 2001 and 2004. 
To determine the reliability of these data, we first asked disposal operators a series of data reliability questions that addressed specific areas, such as data entry, data access, quality control procedures, and data accuracy and completeness. We added follow-up questions as necessary. In consultation with a GAO expert in research methodology, we analyzed their responses for relevant weaknesses in data reliability that would make their data unusable for our analysis and reporting purposes. On the basis of these efforts, we determined that these data were sufficiently reliable for summarizing volumes of disposed waste at these disposal sites. To determine the extent to which DOE can identify and track unwanted sealed radiological sources for recovery and disposal, we interviewed DOE and NRC officials regarding the scope, capabilities, and limitations of their existing databases for tracking these sources. We reviewed past estimates of the number of sealed radiological sources in the United States, including the scope and methodologies used to create these estimates. To examine NRC efforts to develop a national source tracking system for certain sealed radiological sources, we interviewed NRC and DOE officials who participated in the system’s initial formulation. We reviewed planning and management documents, including related NRC submissions to the Office of Management and Budget, NRC’s business case analyses, and the proposed rule for implementing a national source tracking system. We also reviewed the survey instrument NRC used to populate the interim database. Finally, we interviewed state officials from Illinois, New York, Ohio, and Oregon to determine whether any states currently track sealed radiological sources and gathered these officials’ views on the need for a national source tracking system. We conducted our review between June 2004 and September 2005 in accordance with generally accepted government auditing standards. 
Table 4 presents selected common devices that utilize sealed radioactive sources and the NRC waste classes in which sources from these devices might be disposed. This table shows the variability in the possible sources used in devices, their relative risks according to the International Atomic Energy Agency (IAEA) categorization scheme, and the range of waste classes associated with the sources that could be used in these devices. The radionuclides and the ranges of radioactivity listed next to each device are presented for illustrative purposes—each device might use one of these radionuclides in one or more sources. The IAEA source category corresponds to each radionuclide and radioactivity range, based on an IAEA technical document, as noted. The potential waste classes are associated with each device and not with the specific radionuclides that might be in these devices. In other words, not all radionuclides that could be used in a source within a device produce the range of waste classes associated with the device. In addition to the person named above, Casey Brown, Ryan Coles, John Delicath, Daniel Feehan, Doreen Feldman, Susan Iott, Thomas Laetz, Cynthia Norris, Anthony Padilla, Judy Pagano, Leslie Pollock, and Barbara Timmerman made key contributions to this report.

Concerns remain over the control of sealed radiological sources, widely used in many industrial and medical devices and applications. The Nuclear Regulatory Commission (NRC), the Department of Energy (DOE), and states have responsibilities for ensuring the safe and secure use and eventual disposal of these sources as low-level radioactive wastes. DOE must ensure disposal availability for greater-than-class C (GTCC) waste; states must do so for non-GTCC waste, that is, classes A, B, and C waste. NRC and DOE also collaborate to identify and recover unwanted sources that are not safe or secure. 
GAO examined DOE's (1) efforts to recover unwanted sources and develop a GTCC waste disposal option, (2) actions to recover and dispose of non-GTCC source waste, and (3) ability to identify sources for recovery and disposal. DOE has increased emphasis on its source recovery project and begun the process of identifying disposal options for GTCC waste. DOE transferred project responsibilities to another office that has given the project higher priority and accelerated DOE's recovery efforts. DOE exceeded an earlier goal for recovering sources and has now collected over 10,800 of them. This recovery has been facilitated by additional project funding support and DOE's resolving a shortage of storage space for certain sources. In May 2005, DOE issued a notice of intent to prepare an environmental impact statement to assess GTCC waste disposal options; however, DOE has not yet determined when a disposal site might be made available. DOE has expanded the scope of its recovery effort to include non-GTCC waste from sealed radiological sources, a change that could increase DOE expenditures. DOE recovered and commercially disposed of 443 of these sources from a bankrupt firm, at a cost to DOE of about $581,000. Given that unwanted sources in storage present higher vulnerabilities, DOE might need to recover more of them in the future if the commercial disposal site that currently accepts this non-GTCC waste from most states ceases to do so as planned in 2008. Lacking a commercial disposal option, DOE anticipates storing this waste, rather than disposing of it at DOE sites, because, among other reasons, it does not want to undermine the responsibility the Congress gave the states to provide disposal availability for non-GTCC waste. DOE lacks information that would assist its efforts to identify and recover unwanted sealed radiological sources that may pose a safety and security risk. DOE has useful information on the sources in its possession, including recovered sources. 
However, DOE does not know how many sources might need recovery and how much disposal capacity is needed for GTCC waste. NRC is developing a national source tracking system that would not be useful for DOE's source recovery efforts because it is only designed to track individual sources with high radioactivity. According to DOE, nearly all of the sites where it has recovered sources contained individual sources with lesser radioactivity than would be tracked by NRC, but their combined radioactivity posed enough of a risk to warrant their recovery by DOE. |
The Congress made the eligibility criteria for children to receive SSI more restrictive in order to help ensure that only needy children with severe disabilities are eligible for benefits. From the end of 1989 through 1996, the number of children younger than 18 receiving SSI had more than tripled, from 265,000 to 955,000. This growth occurred after SSA initiated outreach efforts and issued two sets of regulations that made the eligibility criteria for children less restrictive, particularly for children with mental impairments. One regulatory change, issued in December 1990, revised and expanded SSA’s medical listings for childhood mental impairments by adding such impairments as attention deficit hyperactivity disorder and incorporating functional criteria into the listings. Examples of such functional criteria include standards for assessing a child’s social skills; cognition and communication skills; and the ability to concentrate, keep pace, and persist at tasks at hand. The medical listings are regulations containing examples of medical conditions, including both physical and mental impairments, that are so severe that disability can be presumed for anyone who is not performing substantial gainful activity and who has an impairment that “meets” the criteria—medical signs and symptoms and laboratory findings—of the listing. Since the listings cannot include every possible impairment or combination of impairments a person can have, SSA’s rules also provide that an impairment or combination of impairments can “equal” or be “equivalent to” the severity of a listing. There are separate listings for adults and children. The childhood listings are used first in evaluating childhood claims. If the child’s impairment does not meet or equal the severity of a childhood listing, the adult listings are considered. The second regulatory change, issued in February 1991 in response to the Sullivan v. 
Zebley Supreme Court decision, added two new bases for finding children eligible for benefits, both of which required an assessment of a child’s ability to function: functional equivalence, which was set at “listing level” severity, and an individualized functional assessment (IFA), which was set at a lower threshold of severity. Functional equivalence is based on the principle that it is the functional limitations resulting from an impairment that make the child disabled, regardless of the particular medical cause. It was added as a basis for eligibility in response to the Supreme Court’s determination in the Zebley case that SSA’s medical listing of impairments—which had been the only basis for eligibility—was incomplete. Under functional equivalence, a child could be found eligible for benefits if the child’s impairment limited his or her functional ability to the same degree as described in a listed impairment. Functional equivalence is particularly appropriate for assessing children with combinations of physical and mental impairments. The IFA allowed children whose impairments were less severe than listing level to be found eligible if their impairments were severe enough to substantially limit their ability to act and behave in age-appropriate ways. A child was generally found eligible under the IFA if his or her impairment resulted in moderate functional limitations in three areas of functioning or a marked limitation in one area and a moderate limitation in another area. In 1995, we reported that the subjectivity of the IFA called into question SSA’s ability to ensure reasonable consistency in administering the SSI program, particularly for children with behavioral and learning disorders. We suggested that the Congress consider eliminating the IFA and directing SSA to revise its medical listings. 
Several welfare reform provisions enacted in August 1996 made the eligibility criteria for disabled children more restrictive: (1) childhood disability was redefined from an impairment comparable to one that would prevent an adult from working to an impairment that results in “marked and severe functional limitations,” (2) the IFA was eliminated as a basis for determining eligibility for children, and (3) maladaptive behavior was removed from consideration when assessing a child’s personal or behavioral functioning. Thus, such behavior would be considered only once—in the assessment of that child’s social functioning—when determining whether the child had a mental impairment severe enough to meet or equal the medical listings. The law also required SSA to redetermine the eligibility of children on the rolls who might not meet the new eligibility criteria because they received benefits on the basis of the IFA or maladaptive behavior. Earlier legislative proposals under consideration in 1995 might have removed from the rolls as few as 45,000 to as many as 190,000 children, according to Congressional Budget Office (CBO) estimates. After the welfare reform legislation was enacted in August 1996 but before SSA issued its regulations, CBO estimated that about 170,000 children on the rolls would no longer be eligible for benefits. After SSA issued its regulations in February 1997, CBO and SSA estimates of children who would be removed from the rolls were very close—131,000 and 135,000, respectively. SSA identified 288,000 children as potentially affected by the changes in the eligibility criteria because they had been awarded benefits on the basis of the IFA or maladaptive behavior. Through February 28, 1998, SSA reviewed the eligibility of 272,232 of the 288,000 children. Of these, 139,693 (51.3 percent) were found eligible to continue to receive benefits and 132,539 (48.7 percent) were found ineligible. 
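The redetermination outcomes above are simple proportions; as a quick consistency check (all figures taken from the text, rounded as reported), the split works out as stated:

```python
# Redetermination outcomes through February 28, 1998 (figures from the text).
reviewed = 272_232
eligible = 139_693      # found eligible to continue receiving benefits
ineligible = 132_539    # found ineligible

# The two outcome groups account for every reviewed case.
assert eligible + ineligible == reviewed

pct_eligible = round(100 * eligible / reviewed, 1)
pct_ineligible = round(100 * ineligible / reviewed, 1)
print(pct_eligible, pct_ineligible)  # 51.3 48.7, matching the reported percentages
```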
Because the number of children deemed ineligible does not yet reflect the results of all appeals, we do not yet know the final outcome on all these cases. Children initially deemed by a disability determination service to be ineligible have 60 days to request reconsideration of their case. If they continue to receive an unfavorable result, they can appeal to an SSA administrative law judge and, finally, to federal court. Recipients can elect to continue receiving benefit payments during the appeal process. Factoring in appeals and experience in conducting redeterminations so far, SSA now estimates that 100,000 children will be removed from the rolls as a result of the redeterminations. In December 1997, SSA issued a report on its “top-to-bottom” review of the implementation of the new regulations to address concerns that children may have had their benefits terminated unfairly. SSA found problems with the adjudication of claims for which mental retardation was the primary impairment as well as potential procedural weaknesses relating to notification of appeal rights and termination of benefits for failure to cooperate with SSA requests for information needed to redetermine eligibility. To remedy these problems, SSA decided to rereview all children whose benefits were terminated or denied on the basis of mental retardation. SSA conducted training in March 1998 to clarify how these claims should be adjudicated. Also, all cases terminated because families did not cooperate with SSA in processing the claim, such as by failing to provide requested medical information or to take the child for a consultative examination, will be rereviewed. SSA found that in two-thirds of these terminations, all the required contacts had not been made or had not been documented in the file. Finally, families of children whose benefits were terminated but did not appeal are being given an additional 60-day period in which to appeal their terminations. 
Notices of this right as well as the right to continue to receive benefits while the appeal is pending were sent out in February 1998. SSA also eliminated the IFA and removed the duplicate consideration of maladaptive behavior from the mental disorders listings. In developing its regulations, SSA concluded that the Congress meant to establish a stricter standard of severity than “one marked, one moderate” limitation, for several reasons. The Congress eliminated the “comparable severity” standard of disability and the IFA, which was created for evaluating impairments less severe than those in the medical listings. A “one marked, one moderate” standard of severity would have retained one of the standards under which children were found eligible under the IFA, which SSA stated would violate the law. Finally, SSA interpreted the conference report to mean that the Congress intended the listings to be the last step in the disability determination process for children. Although SSA articulated the “two marked or one extreme” severity standard in its regulations, it did not modify its existing listings to specifically incorporate functional criteria that would reflect both the new definition of childhood disability and advances in medicine and science. For example, because of advances in treatment, some impairments no longer have as severe an effect on a child’s ability to function as they once did. As a result, some listings are set below the “two marked or one extreme” threshold of severity, and cases are being adjudicated at this less severe level as well as at the “two marked or one extreme” severity level. SSA’s Office of Program and Integrity Reviews has told us, however, that it would consider this an error. SSA has not identified how many children may have been awarded benefits on the basis of these less severe listings.
SSA told us that unreliable coding of the listings used to determine eligibility makes it difficult to quantify the extent of this problem. We do know, however, that some of the listings below the “two marked or one extreme” threshold are for prevalent impairments, including two of six listings for the most common impairment—mental retardation—and three listings for cerebral palsy, one for epilepsy, and one for asthma. Other listings below the “two marked or one extreme” threshold include one listing for juvenile rheumatoid arthritis, one for juvenile diabetes, and two for diabetes insipidus. As of June 1998, SSA had not established a schedule for updating and modifying its listings. SSA’s quality assurance statistics on childhood cases show uneven accuracy rates across the states. Although nationally the accuracy rate for decisions on new childhood cases and redeterminations exceeds SSA’s standard of 90.6 percent, many states fall below the standard. Specifically, for decisions made on new childhood cases from June 1997 through February 1998, 5 states fell below the 90.6-percent accuracy standard for awards, and 9 states fell below the standard for denials. For redeterminations, 10 states fell below the standard for continuances, and 10 states fell below the standard for cessations. Most of the errors have been in the documentation; that is, there was some deficiency in the evidence that formed the basis for the determination. In these cases, proper documentation of the case could substantiate or reverse the decision. Given the significant changes in adjudicating cases on the basis of the new regulations, these statistics are not surprising. Moreover, childhood cases historically have been among the more difficult cases to adjudicate. We would expect SSA to be monitoring the decisions; identifying areas of difficulty for adjudicators; and providing additional clarification, guidance, and training to improve the accuracy of decisions. 
In fact, this is exactly what SSA has been doing, although its training schedule was delayed slightly. SSA has also developed a plan to promote consistent application of the new regulations. The plan includes special initiatives to ensure the quality of cases readjudicated in response to the top-to-bottom review, as well as initiatives to improve SSA’s ongoing quality assurance reviews on childhood cases. For the first time, SSA will be drawing separate samples of new childhood claims and continuing disability reviews. This should allow SSA to provide more timely feedback and policy clarifications on the problems unique to adjudication of childhood claims. SSA also will be measuring the performance of its quality reviewers to ensure that they are accurately and consistently identifying errors. Under this effort, SSA plans to increase its sample of reviewed cases from 1,600 to 6,000 annually. SSA has made substantial progress in implementing the new childhood definition of disability through its rapid redetermination of most of these cases, its action to ensure that the redetermination process is fair, and its ongoing review of the implementation of the new regulations. However, we remain concerned about how accurately and consistently the disability determination process is working for children. Specifically, because some of SSA’s listings of impairments require less than “two marked or one extreme” limitation to qualify for benefits, SSA adjudicators are not assessing all children against a uniform severity standard. This is because SSA has neither updated its listings to reflect advances in medicine and science nor modified them to reflect a single standard of severity, despite its authority to do so. Moreover, we noted the need to revise the listings 3 years ago. SSA also needs to continue its efforts to improve decisionmaking for childhood cases to better ensure that adjudicators apply the new eligibility criteria accurately and consistently.
Because many of SSA’s medical listings for children are outdated and allow eligibility to be based upon multiple standards of severity, our May 1998 report recommended that the Commissioner act immediately to update and modify the medical listings to incorporate advances in medicine and science and to reflect a uniform standard of severity. In commenting on our report, SSA officials agreed that the agency should periodically update its listings and stated that it is developing a schedule to accomplish this. The agency stated that it must consult with medical experts to ensure that the listings reflect state-of-the-art medical practice and estimates that it will take several years to complete the revision. However, the agency did not address the need for the listings to reflect a uniform severity standard. We will continue to monitor SSA’s implementation of the new eligibility criteria, including the agency’s actions to update its medical listings for children, as part of our mandate to report to the Congress by 1999 on the impact of the changes to the SSI program enacted by welfare reform. As part of that effort, we are monitoring what SSA is doing to ensure the accuracy and consistency of childhood disability decisions made under the new eligibility criteria. Please contact me on (202) 512-7215 if you have questions about the information presented in this statement. Supplemental Security Income: SSA Needs a Uniform Standard for Assessing Childhood Disability (GAO/HEHS-98-123, May 6, 1998). SSA’s Management Challenges: Strong Leadership Needed to Turn Plans Into Timely, Meaningful Action (GAO/T-HEHS-98-113, Mar. 12, 1998). Supplemental Security Income: Review of SSA Regulations Governing Children’s Eligibility for the Program (GAO/HEHS-97-220R, Sept. 16, 1997). Children Receiving SSI by State (GAO/HEHS-96-144R, May 15, 1996). SSA Initiatives to Identify Coaching (GAO/HEHS-96-96R, Mar. 5, 1996).
Supplemental Security Income: Growth and Changes in Recipient Population Call for Reexamining Program (GAO/HEHS-95-137, July 7, 1995). Social Security: New Functional Assessments for Children Raise Eligibility Questions (GAO/HEHS-95-66, Mar. 10, 1995). Social Security: Federal Disability Programs Face Major Issues (GAO/T-HEHS-95-97, Mar. 2, 1995). Supplemental Security Income: Recent Growth in the Rolls Raises Fundamental Program Concerns (GAO/T-HEHS-95-67, Jan. 27, 1995). Social Security: Rapid Rise in Children on SSI Disability Rolls Follows New Regulations (GAO/HEHS-94-225, Sept. 9, 1994). | GAO discussed the Social Security Administration's (SSA) implementation of the new eligibility criteria for childhood disability benefits under the Supplemental Security Income (SSI) program.
GAO noted that: (1) SSA has made considerable progress in implementing the welfare reform changes in eligibility for SSI children; (2) SSA has taken important steps to safeguard fairness by identifying children whose benefits may have been terminated inappropriately and establishing remedial action to rereview their cases; (3) however, because SSA's medical listings reflect multiple levels of severity, SSA also needs to expedite updating and modifying its medical listings to ensure that all children are assessed against a uniform severity standard; (4) the need to revise the listings is a long-standing problem that GAO reported on in 1995; (5) moreover, SSA needs to take concerted action to follow through on its plan for monitoring and continually improving the quality of decisions regarding children; and (6) consistent with a legislative mandate, GAO will continue to focus its work on SSA's efforts to provide reasonable assurance that it can administer the program consistently and improve the accuracy of childhood disability decisions. |
In 1997 and 1999, we reported that INS was implementing its border strategy generally as planned. The strategy called for concentrating personnel and technology in a four-phased approach, starting first with the sectors with the highest levels of illegal immigration activity (as measured by apprehensions) and moving to the areas with the least activity. The four phases of the strategy called for allocating additional Border Patrol resources to sectors along the border in the following order: (1) Phase I: San Diego, CA, and El Paso, TX, sectors; (2) Phase II: Tucson, AZ, sector and three sectors in south Texas—Del Rio, Laredo, and McAllen; (3) Phase III: the remaining three sectors along the Southwest border; (4) Phase IV: the Northern border, Gulf Coast, and coastal waterways. The Southwest border, which has been the focus of INS’ buildup in Border Patrol resources to date, represents 9 of the Border Patrol’s 21 sectors nationwide (see fig. 1). The strategy’s objectives are to (1) close off the routes most frequently used by smugglers and illegal aliens (generally through urban areas) and (2) shift traffic to ports of entry, where travelers are inspected, or to areas that are more remote and difficult to cross. With the traditional crossing routes disrupted, INS expected that illegal alien traffic would either be deterred or forced over terrain less suited for crossing, where INS believed it would have the tactical advantage. INS’ Border Patrol is responsible for preventing and detecting illegal entry along the border between the nation’s ports of entry. To carry out the strategy, the Border Patrol was to concentrate personnel and resources in a four-phased approach, starting with the areas of highest illegal alien activity; increase the time Border Patrol agents spend on border control activities; make maximum use of physical barriers; and identify the appropriate quantity and mix of personnel and technology needed to control the border. 
The Border Patrol’s fiscal year 2001 budget is about $1.2 billion, a 9-percent increase over its fiscal year 2000 budget of about $1.1 billion. As of September 30, 2000, there were 9,096 Border Patrol agents nationwide; 8,475, or 93 percent, were located in the nine sectors along the Southwest border. INS’ phased approach to implementing its strategy has included several operations in which INS allocated additional Border Patrol agents and other resources—such as fencing, lighting, night vision scopes, sensors, cameras, vehicles, and aircraft—to targeted locations along the Southwest border. In October 1994, the Border Patrol launched Operation Gatekeeper in its San Diego sector. Initially, the operation focused enforcement resources along the 5 miles that at that time accounted for nearly 25 percent of all illegal border crossings nationwide. Since then, the sector has expanded Gatekeeper to include the entire 66 miles of border under the sector’s jurisdiction. In 1994, the Border Patrol began Operation Safeguard in the Tucson sector. Initially, the operation focused enforcement resources in the Nogales, AZ, area. Since then, the sector has expanded operations to the Douglas and Naco, AZ, area to respond to the increase in apprehensions in that area. In August 1997, INS launched Operation Rio Grande in the Rio Grande Valley area in south Texas. The Border Patrol focused enhanced resources in the McAllen and Laredo, TX, sectors. In fiscal year 1998, the Border Patrol extended Operation Gatekeeper to the El Centro sector in California’s Imperial Valley, east of San Diego. This was done to respond to the increase in illegal alien traffic in that area and to target the alien smuggling rings that moved there after the Border Patrol increased its presence in San Diego. INS has reported that each of these initiatives reduced the number of alien apprehensions in some of the targeted areas.
INS’ apprehension statistics have been its primary quantitative indicator of the results of the strategy. INS anticipated that the following changes, among others, would provide evidence of the interim effectiveness of the strategy: Locations receiving an infusion of resources would experience an initial increase in the number of illegal alien apprehensions, followed by a decrease in apprehensions when a “decisive level of resources” had been achieved. Illegal alien traffic would shift from sectors that traditionally accounted for most illegal immigration activity toward other sectors. One of the major technological initiatives deployed along the Southwest border has been IDENT, INS’ automated biometric identification system, which captures apprehended aliens’ fingerprints, photos, and biographical data, as well as information on the date and location of the apprehension. IDENT was developed to help INS determine whether an apprehended alien is an aggravated felon, smuggler, or repeat illegal crosser. Since fiscal year 1995, INS has deployed the system incrementally along the Southwest border, and it is now deployed in all Border Patrol stations within the nine Southwest border sectors. INS spent about $34 million on IDENT development and deployment through fiscal year 2000. To address our three objectives, we (1) analyzed Border Patrol staffing and workload data; (2) reviewed INS’ strategy, INS planning documents, and reviews of INS’ Annual Performance Plans; (3) interviewed INS officials at Border Patrol headquarters in Washington, D.C., and in the San Diego, El Centro, Yuma, Tucson, and Del Rio sectors; (4) interviewed local officials in Calexico, CA; Yuma, Douglas, Santa Cruz County, and Pima County, AZ; and Cameron County and Eagle Pass, TX; (5) interviewed the Mexican Consuls General in Nogales and Douglas, AZ; and (6) held a group discussion with members of the Citizens Advisory Group to the local Border Patrol station in Douglas, AZ. 
We chose these locations because, except for San Diego, Border Patrol apprehensions in these areas increased as INS implemented its strategy. We also reviewed statistics on migrant deaths and studies on Operations Gatekeeper and Rio Grande that were prepared by an INS contractor. Finally, we observed border enforcement activities in the El Centro, Yuma, and Tucson sectors. We conducted our work between October 2000 and June 2001 in accordance with generally accepted government auditing standards. As INS continues to implement the second phase of its four-phased strategy, its preliminary estimates show that it may need 3,200 to 5,500 more agents, additional support personnel, and hundreds of millions of dollars in additional technology and infrastructure to fully implement the Southwest border strategy. Since fiscal year 1998, INS has been implementing the second phase of its four-phased approach, which called for primarily increasing resources in the Tucson sector and the three sectors in south Texas—Del Rio, Laredo, and McAllen. In accordance with the strategy, INS allocated 1,140 (80 percent) of the additional 1,430 agent positions authorized in fiscal years 1999 and 2000 to these sectors. The strategy noted that Border Patrol needed to be flexible in responding to changing patterns in illegal traffic. Consequently, INS added some of the additional enhancements in fiscal years 1999 and 2000 to the Yuma and El Centro sectors, scheduled for phase III, in order to respond to the shifts in illegal alien traffic to those sectors. Onboard strength in all nine sectors along the Southwest border increased by 1,183 agents (16 percent) to almost 8,500 between fiscal years 1998 and 2000. As shown in table 1, INS has added over 5,000 agents to sectors along the Southwest border since fiscal year 1993, the year preceding the initial implementation of the strategy. 
This represents a 150-percent increase between fiscal years 1993 and 2000 in the total number of onboard agents in the nine sectors along the Southwest border. (App. I, table 3, provides additional information on Border Patrol agent enhancements along the Southwest border.) As a result of the increased number of agents along the Southwest border, the amount of time spent on border enforcement activities in these sectors increased by 27 percent, from about 8.5 million hours in fiscal year 1998 to almost 11 million hours in fiscal year 2000. The proportion of time Border Patrol agents spent on border enforcement increased from 66 percent to 69 percent during this time. INS has continued to erect barriers as called for in its strategy. Since fiscal year 1999, INS has completed about 12 miles of fencing and other types of barriers, bringing the total to about 76 miles along the Southwest border as of May 2001. INS had plans to erect an additional 32 miles, some of which was under construction as of May 2001. In addition, in fiscal years 1999 and 2000, INS installed 107 remote video surveillance systems along the Southwest border bringing the total to 130. According to INS’ year-end review of its fiscal year 2000 Annual Performance Plan, INS estimated it may need between 11,700 and 14,000 agents to fully implement the Southwest border strategy. This is between 3,200 and 5,500 more agents than the roughly 8,500 agents INS had on board along the Southwest border at the end of fiscal year 2000. The Illegal Immigration Reform and Immigrant Responsibility Act of 1996 mandated that the Attorney General increase the number of agents on board by no less than 1,000 agents per year during each of fiscal years 1997 through 2001. INS was able to meet this goal in fiscal years 1997 and 1998, but not in the following 3 years. 
We reported that in fiscal year 1999, INS was only able to achieve a net increase of 369 agents out of the goal of 1,000 because INS was unable to recruit enough qualified applicants and retain them through the hiring process. In fiscal year 2000, INS stated that it requested no additional agents because of its concern that the ratio of inexperienced-to-experienced agents was getting too high and law enforcement experts said this was risky. Congress, however, funded 430 additional agents. In fiscal year 2001, INS requested 430 agents. In her March 2000 testimony, the former INS Commissioner stated that the 430 agents represented the level that was achievable in the existing tight labor market. It also allowed INS to have sufficient funds to increase the journeyman level from a GS-9 to a GS-11 and for signing bonuses for those who successfully completed the Border Patrol Academy training program. It would take between 5 and 9 years and congressional approval for INS to obtain the additional Border Patrol agents it believes it needs to control the Southwest border. As noted above, INS estimates it needs between 3,200 and 5,500 more agents than the roughly 8,500 agents it had on board along the Southwest border at the end of fiscal year 2000. INS plans to hire 430 agents and reach an onboard agent strength of about 8,900 agents by the end of fiscal year 2001. The President’s fiscal year 2002 budget requests 570 Border Patrol agents per year in 2002 and 2003. If the growth rate of the Border Patrol continued to be 570 agents per year beyond 2003, INS would reach the lower limit of the number of agents it believes it needs in 2006 and the upper limit in 2010, assuming that all of the new agents would be assigned to the Southwest border. INS’ April 2000 Border Patrol Technology Plan outlines a 5-year plan for adding new technology along both the Northern and Southern borders. 
According to an INS official, Southwest border sectors have requested new technology estimated to cost roughly between $450 million and $560 million, nearly all of it for about 1,100 remote video surveillance systems. INS is also developing sector-level, integrated border infrastructure plans (e.g., barriers, roads, and lighting) for each Southwest border sector. The plan also states that INS will need additional support personnel, such as Law Enforcement Communications Assistants to monitor the cameras and technicians to repair cameras and other equipment. INS will need to construct additional space to house both the additional equipment and personnel. In May 2001, INS budget officials told us that they estimated it might take between 7 and 10 years to deploy the additional staff and equipment INS believes it needs for the Southwest border. In commenting on our draft report, INS’ Executive Associate Commissioner for Field Operations stated that the long-term resource requirements we discuss above are preliminary and subject to change. As the Border Patrol has increased enforcement in certain locations, illegal alien apprehensions have shifted to other locations, as the Border Patrol predicted would result from its strategy. However, until very recently, apprehensions borderwide continued to increase. The Border Patrol is attempting to supplement its apprehension data with additional indicators to measure the effectiveness of its border control efforts, but it could learn more about the results of its border control efforts if it capitalized on using the automated fingerprint data that it collects on apprehended illegal aliens. The shift in illegal alien apprehensions has had both positive and negative effects on local border communities. We reported in 1997 and 1999 that illegal alien apprehensions shifted as expected after INS allocated additional resources to targeted border sectors, such as El Paso and San Diego. This continued to occur, especially in San Diego.
As shown in figure 2, apprehensions were notably lower in San Diego in fiscal year 2000 compared with fiscal year 1998. Apprehensions in El Paso were slightly lower in fiscal year 2000 than in fiscal year 1998. In the McAllen sector, as resources were applied in 1997, there was an initial increase in apprehensions in 1998, followed by a decline in apprehensions in fiscal year 2000. However, illegal alien apprehensions shifted to other sectors in fiscal year 1998, as indicated by the increased apprehension levels in the El Centro, Yuma, Tucson, Laredo, and Del Rio sectors. Although implementation of the strategy has shifted the areas in which illegal aliens are apprehended, total Border Patrol apprehensions along the Southwest border have increased overall since the strategy was implemented in 1994. Figure 3 shows the total number of apprehensions along the Southwest border, and table 4 in appendix I shows the apprehension numbers for each of the nine Southwest border sectors.

Very recently, apprehensions have been declining. For the period January through April 2001, Border Patrol apprehensions along the Southwest border declined by 26 percent compared with the same period in fiscal year 2000. Although the reasons for the decline are unclear and it is too early to tell whether the decline will persist, INS and Mexican Consulate officials we spoke with, as well as some researchers, offered various theories, including the following:

- INS' strategy is effectively deterring illegal entry.
- Substantially fewer Mexican illegal aliens went home for the holidays in December 2000 as a result of (1) legislation that enabled them to apply for permanent residency or (2) their believing that it would be too difficult to get back into the United States.
- Mexicans are more optimistic about the future in Mexico and less likely to migrate because of improvements in the Mexican economy and a change in the Mexican government.
- Prospects for finding employment in the United States have diminished with the slowing economy, so fewer aliens have attempted to enter illegally.

Whether INS' strategy has deterred illegal entry overall or whether it has merely shifted the traffic to different locations is unclear. INS has taken some steps to design an overall evaluation of the strategy's effectiveness, and it has issued reports on the effects of Operations Gatekeeper and Rio Grande. Both of these reports stated that the operations were successful in reducing illegal entry in the locations where INS had concentrated its enforcement resources. However, INS has not conducted a comprehensive, systematic evaluation of the strategy's effectiveness in detecting and deterring aliens from entering illegally, as we recommended in our 1997 report. With no baseline data to compare results against and with the passage of 7 years since INS began implementing its Southwest border strategy, undertaking such an evaluation becomes increasingly difficult. By necessity, the evaluation would be a retrospective study that relied on available data rather than on systematically gathered evaluation data (1) based on clearly defined indicators of the range of effects the strategy might have and (2) collected expressly to answer the research questions. As a result, what effect the strategy has had on overall illegal immigration along the Southwest border may never be fully known.

The Government Performance and Results Act (GPRA) requires agencies to establish performance indicators to measure or assess the desired outcomes of their program activities. As a way of gauging the effectiveness of its strategy in deterring illegal entry, the Border Patrol is attempting to measure its effectiveness in apprehending aliens. For example, in certain locations, called corridors, the Border Patrol attempts to estimate the number of aliens who entered or attempted to enter illegally in a given time period.
Border Patrol officials told us that agents count the number of (1) aliens they have physically observed crossing and those who have turned back and (2) aliens detected by video cameras and sensors. In addition, agents examine footprints along the border to estimate the number that may have crossed, a technique the Border Patrol calls sign-cut. The Border Patrol measures its effectiveness as the ratio of aliens arrested plus those who have turned back to the estimated number of illegal entries. INS officials told us that the effectiveness ratios apply only to areas where INS can monitor the border either electronically or by using agents. Because it is difficult to determine the accuracy or completeness of INS' estimates of the number of aliens turned back and those entering illegally, we do not know how valid or generalizable INS' effectiveness measures are.

The Border Patrol began reporting the corridor effectiveness ratios through its annual performance plan review process in fiscal year 2001. For example, from October 2000 through March 2001, the effectiveness ratios in the 12 corridors in California and Arizona ranged from 37 percent in the west desert area of Tucson to 92 percent in the west desert area of El Centro. We did not independently assess INS' methodology for calculating this performance information.

Department of Justice guidance on GPRA states that agencies should use a variety of indicators to evaluate program performance. In 1997, we reported that immigration researchers and INS officials stated that IDENT data, when more fully available, could be quite useful for examining the flow of illegal aliens across the border. For example, using IDENT data, INS could conduct a borderwide analysis of the number of individuals arrested attempting illegal entry; the number of times they have been arrested; and how these numbers have changed over time and by location.
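Both kinds of indicators described above reduce to simple arithmetic. The Python sketch below uses entirely hypothetical figures and a made-up record format (this report does not describe IDENT's actual data layout) to illustrate, first, the corridor effectiveness ratio and, second, the kind of borderwide distinct-individual and repeat-arrest counts that IDENT data could support:

```python
from collections import Counter

def effectiveness_ratio(apprehended, turned_back, estimated_entries):
    """Corridor effectiveness as the Border Patrol defines it:
    (aliens arrested + aliens observed turning back) divided by the
    estimated number of attempted illegal entries in the period."""
    return (apprehended + turned_back) / estimated_entries

# Hypothetical corridor figures for one reporting period (not INS data).
ratio = effectiveness_ratio(apprehended=1800, turned_back=450,
                            estimated_entries=3000)
print(f"corridor effectiveness: {ratio:.0%}")

# An IDENT-style borderwide analysis over a hypothetical log of
# (fingerprint_id, sector) apprehension events: the same person can
# appear many times, which raw apprehension totals cannot show.
apprehensions = [("A1", "Tucson"), ("A1", "Tucson"), ("A2", "El Centro"),
                 ("A3", "Tucson"), ("A1", "Del Rio")]
arrests_per_person = Counter(fid for fid, _ in apprehensions)
distinct_individuals = len(arrests_per_person)
repeat_offenders = sum(1 for n in arrests_per_person.values() if n > 1)
print(distinct_individuals, "individuals;", repeat_offenders, "arrested more than once")
```

Note the contrast the sketch makes concrete: the effectiveness ratio is meaningful only where the border is monitored well enough to estimate attempted entries, while the IDENT-style counts distinguish individuals from arrest events regardless of location.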
The results of such analyses could supplement the effectiveness ratios that INS currently calculates for GPRA reporting and could lead to a better understanding of the apprehension statistics that INS routinely reports. This is because the number of apprehensions—although frequently used as a proxy indicator for the magnitude of illegal alien traffic—provides information on INS arrests rather than on the number of different individuals arrested or the number of illegal aliens who eluded arrest by the Border Patrol. Analysis of the IDENT data offers the potential for better understanding the effects of INS' enforcement operations on shifts in illegal alien traffic and for statistically modeling the flow of illegal aliens across the border and their probability of apprehension. According to the Director of INS' Statistics Branch, the IDENT system is now at a point where meaningful analyses can be done for the period January 1998 to the present.

INS' border control efforts have resulted in some communities experiencing an unprecedented surge in illegal alien traffic. As shown in figure 4, apprehensions from fiscal year 1994 to 1998 increased more than tenfold in Calexico and more than doubled in Nogales. In fiscal year 2000, apprehensions in these locations declined even with the addition of new Border Patrol agents, although apprehensions were still higher than in fiscal year 1994. In Douglas and Yuma, apprehensions continued to increase in fiscal year 2000 compared with fiscal year 1994, with Douglas experiencing an eightfold increase and Yuma experiencing a nearly sixfold increase. Apprehensions in Brownsville, TX, peaked in fiscal year 1996 and have since been declining as border enforcement has increased there. According to INS officials, an increase in illegal alien traffic is more likely to occur in border communities that have the infrastructure—for example, roads and housing—that facilitates aliens transiting through them.
This is because aliens and alien smugglers use the network of roads leading to the border from Mexico as well as roads leading away from the border once in the United States (see fig. 5). Also, smugglers need towns that have sufficient housing available to hide aliens from authorities, as well as access to vehicles to transport the aliens out of the area.

The shift in illegal alien traffic to certain small border communities has had varied effects on the communities, depending on such factors as the routes illegal aliens used to transit through them; the level of Border Patrol presence in specific locations; how much barrier fencing was in place; and how the community perceived the situation. For example, in Calexico, a border town approximately 125 miles east of San Diego with about 27,000 residents, local police officials told us they noted a significant increase in prowler calls and vehicle thefts as illegal alien traffic shifted from San Diego to Calexico. However, according to police officials, there was a drop in reported prowler incidents and auto thefts after INS added resources and completed erecting a fence in downtown Calexico in 1999.

An official in Nogales, a border community of about 20,000 residents, told us that illegal immigration contributed to the city's crime rate. According to the Santa Cruz county attorney, before the Border Patrol increased its presence in the downtown area, thieves would frequently cross illegally into the United States and steal items they could carry back into Mexico. When the Border Patrol increased resources and enforcement operations and built a larger and less penetrable fence in the downtown area, thefts along the border dropped. The county attorney attributed a 64-percent decline in the number of felony filings against Mexican illegal aliens between 1998 and 2000 to INS' increased border control efforts.
The county attorney also attributed improved business conditions in Nogales to the Border Patrol’s efforts to deter illegal aliens from entering in the downtown area. As apprehensions in downtown Nogales dropped, many small shops that benefited from the illegal alien trade closed. With these small, locally-owned shops going out of business, large national retailers began to locate in Nogales. Community residents and legal entrants from Mexico could now shop locally instead of having to travel to Tucson. However, the county attorney also stated that crimes against illegal aliens have increased because the migrants are forced to attempt entry into the United States through remote areas outside town, where criminal activity is less likely to be detected and more difficult to respond to. These crimes are difficult to prosecute because they typically involve Mexican nationals harming other Mexican nationals. Cases are difficult to make and prove because assailants are seldom captured, crime scenes in remote areas are rarely located, and victims disappear. The Mexican Consul General in Nogales told us that there is strong evidence that some alien smugglers work in collusion with border bandits who prey on the illegal aliens. Officials in Douglas, a small border community with about 14,000 residents about 125 miles east of Nogales, also told us about both positive and negative effects of the strategy. According to a city official, the additional Border Patrol agents assigned to the Douglas area have had a positive effect on the local economy; many agents live and shop in the community, and tax revenues are increasing. Illegal immigration in the downtown area has decreased with the Border Patrol’s increased presence and additional fencing. Residents stopped encountering agents chasing groups of 30 to 40 illegal aliens through town. 
A key reported negative effect was that illegal aliens were diverted to the rural area on the city’s outskirts and began to cross over private ranchland as the Border Patrol increased enforcement in the downtown area. Ranchers living in these areas told us that they have incurred economic losses because illegal aliens transiting their property have torn their fences and stolen fencing material, which has allowed their livestock to get loose. The ranchers also said that their livestock have been killed, personal belongings stolen, and ranches littered with trash. The large number of illegal crossers has reportedly ruined some grazing fields. According to Border Patrol officials, the increase in illegal alien traffic has increased tensions in the Douglas community. Some residents have grown frustrated with the large influx of illegal aliens and begun making citizen’s arrests of illegal aliens while patrolling their property with loaded weapons; at least two aliens have been shot. According to the city manager, the negative national publicity Douglas has received as a result of the increase in illegal alien traffic may have long-term detrimental effects on economic development. He expressed concern that tourists might no longer want to visit; businesses might not want to locate there; and it might be more difficult recruiting professionals, such as teachers and physicians. City officials believe that negative publicity about illegal immigration—the perception that the town was unsafe—might have been a factor in a company’s decision not to relocate to Douglas. The business would have employed 250 people. According to one Tucson sector official, Border Patrol officials had anticipated that the illegal alien traffic would shift to Douglas as the sector began increasing enforcement in Nogales. However, the sector did not have enough agents to simultaneously build up its agent resources in both Nogales and Douglas. 
In Yuma, a city of about 77,000 in western Arizona, city officials told us that, unlike other border communities, the increase in apprehensions has not had negative effects on their community. They said that this was because illegal aliens use the town as a transit route to other parts of the United States and generally do not cross in populated areas. According to Border Patrol officials, most of the illegal alien apprehensions are made on the outskirts of the city on uninhabited public lands.

Brownsville is the largest city in the lower Rio Grande Valley, with a population of about 140,000. According to an evaluation, before Operation Rio Grande began in August 1997, illegal immigration was having a significant, negative impact on Brownsville. According to the study, citizens reported routinely watching 100 to 200 illegal aliens enter the United States by crossing the Rio Grande River from Mexico and passing through a local golf course. Citizens also reported being harassed by Mexican youths who crossed the border and posed as street performers while panhandling, hustling, or causing trouble in downtown Brownsville. Shopkeepers reported two or three shoplifting incidents a day and complained that certain illegal aliens harassed them and their customers. The evaluation also quoted a police official as saying that there were nearly daily occurrences in which citizens at a local park near the Rio Grande River were accosted and frequently robbed by illegal aliens.

As the Border Patrol increased its presence in the downtown area, the situation reportedly improved. According to the evaluation, as of January 2000, fewer illegal aliens attempted to enter the United States in Brownsville. Citizens reported that they were seeing about one alien a week crossing the river and passing through the golf course. Brownsville police and Border Patrol agents now take immediate action against illegal aliens posing as street performers.
The study reported that, according to a police official, shoplifting incidents dropped to about one per year, and the park near the river was again a safe recreational area for adults and children. As the strategy has unfolded, there has been an accumulation of knowledge and experience concerning (1) factors that can impede INS’ implementation of the strategy, (2) the importance of communications between INS and border communities, and (3) aliens’ determination to cross the border. Experience has indicated to INS that it cannot implement its border strategy at the pace that it originally anticipated. In March 1997, INS submitted a 5-year staffing plan to Congress covering fiscal years 1996 through 2000. According to the plan, INS was to bolster border control efforts along the Northern U.S. border and Gulf Coast beginning in fiscal year 1998 and continuing into fiscal year 2000. INS had planned to deploy between 245 and about 400 agents to sectors in these areas, but during these 3 years, INS added 47 agents to the Northern border and none to the Gulf Coast sectors. INS officials identified various factors as having impeded their ability to implement the strategy faster. According to a sector chief, a shortage of support personnel has required him to use Border Patrol agents for jobs that should be performed by support staff. In this sector, agents who would otherwise be patrolling the border are used instead to monitor remote video surveillance cameras because the sector does not have enough Law Enforcement Communications Assistants. According to a Western Region Border Patrol official, this is a problem in many sectors. Agents are doing work, such as building fences, monitoring sensors, and performing dispatching duties, that could be done by support personnel. This has detracted from INS’ goal of increasing the amount of time Border Patrol agents spend on their core activity of patrolling the border. 
According to INS budget officials, INS has requested funds for additional support personnel, but these positions have not been fully funded. Border Patrol officials also identified a lack of technology, fencing, and lights as having impeded their ability to implement the strategy faster. According to officials in one sector, additional remote video surveillance systems, lighting, and fencing would allow them to monitor a greater portion of their border area than is now possible. According to a Border Patrol headquarters official, the deployment of technology, fencing, and lights has been slower than anticipated because it has taken longer than planned to prepare environmental impact assessments and coordinate with other federal, state, and local agencies. Also, the Border Patrol has had to build new stations to house the increased number of agents. According to a Border Patrol official, construction funding for fencing has been limited by the competing need to build new stations.

After several instances in which border communities expressed dismay at having been caught unaware by the sudden increase in illegal alien traffic, INS recognized the need to establish channels of communication to discuss the potential implications of its strategy with local communities. Officials from border communities, such as Nogales and Douglas, AZ, told us that they were unaware that INS even had a strategy until they saw a dramatic increase in illegal alien traffic in their towns. A Douglas city official told us he first became aware that something was going on when the Border Patrol began building a fence in the downtown area. Local officials became increasingly concerned when they learned that the Border Patrol was transporting aliens apprehended in Nogales to Douglas and returning them to Mexico through the port of entry there. Many would then try to reenter at Douglas.
According to local officials, had they known about the strategy and its potential impact, they might have been able to do some things to mitigate its impact on the community. For example, a Douglas police official said that the department could have rearranged shift schedules to have more police on duty to respond to the increase in prowler calls and provide more support to Border Patrol agents needing assistance. He also said that the city could have strengthened city taxicab ordinances to prevent alien smugglers from establishing "taxi companies" to shuttle illegal aliens to Phoenix and other locations; officials said that, almost overnight, Douglas went from having 2 or 3 cabs to between 20 and 40 taxi companies.

Pima County, AZ, officials told us that the Border Patrol should have put local jurisdictions "on notice" regarding its strategy. They said this would have helped local officials respond to constituent questions and concerns. It would also have allowed time for local governments to try to obtain additional funding to deal with the expected influx, for example, by adding more law enforcement. The officials added that if they had been forewarned, they might have requested the Border Patrol to deploy additional agents to certain areas to mitigate the destruction of the pristine areas in the wildlife refuge.

INS has recognized the need to increase communications with the public regarding the strategy and its potential implications. According to INS' fiscal year 2000 Annual Performance Plan, one of INS' major goals was to improve INS' involvement with communities in the development and implementation of INS operations. To improve communications with the community, the Tucson sector appointed a full-time community relations officer in November 2000. The sector also has a community advisory group made up of local citizens in each of three cities: Nogales, Douglas, and Naco, AZ.
Members of the Douglas group told us they find these meetings helpful and that the Border Patrol has been responsive to their concerns. Since 1999, the sector has had a toll-free number to improve communications with local residents. Agents assigned to the sector’s “ranch patrol” monitor the private ranchland surrounding the city of Douglas, where many aliens now cross. According to Border Patrol officials, the Del Rio sector, and in particular the Eagle Pass, TX, area may be the next location to experience a significant increase in illegal alien traffic. They believe this because, like the other areas that have experienced significant increases in illegal alien traffic, it has the infrastructure of roads leading to and from the border area that alien smugglers need to transport the illegal aliens. The Del Rio sector chief believes the sector is better prepared than were other sectors, such as Tucson and El Centro, when they experienced significant increases in alien traffic. In February 2001, the Del Rio sector had slightly over 1,000 agents, of whom about 300 were assigned to the Eagle Pass station. The sector recently received airboats to patrol the Rio Grande River and additional lights and remote video surveillance systems to better monitor the border. The sector chief told us that he has been conducting community outreach efforts for several years to inform the community about INS’ strategy. He said the sector has a Rancher Liaison Program that informs and educates the community about Border Patrol activities and operations. This program has, according to the chief, opened channels of communication between the community and the Border Patrol that have helped the sector gain access to private lands. He believes working with ranchers and the public helps reduce the potential for violence between the citizens and illegal aliens as well as the negative publicity that can befall a community because of significant increases in illegal immigration. 
According to the Police Chief of Eagle Pass, the Del Rio sector began its outreach efforts several years ago. For example, after Operation Rio Grande began in the summer of 1997, Border Patrol sector officials gave a briefing to the Eagle Pass City Council on INS’ Southwest border strategy. They explained that increased enforcement in locations south of Eagle Pass and the ongoing enforcement in El Paso to the north might increase the illegal alien traffic in Eagle Pass. The police chief stated that since then, the Del Rio Border Patrol sector chief has given numerous presentations before community organizations, such as the local Rotary Club. He stated that such outreach efforts have kept the lines of communication open, and the city has not experienced any instances of citizens detaining illegal aliens as has occurred in other locations along the border. The strategy assumed that as the urban areas were controlled, the traffic would shift to more remote areas where the Border Patrol would be able to more easily detect and apprehend aliens entering illegally. The strategy also assumed that natural barriers such as rivers, mountains, and the harsh terrain of the desert would act as deterrents to illegal entry. However, INS officials told us that as the traffic shifted, they did not anticipate the sizable number that would still attempt to enter through these harsh environments. A study of migrant deaths along the Southwest border concluded that while migrants have always faced danger crossing the border and many died before INS began its strategy, the strategy has resulted in an increase in deaths from exposure to either heat or cold. Border Patrol data indicated that 1,013 migrants died trying to cross the Southwest border illegally between October 1997 and June 1, 2001 (see table 2). Nearly 60 percent died from either heat exposure or drowning. 
To reduce the number of illegal aliens who die or are injured trying to cross the border illegally, INS began a Border Safety Initiative in June 1998. The initiative focuses on (1) educating those who may be contemplating crossing illegally on the dangers of crossing and (2) searching for and rescuing those who may become abandoned or lost. Working in conjunction with the Mexican government, INS has produced public service announcements that are shown on television in Mexico to warn people of the dangers of crossing—for example, exposure to heat and cold, dehydration, snakes, and bandits who rob and assault those who cross in remote areas. Border Patrol sectors show detained aliens a similar video announcement. Signs have been posted on both sides of border fences in various locations that also warn about the dangers of crossing. Toll-free numbers in both Mexico and the United States can be used to report migrants in trouble.

The Border Patrol has created special search-and-rescue units in areas where it is becoming more dangerous to cross. For example, the El Centro sector has a desert rescue team whose members have been trained in emergency medical procedures or first aid. The team uses a desert rescue ambulance equipped with water and lifesaving equipment. To deter crossings, El Centro agents are positioned, and high-powered lights have been installed, at dangerous crossings along the All American Canal, which runs along the border. The sector's air unit flies along the canal and in desert areas to search for those who may be in danger.

According to the Border Patrol's Border Safety Initiative coordinator, most of the border safety-related expenses, such as agent time and acquisition and maintenance of equipment, have been funded out of Border Patrol general operations funds. Therefore, detailed cost data for all safety-related costs were not readily available.
According to the coordinator, in fiscal years 1998 through 2001, INS will have spent about $1 million primarily for public service announcements, signs, mapping potential danger areas, and liaison with Mexican counterparts. For fiscal year 2002, INS’ proposed border safety budget is $1.5 million. As shown in figure 6, there was a significant increase in Border Patrol rescues of migrants from 1999 to 2000. The Border Patrol has also given search-and-rescue training to Mexican law enforcement officials. In June 2001, a joint U.S.-Mexico safety conference was held in San Antonio, TX. Another aspect of the initiative is to identify and prosecute alien smugglers who use dangerous smuggling practices. The Border Patrol has established procedures for identifying such smugglers to facilitate coordinated efforts to target them for arrest and prosecution. According to INS’ year-end review of its fiscal year 2000 Annual Performance Plan, apprehending and prosecuting the smugglers will require full cooperation from Mexico. The Border Patrol has incorporated the issue of border safety into its overall strategy. In November 2000, the Border Patrol issued a Border Safety addendum to the strategy that emphasizes the need to incorporate safety issues into any future operations. On June 22, 2001, the United States and Mexico announced plans to enhance border safety in the wake of the death of 14 undocumented aliens in the Arizona desert in May 2001. The plans call for the United States and Mexico to strengthen the public safety campaign to alert potential migrants of the dangers of crossing the border in high-risk areas; reinforce plans for the protection and search and rescue of migrants, including increased aerial surveillance of the U.S. side and increased presence of Mexican law enforcement on the Mexican side; and implement a cooperative, comprehensive, and aggressive plan to combat and dismantle alien smuggling organizations. 
INS has spent 7 years implementing its Southwest border strategy, but it may take INS up to a decade longer to fully implement the strategy. This assumes that INS obtains the level of staff, technology, equipment, and fencing it believes it needs to control the Southwest border. Although illegal alien apprehensions have shifted, there is no clear indication that overall illegal entry into the United States along the Southwest border has declined. INS’ current efforts to measure the effectiveness of its border control efforts could be enhanced by analyzing data in its IDENT system. These data offer INS an opportunity to develop additional performance indicators that could be incorporated into its Annual Performance Plan review process and could help INS assess whether its border control efforts are associated with an overall reduction in the flow of illegal aliens across the border. Borderwide analysis of the IDENT data could be used to address several important questions related to illegal entry. The strategy’s impact on local communities has been affected by the timing of INS’ infusion of agent and other resources intended to protect the local community from a surge in illegal alien traffic; what routes the illegal aliens have used in crossing the border; and INS’ involvement with the community. INS has learned the importance of outreach efforts in attempting to mitigate the potential negative effects the strategy can cause a community and the harm that can befall illegal aliens who risk injury and death to cross the border. To better gauge the effects of its border control efforts, we recommend that the INS Commissioner develop specific performance indicators using the IDENT data and incorporate these indicators into INS’ Annual Performance Plan. We requested comments on a draft of this report from the Attorney General. 
In a letter dated July 24, 2001, which we have reprinted in appendix II, INS’ Executive Associate Commissioner for Field Operations concurred with our recommendation and said that INS will begin developing specific performance indicators using IDENT data. However, he also stated that “INS will continue to evaluate the use of IDENT data for analyzing shifts in illegal alien traffic,” and a “Congressional moratorium on the deployment of new IDENT sites, as well as efforts to integrate IDENT [with the automated fingerprint system used by the Federal Bureau of Investigation], have an operational impact that delays comprehensive data collection along the southwest border.” We believe that IDENT, which has been incrementally deployed to all Border Patrol stations along the Southwest border since 1995, already contains data that could be used to determine the number of aliens Border Patrol agents have arrested between ports of entry, how many times they have been arrested trying to enter illegally, and what shifts in illegal entry attempts between ports of entry have occurred over time along the Southwest border. Therefore, while future improvements to the collection of fingerprint data will be useful, we believe that the IDENT data currently available puts INS in the position to develop the types of performance measures discussed in our report and to use the measures to gain a better understanding of the results of its enforcement efforts. INS’ Executive Associate Commissioner also stated that the long-term resource requirements we refer to in our report are based on preliminary information and are subject to change. He indicated that further discussions among INS, the Department of Justice, and the administration are needed to finalize the requirements. We have added wording to our report to clarify that INS’ estimates of its long-term resource requirements are preliminary and subject to change. 
We are sending copies of this report to the Attorney General; Commissioner of the Immigration and Naturalization Service; Director, Office of Management and Budget; and other interested parties. Copies of this report will also be made available to others upon request. If you or your staff have any questions concerning this report, please contact me or Evi Rezmovic on (202) 512-8777. Michael P. Dino, James R. Bancroft, and Brian J. Lipman made key contributions to this report.
The Clinger-Cohen Act of 1996 requires OMB to establish processes to analyze, track, and evaluate the risks and results of major capital investments in information systems made by federal agencies and report to Congress on the net program performance benefits achieved as a result of these IT investments. Further, the act places responsibility for managing investments with the heads of agencies and establishes CIOs to advise and assist agency heads in carrying out this responsibility. OMB established the Management Watch List in 2003 to help carry out its oversight role. The Management Watch List included mission-critical projects that needed to improve performance measures, project management, IT security, or overall justification for inclusion in the President’s budget submission. Further, in August 2005, OMB established a High-Risk List, which consisted of projects identified by federal agencies, with OMB’s input, as requiring special attention from oversight authorities and the highest levels of agency management. Between 2005 and 2009, OMB described its efforts to monitor and manage risky federal IT investments in the annual budget submission. Over the past several years, we have reported and testified on OMB’s initiatives to highlight troubled IT projects, justify investments, and use project management tools. For instance, in 2006 we recommended that OMB develop a single aggregated list of high-risk projects and their deficiencies and use that list to report to Congress on progress made in correcting high-risk problems. As a result, OMB started publicly releasing aggregate data on its Management Watch List and disclosing the projects’ deficiencies. Moreover, between 2007 and 2009, the President’s budget submission included an overview of investment performance over several budget years, including the number of federal IT projects in need of management attention. 
Such information helped Congress stay better informed of high-risk projects and make related funding decisions. With the advent of its IT Dashboard in 2009, OMB discontinued this type of reporting in the fiscal year 2010 budget submission. OMB established the Dashboard to allow OMB; other oversight bodies, including Congress; and the general public to hold government agencies accountable for progress and results. OMB reported on plans and implementation progress for this management tool in the “Analytical Perspectives” section of the President’s budget submissions for fiscal years 2012 and 2013, including planned updates to the Dashboard during 2012 to support closer executive oversight and intervention to prevent schedule delays, cost overruns, and failures in delivering key functionality needed by federal programs. For example, it reported using the Dashboard to identify investments for TechStat reviews. The Dashboard visually presents performance ratings for agencies overall and for individual investments using metrics that OMB has defined—cost, schedule, and CIO evaluation. The website also provides the capability to download certain data. Figure 1 is an example of an agency’s (OPM) portfolio page as recently depicted on the Dashboard. The Dashboard’s data span the period from June 2009 to the present and are based, in part, on each agency’s exhibit 53 and exhibit 300 submissions to OMB, as well as on agency assessments and supporting information on each investment. Over the life of the Dashboard, OMB has issued guidance to agencies on, among other things, what data to report, how those data need to be structured and formatted for upload to the Dashboard, and procedures for using the Dashboard’s submission tools. For instance, OMB instructed agencies to update and submit investment cost and schedule data monthly.
OMB has made various changes to the organization, available data, and features of the Dashboard over time, including improvements to Dashboard calculations to incorporate the variance of “in progress” milestones rather than just “completed” milestones; web pages containing data on historical ratings and rebaselines of eliminated and downgraded investments; added data on awarded contracts, with links to USAspending.gov; release of IT Dashboard source code and documentation to an open source hosting provider; enhancements to baseline history, which give users the ability to see field-by-field changes for each rebaseline; a mechanism for OMB analysts to provide feedback to agencies; and mobile-friendly formatting of Dashboard displays. Once OMB has received agency-reported investment data, it converts these data into investment performance ratings for display on the Dashboard according to calculations and protocols described on its website. OMB assigns cost and schedule performance ratings by using data submitted by agencies to calculate variances between the planned cost or schedule targets and the actual or projected cost or schedule values. OMB converts these variances to percentages and assigns the ratings to be presented on the Dashboard within three ranges, red, yellow, and green, as shown in table 1. Although the thresholds for assigning cost and schedule variance ratings have remained constant over the life of the Dashboard, the cost and schedule data agencies are required to submit have changed in several ways, as have the variance calculations. For example, in response to our recommendations (further discussed in the next section), OMB changed how the Dashboard calculates the cost and schedule ratings in July 2010, to include “in progress” milestones rather than just “completed” ones for a more accurate reflection of current investment status.
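The variance-to-rating conversion described above can be sketched in a few lines of code. The 10 and 30 percent cutoffs below are illustrative assumptions standing in for the actual thresholds in table 1, and the function names are hypothetical:

```python
# Illustrative sketch of converting a planned-versus-actual variance into a
# red/yellow/green Dashboard rating. The 10% and 30% thresholds are assumed
# for illustration; the real cutoffs are those shown in table 1.

def variance_percent(planned: float, actual: float) -> float:
    """Variance between planned and actual values, as a percentage of plan."""
    return abs(actual - planned) / planned * 100


def rating_color(variance_pct: float) -> str:
    """Map a variance percentage into one of three rating bands."""
    if variance_pct < 10:
        return "green"
    if variance_pct < 30:
        return "yellow"
    return "red"


# Example: an investment planned at $100 million now projecting $115 million
# has a 15 percent variance, which falls in the middle (yellow) band.
color = rating_color(variance_percent(100.0, 115.0))
```

A rating computed this way reflects only the size of the variance, which is why (as discussed later in this report) OMB also had to decide which milestones, ongoing or completed, feed the calculation.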
We have previously reported that OMB has taken significant steps to enhance the oversight, transparency, and accountability of federal IT investments by creating its IT Dashboard, and by improving the accuracy of investment ratings. We also found issues with the accuracy and reliability of cost and schedule data, and recommended steps that OMB should take to improve these data. In July 2010, we reported that the cost and schedule ratings on OMB’s Dashboard were not always accurate for the investments we reviewed, because these ratings did not take into consideration current performance. As a result, the ratings were based on outdated information. We recommended that OMB report on its planned changes to the Dashboard to improve the accuracy of performance information and provide guidance to agencies to standardize milestone reporting. OMB agreed with our recommendations and, as a result, updated the Dashboard’s cost and schedule calculations to include both ongoing and completed activities. Similarly, in March 2011, we reported that OMB had initiated several efforts to increase the Dashboard’s value as an oversight tool, and had used its data to improve federal IT management. We also reported, however, that agency practices and the Dashboard’s calculations contributed to inaccuracies in the reported investment performance data. For instance, we found missing data submissions or erroneous data at each of the five agencies we reviewed, along with instances of inconsistent program baselines and unreliable source data. As a result, we recommended that the agencies take steps to improve the accuracy and reliability of their Dashboard information, and that OMB improve how it rates investments relative to current performance and schedule variance. Most agencies generally concurred with our recommendations; OMB agreed with our recommendation for improving ratings for schedule variance.
It disagreed with our recommendation to improve how it reflects current performance in cost and schedule ratings, but more recently made changes to Dashboard calculations to address this while also noting challenges in comprehensively evaluating cost and schedule data for these investments. More recently, in November 2011, we reported that investment cost and schedule ratings had improved since our July 2010 report because OMB had refined the Dashboard’s cost and schedule calculations. Most of the ratings for the eight investments we reviewed were accurate, although we noted that more could be done to inform oversight and decision making by emphasizing recent performance in the ratings. We recommended that the General Services Administration comply with OMB’s guidance for updating its ratings when new information becomes available (including when investments are rebaselined), and the agency concurred. Since we previously recommended that OMB improve how it rates investments, we did not make any further recommendations (GAO-12-210). In contrast to the cost and schedule ratings, the CIO rating translates each agency CIO’s assessment of an investment into a color for depiction on the Dashboard. An OMB staff member from the Office of E-Government and Information Technology noted that the CIO rating should be a current assessment of future performance based on historical results and is the only Dashboard performance indicator that has been defined and produced the same way since the Dashboard’s inception. According to OMB’s instructions, a CIO rating should reflect the level of risk facing an investment on a scale from 1 (high risk) to 5 (low risk) relative to that investment’s ability to accomplish its goals. Each agency CIO is to assess his or her IT investments against a set of six preestablished evaluation factors identified by OMB (shown in table 2) and then assign a rating of 1 to 5 based on his or her best judgment of the level of risk facing the investment.
According to an OMB staff member, agency CIOs are responsible for determining appropriate thresholds for the risk levels and for applying them to investments when assigning CIO ratings. OMB recommends that CIOs consult with appropriate stakeholders in making their evaluation, including Chief Acquisition Officers, program managers, and other interested parties. Ultimately, CIO ratings are assigned colors for presentation on the Dashboard, according to the five-point rating scale, as illustrated in table 3. OMB has made the CIO’s evaluation and rating a key component of its larger IT Reform Initiative and 25 Point Plan. In its plan, OMB reported that it used agencies’ CIO ratings to select investments for the TechStat review sessions it conducted between 2010 and 2011. These sessions are data-driven assessments of IT investments by agency leaders that are intended to result in concrete action to improve performance. OMB reported that the TechStats it conducted on selected investments resulted in approximately $3 billion in reduced costs. Building on the results of those sessions, the plan articulates a strategy for strengthening IT governance, in part, through the adoption of the TechStat model by federal agencies. In conducting TechStats, agencies are to rely, in part, on CIO ratings from the IT Dashboard. The TechStat Toolkit, developed by OMB and a task force of agency leads, provides sample questions regarding an investment’s CIO rating and associated risks for use in TechStat sessions. Furthermore, OMB issued guidance in August 2011 that stated, among other things, that agency CIOs shall be held accountable for the performance of IT program managers based on their governance process and the data reported on the IT Dashboard, which includes the CIO rating. According to OMB, the addition of CIO names and photos on Dashboard investments is intended to highlight this accountability and link it to the Dashboard’s reporting on investment performance.
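The five-point CIO rating scale and its color presentation can be sketched as follows. The specific rating-to-color assignments (ratings 1 and 2 shown red, 3 yellow, 4 and 5 green) are assumptions about the table 3 scale, made for illustration only:

```python
# Hypothetical sketch of the five-point CIO rating scale: 1 (high risk)
# through 5 (low risk), each rating shown on the Dashboard as a color.
# The rating-to-color assignment below is an assumption, not table 3 itself.

RISK_LABELS = {
    1: "high risk",
    2: "moderately high risk",
    3: "medium risk",
    4: "moderately low risk",
    5: "low risk",
}


def cio_rating_color(rating: int) -> str:
    """Translate a CIO's 1-5 risk rating into a Dashboard display color."""
    if rating not in RISK_LABELS:
        raise ValueError("CIO ratings run from 1 (high risk) to 5 (low risk)")
    if rating <= 2:
        return "red"
    if rating == 3:
        return "yellow"
    return "green"
```

Note that, unlike the cost and schedule ratings, this value is not calculated from submitted data; it encodes the CIO's own judgment, so the same color can reflect very different underlying evidence at different agencies.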
Figure 2 illustrates the CIO rating information presented on the Dashboard for an example IT investment. As of March 2012, CIO ratings for most investments listed on the Dashboard for the six agencies we reviewed indicated either low risk or moderately low risk (223 out of 313 investments across all the selected agencies). High risk or moderately high risk ratings were assigned to fewer investments (12 out of 313 investments across all the selected agencies). Figure 3 presents the total number of IT investments rated on the Dashboard for each of the selected agencies according to their risk levels, as of March 2012, and illustrates the predominance of low risk investments for the agencies in our review. The figure also reports agencies’ budgets for their major IT investments for fiscal year 2012, as presented on the Dashboard. Historically, over the life of the Dashboard from June 2009 to March 2012, low or moderately low risk ratings accounted for at least 66 percent of all ratings at five of the six agencies (the exception is DHS with 51 percent). Medium risk ratings accounted for between 0 and 38 percent of all reported ratings across agencies during this period. The maximum percentage of ratings in the high risk or moderately high risk categories for any agency during this 34-month period was 12 percent, with two agencies—DOD and NSF—reporting no high risk investments. DOD stated in written comments that this was because it did not deem any of its investments to be high risk. (DOD’s investment risks are further discussed in the next section.) An NSF official from the Division of Information Systems stated that there were no high risk investments because most of NSF’s investments were in the operations and maintenance phase. Table 4 presents the average composition of ratings for each agency during the reporting period of June 2009 to March 2012. Appendix III depicts each agency’s CIO ratings by risk level on a monthly basis during the reporting period.
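Portfolio-level figures like those above (for example, 223 of 313 investments rated low or moderately low risk) are simple shares of a ratings tally. A minimal sketch of that tally, using made-up sample ratings rather than actual Dashboard data:

```python
# Sketch of the kind of tally behind figure 3 and table 4: count investments
# at each CIO risk level (1 = high risk, 5 = low risk) and compute each
# level's share of the portfolio. The sample ratings below are hypothetical.

from collections import Counter


def risk_distribution(ratings):
    """Return {risk level: percent of investments} for a list of 1-5 ratings."""
    counts = Counter(ratings)
    total = len(ratings)
    return {level: round(100 * counts[level] / total, 1) for level in range(1, 6)}


sample = [5, 5, 4, 4, 4, 3, 3, 2, 1, 5]  # hypothetical agency portfolio
dist = risk_distribution(sample)

# Share rated low or moderately low risk (ratings 4 and 5 combined):
low_share = dist[4] + dist[5]
```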
Overall, the CIO rating remained constant for 147 of 313 investments that were active as of March 2012 (about 47 percent of the investments we reviewed). These investments were rated at the same risk level during every rating period (see fig. 4). Four of the six agencies did not change the CIO rating for a majority of their investments (excluding any investments that were downgraded or eliminated) during the time frame we examined. In contrast, the other two agencies—OPM and HHS—changed the CIO rating for more than 70 percent of their investments at least once between the investment’s initial rating and the rating reported as of March 2012. Table 5 lists the number of each agency’s investments whose ratings were constant and whose ratings changed over time. Agencies offered several reasons why many investments had no changes in their CIO ratings during their entire time on the Dashboard. Five of the six selected agencies indicated that many investments were in a steady-state or operations and maintenance phase with no new development. One agency reported that its investments’ CIO ratings remained constant because the investments consistently met all requirements and deadlines and were using project management best practices. The agencies we reviewed showed mixed results in reducing the number of higher risk investments during the rating period. For investments whose rating changed at least once during the period, 40 percent (67 investments) received a lower risk rating in March 2012 than they received initially, 41 percent (68 investments) received a higher risk rating, and the remaining 19 percent (31 investments) received the same rating in March 2012 as they had initially received, despite whatever interim changes may have occurred (i.e., there was no “net” change to their reported risk levels). (See fig. 5.) Two agencies—DHS and OPM—reported more investments with reduced risk in March 2012, as compared with initial ratings.
The other four agencies reported more investments with increased risk. Table 6 presents net changes in risk levels at each of the selected agencies (among investments that were not downgraded or eliminated). Appendix III graphically summarizes these data for all six agencies. Agencies most commonly cited additional oversight or program reviews as factors that contributed to decreased risk levels. Specifically, agencies commented that the CIO ratings and Dashboard reporting had spurred improved program management and risk mitigation. For example, one agency’s officials commented that the CIO now closely monitors the monthly performance and risk data generated by their investments, and that the additional oversight has brought about strengthened processes and more focused attention to issues. In contrast, several agencies cited generally poor risk management at the investment level, the introduction of new investment/programs risks, as well as instances of poor project management as factors contributing to increased risk for investments. For example, one agency responded that internal review findings revealed new risks that caused an investment’s risk level to increase. Another agency’s officials reported that various technical issues caused one of their investments to fall behind schedule, thus increasing risk. Both OMB and several agencies suggested caution in interpreting changing risk levels for investments. They noted that an increase in an investment’s risk level can sometimes indicate better management by the program or CIO because previously unidentified risks have been assessed and included in the CIO evaluation. Conversely, a decrease in an investment’s risk level may not indicate improved management if the data and analysis on which the CIO rating are based is incomplete, inconsistent, or outdated. 
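The "net" change analysis described above compares each investment's first and latest CIO ratings and ignores interim movement. A sketch of that classification, using hypothetical investment names and rating histories:

```python
# Illustrative classification of net rating changes like those in table 6.
# CIO ratings run from 1 (high risk) to 5 (low risk), so a numerically
# higher latest rating means the investment's reported risk went down.
# Investment names and rating pairs below are hypothetical.

def net_change(first_rating: int, latest_rating: int) -> str:
    """Label the net movement between an investment's first and latest ratings."""
    if latest_rating > first_rating:
        return "reduced risk"
    if latest_rating < first_rating:
        return "increased risk"
    return "no net change"


history = {
    "Investment A": (3, 5),  # medium risk initially, low risk at the end
    "Investment B": (4, 2),  # moderately low risk initially, then worsened
    "Investment C": (2, 2),  # same first and latest rating
}
summary = {name: net_change(first, last) for name, (first, last) in history.items()}
```

As the report cautions, the label alone is ambiguous: "increased risk" can mean newly surfaced risks were honestly assessed, and "reduced risk" can rest on incomplete data, so the ratings need context before being read as management performance.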
Further analysis of the characteristics and causes of the Dashboard’s CIO ratings, and reporting on the patterns of risk within and among agencies, could provide Congress and the public with additional perspectives on federal IT investment risk over time. However, for the past four budget submissions, OMB has not summarized the extent of risk represented by major federal IT investments in the analysis it prepares annually for the President’s budget submission, as it did prior to the fiscal year 2010 submission. As a result, OMB is missing an opportunity to integrate such risk assessments into its evaluation of major capital investments in reporting to Congress. OMB has provided agencies with instructions for assigning CIO ratings for the major IT investments reported on the Dashboard. Specifically, OMB’s instructions state that agency CIOs should rate each investment based on his or her best judgment and should include input from stakeholders, such as Chief Acquisition Officers, program managers, and others; update the rating as soon as new information becomes available that might affect the assessment of a given investment; and utilize OMB’s investment rating factors, including risk management, requirements management, contractor oversight, historical performance, and human capital, as well as any other factors deemed relevant by the CIO. Despite differences in the specific inputs and processes used, agencies generally followed OMB’s instructions for assigning CIO ratings. However, DOD’s ratings reflected additional considerations beyond OMB’s instructions and did not reflect available information about significant risks for certain investments. The sections that follow describe how each agency addressed OMB’s instructions. Include input from stakeholders. Each of the six agencies we reviewed relied on stakeholder input, at least in part, when assigning CIO ratings.
Agencies also cited a variety of review boards, data from program and financial systems, and other investment assessments as inputs to the rating. Table 7 describes the data and processes that agencies reported using when they derived their CIO ratings. Update CIO ratings. All six agencies established guidelines for periodically reviewing and updating their CIO ratings. Specifically, HHS, NSF, DOI, and OPM reported that they update CIO ratings on a monthly basis. DOD has adopted a quarterly update cycle, although an official noted that the actual process of collecting information and evaluating investments for the ratings takes slightly longer than 3 months. DHS officials with the Office of the CIO stated that the frequency of its updates varies based on the risk level of an investment’s previous rating: investments with a previous CIO rating of green are to be reviewed semiannually; yellow investments are to be reviewed quarterly; and red investments are to be reviewed monthly. Utilize OMB’s investment rating factors. Most of the selected agencies use OMB’s investment rating factors when evaluating their investments. Only one agency (HHS) does not use all of them. Specifically, an HHS official from the Office of the CIO told us that human capital issues are not explicitly covered in their CIO rating criteria because investment owners are to provide adequate IT human capital, and that these owners will reflect any issues that arise when providing input for the CIO rating. Among the agencies we reviewed, DOD was unique in that its ratings reflected additional considerations beyond OMB’s instructions. For example, briefing slides prepared for DOD’s 2011 CIO rating exercise identified the need to “balance” CIO ratings, and advised that yellow or red ratings could lead to an OMB review. 
In addition, DOD officials explained that the department rated investments green (or low risk) if the risk of the investment not meeting its performance goals is low; yellow (or medium risk) if the investment is facing difficulty; and red (high risk) only if the department planned to restructure or cancel the investment, or had already done so. DOD officials further stated that their CIO ratings provide a measured assessment of how DOD believes an investment will perform in the future. Although the CIO ratings submitted by DOD to the Dashboard are consistent with this ratings approach, they do not reflect other available information about the risk of these investments. As we previously noted, none of DOD’s investments that were active in March 2012 were rated as high risk, and approximately 85 percent were rated as either low risk or moderately low risk throughout their time on the Dashboard. However, these ratings did not always reflect significant schedule delays, cost increases, and other weaknesses identified for certain investments in our recent reviews, or problems with those investments identified in a recent report by the DOD Inspector General. Based on the department’s long-standing difficulties with such programs, we designated DOD business systems modernization as a high-risk area in 1995, and it remains a high-risk area today. More recently, we reported weaknesses in several of the department’s business system investments. Specifically, we reported that the department had not effectively ensured that these systems would deliver capabilities on time and within budget; that acquisition delays required extended funding for duplicative legacy systems; that delays and cost overruns were likely to erode the cost savings these systems were to provide; and that, ultimately, DOD’s management of these investments was putting the department’s transformation of business operations at risk.
Although the following selected examples of DOD investments experienced significant performance problems and were included with those considered to be high-risk business system investments in our recent reviews of those systems, they were all rated low risk or moderately low risk by the DOD CIO. Air Force’s Defense Enterprise Accounting and Management System (DEAMS): DEAMS is the Air Force’s target accounting system designed to provide accurate, reliable, and timely financial information. In early 2012, GAO reported that DEAMS faced a 2-year deployment delay and an estimated cost increase of about $500 million over an original life-cycle cost estimate of $1.1 billion (an increase of approximately 45 percent), and that assessments by DOD users had identified operational problems with the system, such as data accuracy issues, an inability to generate auditable financial reports, and the need for manual workarounds. In July 2012, the DOD Inspector General reported that the DEAMS schedule delays were likely to diminish the cost savings it was to provide, and would jeopardize the department’s goals for attaining an auditable financial statement. DOD’s CIO rated DEAMS low risk or moderately low risk from July 2009 through March 2012. Army’s General Fund Enterprise Business System (GFEBS): GFEBS is an Army financial management system intended to improve the timeliness and reliability of financial information and to support the department’s auditability goals. In early 2012, we reported that GFEBS faced a 10-month implementation delay, and that DOD users reported operational problems, including deficiencies in data accuracy and an inability to generate auditable financial reports. These concerns were reiterated by the DOD Inspector General in July 2012. DOD’s CIO rated GFEBS as moderately low risk from July 2009 through March 2012.
Army’s Global Combat Support System-Army (GCSS-Army): GCSS-Army is intended to improve the Army’s supply chain management capabilities and provide accurate equipment readiness status reports, among other things. In March 2012, we reported that GCSS-Army was experiencing a cost overrun of approximately $300 million on an original life-cycle cost estimate of $3.9 billion (an increase of approximately 8 percent) and a deployment delay of approximately 2 years. DOD rated GCSS-Army as low or moderately low risk from July 2009 through March 2012. Explanations submitted by DOD with the CIO ratings for these investments did not provide meaningful insight into why they were rated at the lowest risk levels in the face of known issues. DOD officials told us that they rated these investments as low risk because, in their view, the cost and schedule variances listed above did not constitute significant risks. Officials explained that (1) the cost variances were not that large compared to DOD’s overall size and large amount of IT spending; (2) the schedule variances needed to be understood in the context that the average DOD large-scale IT program takes 7 years (or 84 months) to implement; and (3) each of those programs had risk mitigation plans in place. However, the first two reasons are inconsistent with DOD’s own risk management guidance, which recommends that risks be assessed against the program’s own cost and schedule estimates, not against other department investments. In addition, completing risk mitigation plans does not necessarily lower investment risk. DOD’s guidance calls for implementing the mitigation plan and then reassessing resulting changes to the risk. Even if the department adopts these elements of its own guidance, the CIO’s evaluation will be incomplete unless it also reflects the assessments of investment performance and risks identified by us and others.
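The overrun percentages cited for DEAMS and GCSS-Army follow from simple arithmetic: the cost increase divided by the original life-cycle estimate. A quick check (figures in millions of dollars, taken from the reviews cited above):

```python
# Checking the cost-increase percentages cited for DEAMS and GCSS-Army:
# overrun divided by the original life-cycle cost estimate, in percent.

def overrun_percent(original_estimate: float, overrun: float) -> int:
    """Cost overrun as a rounded percentage of the original estimate."""
    return round(overrun / original_estimate * 100)


# DEAMS: roughly $500 million increase on a $1.1 billion estimate,
# i.e., approximately 45 percent.
deams = overrun_percent(1100, 500)

# GCSS-Army: roughly $300 million overrun on a $3.9 billion estimate,
# i.e., approximately 8 percent.
gcss = overrun_percent(3900, 300)
```

The point of the arithmetic is that these variances are large relative to each program's own baseline, which is the comparison DOD's risk management guidance calls for, rather than relative to DOD's overall IT spending.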
Until the department does so, CIO ratings for DOD’s Dashboard investments may not be sufficiently accurate or useful for its TechStat sessions or OMB’s management and oversight. Selected agencies identified various benefits associated with performing CIO ratings and Dashboard reporting in general. Almost all of the agencies (five of six) reported the following three benefits. Increased quality of investment performance data. For example, one agency also reported that the Dashboard has made information about investments more understandable. Greater transparency and visibility for CIOs and their staff into investment- and program-level performance data. One agency reported that its CIO was better able to conduct reviews with actual investment numbers, as opposed to self-reported data presented by the investment’s program managers. Agencies could also compare their investments’ ratings to those of other agencies and departments. Increased focus on project management practices. Two agencies reported improved investment performance as a direct result of their Dashboard rating and reporting activities; another stated that Dashboard reporting supported and reinforced their existing IT governance, capital planning, and program management processes. Some of these benefits were interrelated. Several agencies viewed the improved data quality as a by-product of greater scrutiny brought about by having to report such data to the Dashboard on a regular basis. One agency response noted that their program managers were surprised to see the extent to which investment data were visible to the public, and that this visibility motivated their staff to provide accurate and timely data (which has improved data quality). 
Another agency noted that the visibility of the IT Dashboard has increased awareness among investment and project managers about the need to improve the planning of project activities and the definition of operational performance metrics (which support program management). Nevertheless, agencies also identified challenges associated with producing and reporting CIO ratings. First, three agencies reported a challenge associated with the time and effort required to gather, validate, and gain internal approval for CIO ratings and other data reported to the Dashboard. For example, one agency reported that, due to the number of organizations involved and the number of investments being evaluated, it generally takes 90 to 120 days to develop and update its CIO ratings. The agency further reported that this effort was separate from (and in addition to) time it already spends on its own internal processes for managing and overseeing acquisition programs. Second, four of the six agencies identified challenges with the number of changes OMB has made to the Dashboard, as well as with the timeliness and clarity of OMB’s communication regarding those changes. For example, officials at one agency commented that the frequency of changes has actually hindered their efforts to improve data quality, since errors sometimes resulted when it adapted to changes required by OMB. Officials at another agency stated that OMB allowed insufficient time for agencies to test their systems’ interfaces with the Dashboard when changes were made, which they said resulted in data errors and challenges for staff. These officials also noted that OMB’s guidance for agency submissions has, at times, not matched the technical data schemas implemented by OMB, impeding agencies’ efforts to successfully upload their data. 
An OMB staff member commented that their office releases changes to the Dashboard as early in the fiscal year as possible to give agencies time to adjust, and that OMB announces planned changes to agencies, via the Dashboard's interagency web portal, before they are implemented. OMB has recently held meetings with agency officials to discuss these issues and determine ways to better communicate going forward. Finally, one agency responded that while monthly updates to the Dashboard have increased investment and project managers' attention to the performance of their investments and projects, this regular scrutiny could encourage investment and project managers to "perform to the test" rather than concentrate on effective investment and project management. However, based on the interrelationships of the benefits of CIO ratings identified by some agencies, the process of generating and reporting CIO ratings does not have to be just a grading exercise. As previously noted, the benefit of improved investment performance data for the CIO's investment evaluation can lead to more effective management, which could, in turn, improve investment performance. Executives and staff who can envision these results from the Dashboard's CIO evaluations may come to view the additional time and effort required to generate the CIO ratings not as a challenge but as an opportunity for more efficient and effective management. Since its inception in 2009, the Federal IT Dashboard has increased the transparency of the performance of major federal IT investments. Its CIO ratings, in particular, have improved visibility into changes in the risk levels of agencies' investments over time. 
Determining whether such changes represent improvements or deficiencies in management and oversight can be difficult without additional information on investment performance and the rating process, but analyzing and reporting the ratings for investments and agencies over time for the President’s budget submission could help OMB ensure that risk is accurately assessed and that patterns of risk deserving of special management attention are identified. DOD demonstrated one such pattern of interest in its CIO ratings. During the 34-month life of the Dashboard, none of the 87 investments that were active as of March 2012 were rated high risk or moderately high risk, and approximately 85 percent of ratings were low risk or moderately low risk. Although DOD implemented OMB’s broad instructions for producing CIO ratings, it also considered how the ratings might increase the likelihood of an OMB review of an investment and minimized the effects of significant schedule delays and cost increases, which were identified in our reviews and those of DOD’s Inspector General. As a result, DOD is masking significant investment risks, has not employed its own risk management guidance, and has not delivered the transparency intended by the Dashboard. By incorporating the results of external reviews into its evaluations, DOD can further improve the quality of the information on which investment risk ratings are based. Beyond the transparency they promote, CIO ratings present an opportunity to improve the data and processes agencies use to assess investment risk. Some agencies have already experienced collateral benefits and management results from their risk evaluations. Continuing focus from OMB and agencies on how to accurately portray and derive value from the ratings and the associated processes could enable agencies to experience such benefits. 
To ensure that OMB’s preparation of the President’s budget submission accurately reflects the risks associated with all major IT investments, we are recommending that the Federal CIO analyze agency trends reflected in Dashboard CIO ratings, and present the results of this analysis with the President’s annual budget submission. To ensure that DOD’s CIO evaluations of investment risk for its major IT Dashboard investments reflect all available performance assessments and are consistent with the department’s own guidance for managing risk, we are recommending that the Secretary of Defense direct the department’s CIO to reassess the department’s considerations for assigning CIO risk levels for Dashboard investments, including assessments of investment performance and risk from outside the programs, and apply the appropriate elements of the department’s risk management guidance to OMB’s evaluation factors in determining CIO ratings. We provided a draft of our report to the six agencies selected for our review and to OMB. In oral comments, staff from OMB’s Office of E- Government & Information Technology stated that OMB concurred with our recommendation that the Federal CIO analyze agency trends reflected in Dashboard CIO ratings and present the results of this analysis with the President’s annual budget submission. OMB staff also provided technical comments, which we incorporated as appropriate. In a written response, DOD’s Deputy Chief Information Officer for Information Enterprise agreed with our recommendation that the department’s CIO reassess considerations for assigning CIO risk levels for Dashboard investments, and committed to updating the department’s CIO ratings process to better report risk and improve the timeliness and transparency of reporting. DOD’s written response is reprinted in Appendix IV. Officials at DOI provided technical comments, which we incorporated as appropriate. The remaining agencies had no comment on the draft report. 
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees; the Secretaries of Defense, Interior, Homeland Security, Health and Human Services, the Director of the National Science Foundation, the Director of the Office of Personnel Management, the Director of the Office of Management and Budget; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions on the matters discussed in this report, please contact David A. Powner at (202) 512-9286 or by e-mail at pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Our objectives were to (1) characterize the Chief Information Officer (CIO) ratings for selected federal agencies’ information technology (IT) investments as reported over time on the federal IT Dashboard; (2) determine how agencies’ approaches for assigning and updating CIO ratings vary; and (3) describe the benefits and challenges associated with agencies’ approaches to the CIO rating. To establish the scope of our review, we downloaded and examined data on total IT spending for fiscal year 2011 for the 27 agencies reported on the IT Dashboard. (The Office of Management and Budget (OMB) extracts these data based on exhibit 300 forms submitted by each agency.) We then selected six agencies that spanned a range of IT spending for fiscal year 2011, including the three highest spending agencies, two of the lowest, and an agency in the middle. Collectively, these agencies accounted for approximately $51 billion, or 65 percent, of 2011 spending on IT investments. 
The six agencies are the Department of Defense, Department of Homeland Security, Department of Health and Human Services, Department of the Interior, National Science Foundation, and Office of Personnel Management. The results in this report represent only these agencies. To address the first objective, we downloaded and examined the Dashboard’s CIO ratings for all investments at the six agencies we selected (a total of approximately 308 investments reported by these agencies). To characterize the numbers and percentages of major IT investments at each risk level at each of our subject agencies, we analyzed, summarized, and—where appropriate—graphically depicted average CIO ratings for investments by agencies over time during the period from June 2009 to March 2012. Specifically, we compared the CIO ratings in June 2009 (or whenever an individual investment was first rated) up through and including each investment’s rating as of March 2012 and summarized the data by agency. To describe whether CIO ratings indicated higher or lower investment risk over time, we calculated the numbers and percentages of investments (by agency and collectively for all the agencies) that maintained a constant rating over the entire performance period, and those that experienced a change to their CIO rating in at least one rating period. Then we analyzed the subset of investments that experienced at least one changed rating and compared the first CIO rating with the latest CIO rating (no later than March 2012) to determine the numbers and percentages of investments (by agency and collectively for all the agencies) that experienced a net rating increase, a net rating decrease, or no net change. We also examined the comments provided with the ratings to determine whether such comments were useful in understanding the ratings. 
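The net-change tally described above can be sketched in a few lines. This is a minimal illustration, assuming ratings are encoded numerically with higher values indicating lower risk (as on the Dashboard's CIO rating scale); the investment names and rating histories below are made up for the sketch, not GAO's actual data.

```python
from collections import Counter

def classify(ratings):
    """Classify one investment's CIO-rating history over time."""
    if len(set(ratings)) == 1:
        return "constant"                # same rating in every period
    if ratings[-1] > ratings[0]:
        return "net decrease in risk"    # rating rose toward low risk
    if ratings[-1] < ratings[0]:
        return "net increase in risk"    # rating fell toward high risk
    return "no net change"               # changed, but ended where it began

# Illustrative rating histories (first rating through March 2012);
# these investments and values are hypothetical.
investments = {
    "Investment A": [4, 4, 4, 4],
    "Investment B": [3, 2, 2, 4],
    "Investment C": [5, 4, 3, 3],
    "Investment D": [4, 3, 4, 4],
}

tally = Counter(classify(history) for history in investments.values())
for outcome in sorted(tally):
    print(outcome, tally[outcome])
```

Note that "no net change" is distinct from "constant": an investment whose rating moved during the period but returned to its starting level is counted separately from one whose rating never moved, mirroring the comparison of first and latest ratings described above.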
We presented our results to each agency and OMB and solicited their input, explanations for the results, and additional corroborating documentation, where appropriate. To address our second objective, we reviewed available documentation, obtained written responses to questions we posed to all agencies, and interviewed OMB and agency officials to determine their policies and practices related to assigning and updating the CIO ratings and related data for the Dashboard. Specifically, we gathered descriptions about the data, participants, and processes used to generate CIO ratings for investments; when and under what circumstances each agency updates its ratings; the specific factors agencies used in assigning their ratings; and the reason(s) for their approaches to assigning and reporting the ratings. We reviewed our results with agency officials to ensure that our presentation of their approach was accurate. In addition, we utilized our prior work and a report by the Department of Defense’s Office of the Inspector General related to the department’s major IT investments. We compared the findings in these reports to the CIO ratings the department submitted to the Dashboard for investments that had been rated consistently low or moderately low risk, and discussed our results with department officials. To address our third objective, we reviewed written and oral descriptions of the benefits and challenges that agencies and OMB have experienced in developing, submitting, updating, and utilizing CIO ratings. We sought specific examples, corroborating documentation, and causal factors, where available. After obtaining this information from individual agencies, we compared their responses to identify benefits and challenges common to multiple agencies and applied our judgment in determining whether any additional benefits or challenges were present. 
We conducted this performance audit from January 2012 to September 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The table below lists the total number of major information technology (IT) investments rated on the federal IT Dashboard as of March 2012 for each agency selected for this review, with the numbers of investments rated at each of the risk levels specified by the Office of Management and Budget (OMB) for the chief information officer (CIO) rating. The last line in the table reports each agency's total budget for fiscal year 2012 for its major IT investments, as also reported on the Dashboard in March 2012. This appendix provides additional information about chief information officer (CIO) ratings for major information technology (IT) investments at each of the agencies selected for this review. The first figure for each agency depicts the number of investments at each rating level for the end of each month, as reported on the federal IT Dashboard. The second figure depicts the number of investments whose risk level demonstrated a net increase, a net decrease, or no net change, or remained constant during the investment's entire time on the Dashboard. In addition to the contact name above, the following staff also made key contributions to this report: Paula Moore (Assistant Director), Neil Doherty, Lynn Espedido, Rebecca Eyler, Kate Feild, Dan Gordon, Andrew Stavisky, Sonya Vartivarian, Shawn Ward, Kevin Walsh, Jessica Waselkow, and Monique Williams. 
In June 2009, OMB launched the federal IT Dashboard, a public website that reports performance data for over 700 major IT investments that represent about $40 billion of the estimated $80 billion budgeted for IT in fiscal year 2012. The Dashboard is to provide transparency for these investments to aid public monitoring of government operations. It does so by reporting, among other things, how agency CIOs rate investment risk. GAO was asked to (1) characterize the CIO ratings for selected federal agencies' IT investments as reported over time on the Dashboard, (2) determine how agencies' approaches for assigning and updating CIO ratings vary, and (3) describe the benefits and challenges associated with agencies' approaches to the CIO rating. To do so, GAO selected six agencies spanning a range of 2011 IT spending levels and analyzed data reported for each of their investments on the Dashboard. GAO also interviewed agency officials and analyzed related documentation and written responses to questions about ratings and evaluation approaches, as well as agency views on the benefits and challenges related to the CIO rating. Chief Information Officers (CIO) at six federal agencies rated the majority of their information technology (IT) investments as low risk, and many ratings remained constant over time. Specifically, CIOs at the selected agencies rated a majority of investments listed on the federal IT Dashboard as low risk or moderately low risk from June 2009 through March 2012; at five of these agencies, these risk levels accounted for at least 66 percent of investments. These agencies also rated no more than 12 percent of their investments as high or moderately high risk, and two agencies (Department of Defense (DOD) and the National Science Foundation (NSF)) rated no investments at these risk levels. Over time, about 47 percent of the agencies' Dashboard investments received the same rating in every rating period. 
For ratings that changed, the Department of Homeland Security (DHS) and Office of Personnel Management (OPM) reported more investments with reduced risk when initial ratings were compared with those in March 2012; the other four agencies reported more investments with increased risk. In the past, the Office of Management and Budget (OMB) reported trends for risky IT investments needing management attention as part of its annual budget submission, but discontinued this reporting in fiscal year 2010. Agencies generally followed OMB's instructions for assigning CIO ratings, which included considering stakeholder input, updating ratings when new data become available, and applying OMB's six evaluation factors. DOD's ratings were unique in reflecting additional considerations, such as the likelihood of OMB review, and consequently DOD did not rate any of its investments as high risk. However, in selected cases, these ratings did not appropriately reflect significant cost, schedule, and performance issues reported by GAO and others. Moreover, DOD did not apply its own risk management guidance to the ratings, which reduces their value for investment management and oversight. Various benefits were associated with producing and reporting CIO ratings. Most agencies reported (1) increased quality of their performance data, (2) greater transparency and visibility of investments, and (3) increased focus on project management practices. Agencies also noted challenges, such as (1) the effort required to gather, validate, and gain internal approval for CIO ratings; and (2) obtaining information from OMB to execute required changes to the Dashboard. OMB has taken steps to improve its communications with agencies. 
GAO is recommending that OMB analyze agencies' investment risk over time as reflected in the Dashboard's CIO ratings and present its analysis with the President's annual budget submission, and that DOD ensure that its CIO ratings reflect available investment performance assessments and its risk management guidance. Both OMB and DOD concurred with GAO's recommendations.
As part of DOT, MARAD serves as the federal government's disposal agent for government-owned merchant vessels weighing 1,500 gross tons or more. MARAD's ship disposal program, in the Office of Ship Operations, is responsible for disposing of these vessels. Historically, MARAD has disposed of its obsolete ships primarily by selling them to overseas scrapping companies. From 1983 to 1994, MARAD scrapped over 200 vessels through overseas sales, which represented close to 100 percent of all of MARAD's scrapping activity. Ships were sold "as is/where is" to the highest bidder. The sale of vessels for overseas scrapping was curtailed in 1994 because of concerns raised by EPA about the presence of PCBs in various shipboard components. The Toxic Substances Control Act and EPA's implementing regulations govern the use of PCBs. According to MARAD, the act and EPA regulations limit MARAD's ability to export vessels for disposal without first removing regulated PCBs. Ship scrapping is also subject to other federal, state, and local government laws that are meant to protect the environment and ensure worker safety. In addition, overseas disposal can be more complicated and time consuming because it requires the involvement of foreign governmental agencies and is subject to additional laws related to exporting hazardous materials. Similarly, disposing of ships through artificial reefing requires coordination with several federal agencies. After overseas sales were curtailed in 1994 and halted in 1998, MARAD had little success in selling its obsolete ships domestically, leading to a backlog of ships awaiting disposal. At the same time, the fleet had several well-publicized leaks, which raised concerns about the risk of continued storage. The 2001 Authorization Act directed MARAD to dispose of these vessels ". . . in the manner that provides the best value to the Government, except in any case in which obtaining the best value would require towing a vessel and such towing poses a serious threat to the environment; and . . . through qualified scrapping facilities, using the most expeditious scrapping methodology and location practicable. Scrapping facilities shall be selected . . . on a best value basis consistent with the Federal Acquisition Regulation, as in effect on the date of the enactment of this Act, without any predisposition toward foreign or domestic facilities taking into consideration, among other things, the ability of facilities to scrap vessels— (1) at least cost to the Government; (2) in a timely manner; (3) giving consideration to worker safety and the environment; and (4) in a manner that minimizes the geographic distance that a vessel must be towed when towing a vessel poses a serious threat to the environment."

The 2001 Authorization Act also required MARAD, within 6 months of its enactment, to provide Congress with a report on its program for disposing of ships and subsequent progress reports every 6 months thereafter. As of September 2004, MARAD had submitted two reports to Congress. In its first report to Congress, issued in April 2001, MARAD stated that its primary goal for the ship scrapping program was to meet the statutory deadline. The report also provided a plan to dispose of all ships that MARAD expected would be in its inventory through the deadline. The report stated that MARAD would use fiscal years 2001 and 2002 to refine cost estimates specific to merchant-type ships and, from fiscal years 2003 through 2006, would dispose of 35 ships per year, mostly through domestic scrapping. MARAD, at that time, estimated that it would be able to scrap 140 ships at an average cost of $2.5 million per ship and donate or reef 15 ships by the 2006 deadline. 
Also, in accordance with the statute, MARAD developed milestone dates for the disposal of each ship and an approach that focused disposal efforts on its highest-priority ships (ships in the worst condition), considering the condition of the vessel hulls; the amount, type, and location of potential pollutants on board; and the vessel spill history. MARAD stated it recognized that the immediate threat that these high-priority ships posed at the sites, in all likelihood, would result in using the domestic scrapping industry in the near term, and stated it would continue to seek innovative solutions to the challenging issue of ship disposal. MARAD also stated that, while there was much scrapping capacity overseas, exporting ships was banned by the Toxic Substances Control Act because PCBs can be found in shipboard systems. The second report to Congress, issued in June 2002, indicated that MARAD no longer expected that it could meet the statutory deadline of September 30, 2006, if it used domestic scrapping as the predominant disposal method, because no funds had been appropriated for the program in fiscal year 2002 and the prospects for future funding were considered uncertain. The report also discussed a planned procurement method in addition to contracting by negotiation: the use of PRDAs. At the time the 2006 deadline was set, the reserve fleet consisted of 115 vessels designated as obsolete and available for disposal. Of these, 40 were considered high priority for disposal because of their deteriorated condition. MARAD projected that another 40 ships would enter the fleet, for a total of 155 ships that it expected would need disposal. These ships are located at MARAD's three anchorages: the James River Reserve Fleet (near Fort Eustis, Virginia), which holds the most ships and most of the highest-priority ships; the Beaumont Reserve Fleet (Beaumont, Texas); and the Suisun Bay Reserve Fleet (near Benicia, California). 
In disposing of its nonretention vessels, MARAD has usually had its excess ships dismantled, or scrapped—a labor-intensive approach that poses certain environmental and worker safety risks. Ships are normally dismantled from the top down and from one end to the other, using torches and/or shears to cut away large parts of the vessel. Cranes are often used to move larger metal pieces to the ground, where they can be cut into the shapes and sizes required by the foundry or smelter where the scrap will be sent. The scrapping process produces some products, such as steel and other metals, that can be sold to recyclers. Remediation of hazardous materials, such as asbestos, PCBs, lead, mercury, and cadmium, takes place before, as well as during, the dismantling process. If it is not done properly, ship scrapping can pollute the ground and water around the scrapping site and jeopardize the health and safety of the workers involved. The following figures illustrate various stages of the scrapping process. MARAD is unlikely to meet the statutory deadline of September 30, 2006, to dispose of its inventory of obsolete ships. Since October 2000, when Congress established the deadline, MARAD has disposed of only 18 ships, or about 12 percent of its inventory; more than 100 ships still need disposal. MARAD's current approach, which has resulted in an average of about 5 ships disposed of per year, has not been sufficient to meet the deadline. The ship disposal program's slow progress stems primarily from program leaders not establishing a comprehensive management approach that better focuses the program's efforts on meeting the many challenges the program faces in eliminating its inventory in a timely and efficient manner. 
Key elements necessary for effective program management that are missing or inadequate include (1) no integrated strategy or milestones for meeting the 2006 deadline; (2) no identification of funding resources needed to meet the 2006 deadline; (3) inadequate performance measures; (4) inadequate identification of the legal, regulatory, and environmental external impediments that could impact progress; (5) an inadequate formal decision-making framework; and (6) no formal program evaluations. In the absence of a comprehensive management approach that includes all of these key elements, MARAD's ship disposal program lacks the vision needed to sustain a long-term effort. MARAD has also not provided Congress with all of the required reports on the program's progress. As a result, MARAD has not been able to assure Congress that it can dispose of its obsolete ships in a timely way. Since October 2000, when Congress specifically authorized MARAD to pay for ship disposal services and set a September 30, 2006, deadline to dispose of all vessels, the agency has made slow progress toward achieving this goal. In 2000, MARAD reported that it had 115 ships in its inventory. Between October 2000 and September 2004, MARAD received 42 additional ships through transfers, bringing the total number of ships that needed to be disposed of to 157 (the beginning inventory of 115 plus 42 transfers). Of these 157 ships, 18 ships, or about 12 percent of the inventory (as of September 2004), had been disposed of, leaving 139 still in the inventory (see table 1). Of the remaining 139 ships, as of September 2004, MARAD had awarded contracts for the disposal of another 29, leaving 110 ships that were still awaiting disposal actions. The status of the 29 ships under contract is as follows: Twenty ships are waiting to be moved to a scrapping company. Of these, 9 are awaiting a court ruling on whether MARAD will be able to export them to the United Kingdom. 
Four ships have been towed but have not begun scrapping. These ships were towed in October 2003 to the United Kingdom, where they are waiting for a U.K. company to obtain the proper permits to scrap them. Five ships are either at scrapping facilities in the process of being dismantled or are en route to scrapping facilities. Of the ships that have been part of MARAD's inventory since October 2000, more than 40 have been designated as high priority for disposal because of their severely deteriorated condition. Ships in this category have had known holes in their underwater hulls that may or may not have been patched, and the potential for additional holes is considered to be moderate or high. Consequently, these ships are considered to pose the most immediate threat to the environment. Figure 4 shows (center) one of the high-priority ships in the James River Fleet awaiting disposal; MARAD awarded a contract to dispose of this ship in September 2004, but the ship had not yet been removed from the fleet. Based on the average disposal rate of about 5 ships per year, it is unlikely that MARAD will be able to dispose of the 110 obsolete ships that were in its September 2004 inventory by the 2006 deadline (assuming that all of the ships already under contract are disposed of by then). MARAD requested and received $21.6 million to dispose of 15 ships in the fiscal year 2005 budget cycle and requested an as-yet-unspecified amount in its fiscal year 2006 budget. At the same time, MARAD expects to receive up to 30 more obsolete ships through transfers during the next 2 years. As table 2 shows, we estimate that MARAD will likely have more than 100 obsolete ships in its inventory in September 2006. 
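The projection in this paragraph reduces to simple arithmetic. The sketch below replays the figures cited in the report, assuming transfers and disposals are spread evenly over the 2 years; the report's table 2 may model the flows differently.

```python
# Back-of-the-envelope check of the inventory projection, using figures
# cited in the report; the even yearly split of transfers and disposals
# is a simplifying assumption, not MARAD's actual table 2 model.

inventory_sept_2004 = 110   # ships awaiting disposal actions
expected_transfers = 30     # additional obsolete ships expected over 2 years
funded_rate = 15            # ships/year funded by the fiscal year 2005 appropriation
historical_rate = 5         # average ships disposed of per year since October 2000

at_funded_rate = inventory_sept_2004 + expected_transfers - 2 * funded_rate
at_historical_rate = inventory_sept_2004 + expected_transfers - 2 * historical_rate

print(at_funded_rate)      # 110: even the funded pace leaves over 100 ships
print(at_historical_rate)  # 130: the historical pace leaves still more
```

Either way, the inventory in September 2006 stays above 100 ships, consistent with the estimate attributed to table 2.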
Sound management principles, such as those embodied in the Government Performance and Results Act and used by leading organizations, include the need for developing approaches to meet program goals, measuring performance, identifying resource requirements, and reporting on the degree to which goals have been met. Combined with effective leadership, these elements provide decision makers with a framework to guide program efforts and the means to determine if these efforts are achieving the desired results. While MARAD has adhered to some of these principles, for example, by including the ship disposal program in its strategic and performance planning process, program leaders have not developed a comprehensive approach to better focus its efforts on overcoming the challenges related to eliminating its obsolete ship inventory in a timely manner. Key elements necessary for effective program management that are missing or inadequate include (1) an integrated strategy and milestones for meeting the 2006 deadline; (2) an identification of total resources needed to achieve the program’s goals; (3) measures of progress toward achieving the goals; (4) identification of external factors, particularly those related to legal, regulatory, and environmental issues, which could impact the program’s progress, and strategies to mitigate these factors; (5) a decision-making framework; and (6) an evaluation and corrective action plan. Although the 2001 Defense Authorization Act required MARAD to submit a progress report on the program to Congress within 1 year of its enactment and every 6 months thereafter, the agency had submitted only three reports through December 2004. The following discussion focuses on the key elements that are missing or inadequate in MARAD’s management approach. Integrated strategy to meet stated goals and milestones for completing ship disposal. 
Leading organizations have an integrated strategy that identifies program goals and specifies an approach and a timetable for completing the goals. While the program has a requirement to dispose of its entire inventory by September 2006 and MARAD has stated this requirement as one of its program goals, the program does not have a current strategy to achieve this requirement using the available disposal methods (e.g., domestic and overseas scrapping, sales, artificial reefing, deep-water sinking, or donations). In its 2001 report to Congress, MARAD proposed a general strategy for meeting the deadline by identifying 140 ships that could be scrapped through service contracts and 15 ships that could be disposed of by donations or artificial reefing. However, MARAD abandoned this approach in fiscal year 2002 when the program did not receive any funding, and since that time, it has not developed a new integrated strategy for disposing of all ships. Instead, MARAD officials told us that their current strategy consists of a market-based approach that is responsive to the current proposals made by interested parties. Based on these proposals, specific ships are matched to the various available disposal methods. These officials also stated that, in the rare instances where competing proposals exist for the same ship, MARAD makes a decision based on best value/best interest to the government. However, because this situation occurs infrequently and because the factors that need to be considered, such as cost and timing, vary greatly, MARAD does not document this decision-making process. Moreover, while MARAD has identified the expeditious disposal of high-priority ships (because of their poor condition) as another program goal, it has not always matched its planned disposal methods to this goal. 
For example, while past MARAD reports and briefings identified domestic disposal as the most expeditious method and indicated that this method would likely be used for some of its high-priority ships to minimize towing distance, MARAD awarded a ship disposal contract in 2003 to an overseas firm that included many high-priority ships, even though export was considered more complicated and time consuming. To determine the feasibility and advisability of exporting ships overseas for scrapping, in 2002, Congress directed MARAD to conduct at least one overseas pilot program for up to four ships. As part of this effort, MARAD worked with EPA to determine the circumstances under which the export of certain ships would be allowed. Subsequently, in 2003, MARAD awarded a scrapping services contract to a company in the United Kingdom covering 13 ships (9 more than the pilot program authorized), including 10 that were considered high priority for disposal. Prior to selecting the ships to be included in the overseas contract, MARAD did not determine the high-priority ships' suitability for being towed across the ocean. According to program documents, one of the high-priority ships initially proposed for contract inclusion sprang a leak just before the contract was signed and was replaced by another ship. Subsequently, a citizens' group lawsuit led to a U.S. court limiting the number of ships that MARAD could initially export to 4 (consistent with the number that would comprise the pilot program). MARAD selected 4 ships for export that were among the ships in the best condition included in the contract, in part, because they could be prepared for towing the quickest, according to a program official. As a result, 7 of the highest-priority ships included in the overseas contract remained at the fleet. 
In the following year, from June through September 2004, MARAD included these 7 high-priority ships (originally part of the contract to the overseas firm) in contract awards to domestic companies to hasten their departure from the fleet, since the court had not yet determined if these ships could be exported overseas. In addition, MARAD does not have specific milestones to dispose of its entire inventory of obsolete ships. MARAD’s 2001 progress report to Congress outlined an approach to meet the 2006 deadline; however, in its 2002 report to Congress, MARAD expressed concerns that it could not meet the deadline but did not provide a timetable for what it could achieve. In a 2004 progress report to Congress, MARAD proposed an alternative to meeting the statutory 2006 deadline. Instead of eliminating the entire obsolete ship inventory, MARAD suggested that it would dispose of the remaining ships in its inventory at a rate that would exceed the number of new vessels entering the fleet. MARAD would work toward an “end-state” with a target goal of eliminating the backlog of vessels that accumulated in the 1990s by September 30, 2006. MARAD’s proposal would include removing all “high” and “moderate” priority ships (about 65) at a rate of 20 to 24 ships per year and keeping only “low” priority ships at the fleet sites. However, for fiscal year 2005, MARAD is planning to dispose of only 15 ships with the $21.6 million that was appropriated. Identification of resources needed to achieve goals. Good management principles call for the identification of resources, including funding, that are needed to accomplish the expected level of performance. In its 2001 report to Congress, MARAD provided a general estimate of costs to dispose of its inventory of 155 ships by the 2006 deadline, and it stated that it planned to further refine cost estimates as additional data relating to merchant-type vessels were collected during fiscal years 2001 and 2002. 
However, these costs were not converted into a long-term funding plan linked to disposing of all obsolete ships by 2006. In addition, MARAD did not revise its cost estimates based on actual contracting experiences. For example, the 2001 report estimated that it would cost about $350 million to scrap 140 of the 155 vessels—an average of about $2.5 million per ship—using ship scrapping services contracts. However, MARAD’s budget requests for ship disposal for fiscal years 2002 through 2005 have totaled only $54.1 million, about one-sixth of the $350 million estimate. Congress has appropriated a total of $78.8 million over the same period. Table 3 shows MARAD’s annual budget requests, associated appropriations, and the difference between the two. MARAD officials said they did not incorporate the estimated costs to achieve the 2006 deadline into a funding plan because MARAD did not believe that Congress would fund the levels identified in its 2001 report and because they believed that environmentally sound, qualified foreign facilities that could scrap ships for less than the $350 million estimate existed. Instead, MARAD officials said that their budget requests reflected a consistent funding level that they believed Congress would support; recognized the limited capacity of the most expeditious method (scrapping at domestic facilities); allowed MARAD to eliminate high-priority ships prior to the 2006 deadline; and provided a sufficient disposal rate while MARAD investigated and pursued potentially more cost-effective overseas alternatives. Appropriate performance measures. Although DOT’s and MARAD’s performance plans have tracked the ship disposal program’s progress since 2001, the department-level performance measures that are being used are not linked to the program’s goal of disposing of all obsolete ships by September 30, 2006. 
For example, MARAD’s 2004 performance plan links the ship disposal program to DOT’s facility cleanup performance goal, which aims to ensure that DOT operations “leave no significant environmental damage behind.” To measure progress toward the facility cleanup goal, MARAD’s performance measure uses the number of vessels that have been physically removed from the fleet for subsequent disposal rather than the number of ships that have been completely disposed of as called for by statute. MARAD officials stated that the chosen performance measure is not more directly linked to the statutory requirement because MARAD recognized, in 2002, that the deadline was unachievable due to the program’s inconsistent funding and disposal impediments. A MARAD official stated that tracking the number of removed ships as a performance measure is appropriate because removing ships contributes to the facility cleanup goal. However, using removal rather than disposal as a performance measure may obscure MARAD’s actual progress toward achieving the disposal deadline cited in the statute. For example, MARAD counted toward meeting its fiscal year 2004 removal target the four ships that were towed to the United Kingdom in October 2003, but, as of November 2004, these ships were still fully intact and awaiting permit approvals before scrapping could begin. In another example, MARAD program officials reported to senior managers in March 2004 that the program had exceeded its fiscal year 2004 ship performance target for removals by 10 ships. However, 6 months later, only 3 of these ships had been completely disposed of. MARAD officials stated that they also use other measures not tracked in MARAD’s performance plan. For example, the number of contracts awarded and ships disposed of are recorded continuously and frequently communicated to program officials. 
While we found that MARAD does collect the data, this information is not reported against established targets in its performance reports, reports to Congress, or budget requests, making it difficult to assess program progress. Finally, MARAD’s performance targets are set too low to complete the disposal of the 155 obsolete ships in MARAD’s 2001 inventory by the 2006 deadline. As table 4 shows, MARAD’s projected performance targets for the ship disposal program were to remove a total of 29 ships—less than 20 percent of the inventory—from the fleet for subsequent disposal from fiscal year 2001 to 2005. MARAD officials acknowledged that while the targets were too low to meet the statutory deadline, they were more realistic and achievable given the program’s constraints, such as unpredictable funding. These officials stated that a more meaningful goal related to the statutory requirement would be to dispose of as many of the ships posing the greatest environmental risk as possible, given the available resources. MARAD stated that it has reported this goal to focus on high-priority ships in its budget requests and reports to Congress. While MARAD has stated this general goal, it has not developed specific performance measures with targets to track its progress toward achieving this goal. External factors and mitigation plans. Good management practices include identifying external factors that may be impediments to program success and actions needed to mitigate these impediments. MARAD has cited a number of external factors that pose challenges to the ship disposal program in briefings and in some of its strategic planning documents. These challenges include domestic disposal capacity limitations; environmental, legal, and regulatory restrictions on export; and similar restrictions on other disposal options. Specifically, MARAD has stated that the existing domestic ship recycling capacity is very limited and must serve both MARAD’s and the Navy’s needs. 
Additionally, foreign disposal remains a challenge because of the Toxic Substances Control Act prohibition on the export of PCBs, the extensive regulatory requirements to obtain an exemption from the act, and legal challenges. As a result, MARAD has concluded that export is not commercially viable for ships containing PCBs. MARAD has stated that other options, such as artificial reefing, donations, and deep sinking of vessels, are also limited, in part, by the cost of preparing ships to meet environmental requirements. However, MARAD’s plans do not clearly describe the linkage between these factors and the program goals they impede or specify how their impact can be reduced. For example, while MARAD’s strategic plan cites the lack of domestic disposal opportunities as an impediment to the program, the plan is not clear on how this impediment would keep MARAD from meeting its stated goal of completing the disposal of all of its high-priority ships or its target of removing 4 to 15 ships per year. Also, the plan does not specify the actions that MARAD might take to increase domestic capacity or foster existing capacity. In addition, MARAD has taken a number of actions to address impediments related to overseas scrapping, artificial reefing, donations, and deep-water sinking, which may lead to some progress in the future. However, these actions do not appear to have been taken in a systematic manner or linked to specific program goals. A systematic effort would allow MARAD to identify and assess the factors that pose risk to the program and to prioritize its actions, increasing the likelihood that those actions could successfully influence the factors that impede the program from meeting its goals. 
Until MARAD develops a process that focuses its actions on long-term program goals and appropriate performance measures, it will be difficult for MARAD to assess how external factors may impede program goals and the actions needed to reduce them. Formal decision-making framework. Successful organizations establish a decision-making framework that encourages the appropriate level of management review and approval, supported by the proper technical and risk analyses. A well-thought-out review and approval framework can mean program decisions are made more efficiently and are supported by better information. Some leading organizations have review processes in place that determine the level of analysis and review that will be conducted based on the size, complexity, and cost of the project. Projects that are crucial to the program’s strategy usually require more analysis, support, and review than projects that have less organizationwide impact. We found that MARAD’s decision-making framework lacks many of these elements. For example, MARAD does not have a formal decision-making process that specifies how program oversight is to be provided and what decisions need to be reviewed by senior leadership, and it has no formal program documents that describe how the various offices will interact. MARAD has some policies that generally describe roles and responsibilities for key offices involved with ship disposal. According to these policies, the MARAD Administrator is to provide general direction and supervision to the Associate Administrator for National Security, whose responsibilities include the executive direction of the Office of Ship Operations. Within this office is the Ship Disposal Program Office that, in coordination with other offices, develops and administers the ship disposal program. 
MARAD officials stated that while the process is not well documented, they believed that program participants understood their roles and responsibilities and that senior management is aware of issues affecting the program. MARAD does not follow a formal process that uses written guidance and does not have an approved program plan that addresses all elements of the program. In addition, MARAD program officials could not provide us with analytical results to support key program decisions. For example, MARAD did not have an analysis to support its position that domestic ship scrapping capacity is limited, which led it to consider foreign scrapping capacity to be of greater importance to the program. MARAD officials stated that the capacity and capabilities of the domestic industry were obvious through the data associated with the industry responses to disposal solicitations. These officials stated that ship disposal is not a growth industry in the United States and a formal capacity analysis would not have benefited the program because the results would have been largely theoretical. Instead, these officials told us that the domestic industry’s cost-effective capacity is evident through the proposals received during disposal solicitations. We disagree that there would be no benefit to conducting a capacity analysis. Such an analysis could provide the basis to determine throughput levels for planning purposes in developing funding plans for the program. While MARAD stated that ship disposal is not a growth industry, in 2004, MARAD awarded contracts to two firms that had not participated in past solicitations. Program evaluation and corrective action plans. Program evaluations are defined as objective and formal assessments of the results, impact, or effects of a program or policy. Such information can be used to assess the extent to which performance goals are met and identify appropriate corrective actions for achieving unmet goals. 
While MARAD had not performed an evaluation since it received its new authority in fiscal year 2001, officials stated that the agency initiated the program’s first evaluation in June 2004 and expects to complete it as early as January 2005. This evaluation could identify any corrective actions that may be needed for the program to improve its performance. Periodic progress reports to Congress. The 2001 Defense Authorization Act required MARAD, within 6 months of its enactment, to provide Congress with an initial report on the disposal program and to submit progress reports every 6 months thereafter. Since 2000, MARAD has submitted only three of a possible eight reports that were required to communicate the program’s status through December 2004. In April 2001, MARAD provided its initial report to Congress addressing aspects of its plan. In June 2002, it submitted a second report updating the program’s status since the 2001 report. A third report, which had been in draft form for over a year, was submitted to Congress in October 2004. Failure to provide these reports has left Congress without information that could be useful in its decision-making process. As a result of weaknesses in MARAD’s management approach, the program lacks a clear vision to guide decision making concerning ship disposal. Missing management elements, including the lack of an integrated strategy, the absence of resource identification, and inadequate performance measures, reflect this lack of vision and undermine MARAD’s efforts to sustain a long-term effort. 
In addition, MARAD has not been able to provide Congress and other stakeholders with a reasonable timetable and the associated annual funding requirements needed to meet the 2006 deadline, nor has it clearly articulated the areas in which congressional assistance may be needed to expedite the disposal of these deteriorating ships, which continue to pose potentially costly environmental threats to the waterways near the sites where they are stored. Although Congress directed MARAD in fiscal year 2001 to consider alternative methods in designing its ship scrapping program, the program has made only limited use of these methods—artificial reefing, deep-sea sinking, and donations—because of a number of environmental, financial, and legislative barriers. With the support of Congress, MARAD has recently taken a number of actions to address these impediments. Despite these positive steps, MARAD may still be years away from increasing the number of disposals using these alternative methods. MARAD has not developed an overall plan that could increase the use of alternative disposal methods. In fiscal year 2001, Congress directed MARAD to consider alternative methods in designing its ship scrapping program. However, MARAD has used an alternative method for ship disposal—artificial reefing—only once since that time, and work on this disposal action started prior to 2001 (see table 5). At the same time, MARAD has not yet disposed of any ships through deep-water sinking or ship donations. Instead, MARAD has focused on ship scrapping—either by awarding contracts or selling the ships to scrapping firms to dismantle them—to dispose of 17 of the 18 ships for which it has completed disposal actions. MARAD officials told us they are currently reviewing applications to dispose of 5 ships through artificial reefing and are holding 4 ships for donations, although few of these actions are likely to be completed by the statutory deadline. 
Recently, in response to congressional direction, MARAD has taken a number of steps to address several barriers that have limited its use of alternative disposal methods. These barriers have included environmental factors related to the removal of hazardous materials (remediation) from obsolete ships, the financial costs to remediate these ships, and the legislative barriers to donating ships to historical organizations. MARAD’s actions to facilitate the use of alternative methods are discussed below. MARAD officials have identified artificial reefing as having the greatest potential for use among the alternative methods and are currently evaluating reefing applications from four states that cover five ships. MARAD officials are optimistic that one of the five ships being evaluated for reefing may be sunk as early as 2005. Under the artificial reef program, MARAD transfers obsolete ships to states or other jurisdictions to be submerged as part of a state-managed program to build artificial reefs that benefit marine life, commercial and sport fishing, and recreational diving. From 1973 to 1992, MARAD transferred 46 ships to coastal states to be used as artificial reefs but, since 2001, it has disposed of only one ship through reefing, partly because of unresolved environmental issues. MARAD has identified several obstacles that have hindered its ability to use reefing, and the agency has recently taken some actions toward facilitating the use of this method. Four of the obstacles and MARAD’s actions include: Lack of national environmental standards to prepare ships for artificial reefing. According to MARAD officials, concerns about environmental contamination, especially PCBs, have stifled the artificial reefing program in recent years, and plans for preparing vessels for reefing have been complicated by the lack of consistent standards for environmental remediation. 
Congress, in 2002, directed MARAD and EPA to jointly develop best management practices (national guidelines) for preparing ships for the artificial reef program. In June 2004, EPA published draft national guidelines. The guidelines require, among other things, the removal of PCBs greater than 50 parts per million throughout a ship and of asbestos in areas of a vessel that could be disturbed by explosives used to sink the vessel. Once adopted, the guidelines should provide MARAD and the states participating in the artificial reef program with clear criteria for removing hazardous materials from ships. Cost of preparing/remediating vessels. According to MARAD officials, the states have been reluctant to take on the responsibility of towing, preparing, and sinking ships for artificial reefing because of the potentially high costs they could incur. To address this issue, MARAD requested, and in 2002 Congress provided, authority to offer financial assistance to states to tow, prepare, and sink reef candidates. Since 2002, MARAD has received applications from four states to sink a total of five ships, one possibly as early as 2005. Other applications have been delayed, partly because of the lack of funding to prepare ships for reefing. Need to streamline application process. To sink obsolete ships to form an artificial reef, MARAD and the states have to coordinate their efforts with a number of government agencies, including the U.S. Army Corps of Engineers, the U.S. Coast Guard, and EPA. According to MARAD, states typically require about 9 months to complete this coordination. MARAD and other agencies have been working to streamline the process. For example, the Navy and MARAD have established a joint reef application process for soliciting, receiving, and evaluating applications from interested states. The joint process will allow MARAD and the Navy to share resources to achieve common reefing goals. Limitation of program to the United States. 
Prior to fiscal year 2004, reefing candidates were restricted to state governments within the United States. MARAD requested and received congressional authorization in fiscal year 2004 to accept applications from U.S. territories and foreign governments for reefing. MARAD has received several inquiries since it received this new authority. For example, MARAD has had significant interest from the Cayman Islands for a reefing project. MARAD has taken actions that could result in it disposing of a few of its ships through the Navy’s Sinking Exercise (SINKEX) program, which involves sinking ships in deep water for weapons development testing and evaluation and for fleet training exercises. In September 2003, MARAD and the Navy developed a memorandum of agreement to include ships in MARAD’s inventory in the Navy’s SINKEX program. According to MARAD, as with the other disposal methods, deep-water sinking requires the removal of environmentally hazardous materials from ships before they are sunk. According to MARAD and discussions with the Navy, most of MARAD’s high-priority ships do not meet the Navy’s needs because of their advanced deterioration. As a result, MARAD considers deep-water sinking a low-volume option even though the estimated costs are lower than scrapping. MARAD has set a goal of disposing of one or two ships a year through this method. However, the one ship that was scheduled, and had been prepared by the Navy for deep-water sinking, had to be withdrawn when it was determined that the ship had historical significance and thus was not suitable for the Navy’s program. As a result of recent congressional action, MARAD’s use of a third alternative disposal method—the ship donation program—may increase in the future. Until 2003, MARAD could donate ships to qualified groups only through special congressional legislation that designated a specific ship, the recipient, and the conditions under which the donation would take place. 
According to MARAD officials, the agency has set aside four ships under this process, making them ineligible for disposal through other methods unless their condition deteriorates. This process has been lengthy, primarily because recipient groups needed time to raise money to acquire, restore, and operate a ship for its intended use. In some cases, ships that are on hold for a donation may be removed because the recipient group has not been able to make significant progress toward completing the donation requirements. As table 6 shows, two ships (Hoist and Sphinx) are currently in this status. At MARAD’s request, Congress, in 2003, gave the agency authorization to establish a donation program that would allow it to donate ships directly to groups interested in acquiring and preserving ships that have historic significance. This change would eliminate the time-consuming procedure of interested groups needing special legislation for each donation. MARAD established its donation program in July 2004 and indicated that accepted applications would be valid for 1 year, with two 6-month extensions possible based on an applicant’s progress in meeting the milestones presented in its business plan. However, the program does not have the authority to provide potential recipients with direct financial assistance—a barrier to progress in the past. MARAD officials have said that the high costs associated with the indefinite preservation of ships for historical purposes make it unlikely that the agency could provide significant assistance, even if the authority were provided. Despite the steps it has taken recently, MARAD faces some additional challenges to using alternative disposal methods. Since its first report to Congress in 2001, MARAD has not conducted a systematic assessment of its ship inventory to determine the most cost-effective and efficient method for disposing of individual ships. 
In addition, it has not developed policies that would enhance the use of alternative disposal methods. For example, while MARAD now has authority to share artificial reefing costs and can provide funds to remediate ships in its inventory, it does not have specific policies to guide it in determining how much funding it should set aside for cost sharing or for remediating ships to make them more readily transferable as part of the artificial reefing or donation program. MARAD officials told us that their approach is to make all ships available for all disposal methods, with few exceptions. According to these officials, this approach allows states and historical preservation groups to select ships according to their preferences without any restrictions. In the past, this approach has led to some cases in which recipient groups selected ships that were in the worst condition. For example, two of the four ships that have been placed on hold for donation have had poor hulls, and MARAD officials plan to remove these from hold status. Because of the lack of an overall approach, MARAD may be losing opportunities to dispose of more of its obsolete ships in the most cost-effective and expeditious manner. Since fiscal year 2002, MARAD has inappropriately used a procurement method—the program research and development announcement (PRDA)—to acquire most of its ship disposal services rather than other procurement methods that are appropriate for acquiring such services. According to MARAD, PRDAs are a variant of broad agency announcements, an authorized procurement process under the FAR and the Competition in Contracting Act of 1984 (CICA). PRDAs are designed to enable federal agencies to acquire basic and applied research and that part of development not related to the development of a specific system or hardware procurement. MARAD officials cited several reasons for using PRDAs, including seeking innovative solutions to ship disposal, attracting more industry proposals, and reducing costs. 
Our analysis of MARAD’s PRDAs and contracts awarded under PRDAs through February 2004 showed that MARAD was not using PRDAs to acquire research or development but to procure conventional ship scrapping services. In addition to being inconsistent with CICA and the FAR, MARAD’s use of PRDAs to acquire ship scrapping services has led to a lack of transparency and raised questions about the fairness of MARAD’s contract award process. According to MARAD, PRDAs are a variant of broad agency announcements and meet the requirements of the FAR for the use of these announcements. MARAD officials explained that the agency chose to use PRDAs for a number of reasons. MARAD officials said that they have used PRDAs to seek innovative, private-sector solutions for controlling ship disposal costs and, at the same time, to gain insights into domestic and international dismantling and recycling market costs. MARAD officials pointed out that market cost data were particularly important when the program did not receive appropriated funds in fiscal year 2002. According to MARAD, the use of PRDAs provides greater flexibility as it allows interested parties to propose methodologies that are broader than those received in response to other solicitations. In addition, MARAD officials said that they used PRDAs to attract a larger number of proposals from qualified firms than they had received under other methods. MARAD officials also told us that their use of PRDAs had contributed to the significant lowering of disposal prices through greater industry participation and increased competition. Since fiscal year 2002, MARAD has used PRDAs as its primary procurement method. According to MARAD, the agency has used PRDAs to solicit proposals for the disposal of over 130 different obsolete ships and has awarded contracts for 34 of those ships. 
In addition to PRDAs, MARAD has used other procurement methods—contracting by negotiation, under which an agency issues a request for proposals, and sealed bidding, under which an agency issues an invitation for bids—to award contracts to scrap 6 ships in 2001 and 4 ships in 2003. Under the FAR, broad agency announcements are used to acquire basic and applied research and that part of development not related to the development of a specific system or hardware procurement. Agencies can use this method to fulfill their requirements for scientific study and experimentation directed toward advancing the state of the art or increasing knowledge or understanding. According to the FAR, agencies should use this method only when meaningful proposals with varying technical approaches can be expected. Although the FAR considers broad agency announcements a competitive procurement method, proposals do not have to be evaluated against one another since they are not submitted against a common work statement. Regardless of MARAD’s stated purposes for using PRDAs, their use must be consistent with CICA and the FAR. Under the FAR, competitive selection of basic and applied research and that part of development not related to the development of a specific system or hardware procurement is a competitive procedure if award results from (i) a broad agency announcement that is general in nature, identifying areas of research interest, including criteria for selection of proposals, and soliciting the participation of all offerors capable of satisfying the Government’s needs; and (ii) a peer or scientific review. The FAR, in this respect, implements the requirements of CICA by allowing agencies to use the broad agency announcement as a means to obtain research or development. It follows, and we conclude, that an agency may use broad agency announcements, or any variant of that process, only to acquire research or development in order to comply with CICA and the FAR. 
Our analysis indicates that MARAD is not using broad agency announcements, or PRDAs, to acquire research or development; rather it is inappropriately using them to acquire ship scrapping services. An appropriate use of PRDAs would have allowed MARAD to solicit proposals and award contracts for research or development that sought to advance the state of the art or increase knowledge. In other words, MARAD could have sought innovation in the ship scrapping industry through research or development contracts. PRDAs we reviewed, however, did not seek proposals to perform research or development. Rather, along with innovative approaches for ship disposal, the PRDAs indicated that proposals should address environmental and worker safety considerations, production throughput/capacity, experience with ship disposal, and funding requirements. With respect to funding, the PRDAs did not ask firms to explain their costs to research or develop new methods or approaches for ship disposal; instead, the PRDAs specified that funding “must be proposed in sufficient detail to show all anticipated costs associated with the complete dismantlement of the vessel(s), including cost categories such as towing, remediation of hazardous materials, labor costs, etc., as appropriate.” Thus, rather than soliciting proposals to perform research or to develop new methods or technologies to scrap ships, MARAD’s PRDAs essentially contemplate the award of production contracts to firms with ship disposal experience. In addition, the results of MARAD’s evaluation approach appeared to give greater weight to the disposal of ships rather than obtaining innovation, research, or development. In our review of MARAD’s evaluations for more than 70 proposals submitted under PRDAs from November 2001 through March 2003, we found that MARAD had not accepted proposals that were identified in program evaluations as having innovative approaches or research or development. 
For example, one evaluation summary stated that the proposal was rejected because it would not result in the disposal of ships but only in the testing of a hazardous material remediation technology and that PRDAs were not intended to solicit proposals of untested technologies. Another evaluation summary stated that only one ship would be disposed of, with no cost advantage to MARAD and with no guarantee of discovering methods or efficiencies that could be applied to the ship dismantlement industry to MARAD’s benefit. On the other hand, proposals that MARAD provisionally accepted for further consideration were described in the evaluations as providing conventional ship dismantling or recycling services, and in many instances the evaluations acknowledged that the proposals did not contain innovations. We also found that MARAD did not award contracts that required innovation or research or development. Our analysis of six contracts awarded under PRDAs through February 2004 showed that none of the contracts specifically required innovative approaches in disposing of ships. Instead, the contracts provided for conventional ship scrapping services, with requirements, time lines, schedules, costs, and objectives clearly spelled out, unlike most research and development contracts. For example, a contract between MARAD and Post-SVC Remediation Partners, which was effective July 25, 2003, specifies that the contractor is to tow 13 vessels from their current location (at the James River Reserve Fleet and Portsmouth Naval Base) to the United Kingdom and completely “dispose of, dismantle and remediate. . . vessels by 31 December 2005.” MARAD officials stated that their dismantling contracts contained performance schedules and outcomes as part of the government’s responsibility to monitor contractor performance; however, these officials stated that MARAD did not mandate the methodology by which the contractor was to dispose of the vessels. 
MARAD officials stated that because their contracts are performance based, the contractors are responsible for determining how they will comply with the terms of the contract, thereby giving them flexibility during the dismantling operations. While we agree that the use of a research or development contract would not relieve the government of its responsibility for oversight, we do not believe that MARAD acquired the innovative service it indicated PRDAs were designed to obtain. In our view, the contracts were for ship scrapping. In addition, MARAD may not need to specifically acquire innovation to expeditiously scrap ships. Industry representatives told us that the process of scrapping a ship does not require innovative approaches. Industry officials said that ship scrapping is a fairly straightforward activity, and most companies are using similar techniques to dispose of ships. They pointed out that the basic technology for scrapping a ship involves removing environmentally hazardous materials and then dismantling the ship with a cutting torch. They said that while companies might use different processes, the core technology did not change. The representatives said that they had not developed any innovative approaches in response to MARAD’s PRDA process, and officials at one firm said that they had not responded to MARAD’s first PRDA solicitation because they did not believe their company was eligible to participate since it had no new technologies or innovations to offer. However, once they learned that a competitor was obtaining ship scrapping contracts through the PRDA process, they too submitted proposals under PRDAs and were subsequently awarded contracts for ship disposal. MARAD officials told us that the agency has seen several innovations from domestic and foreign contractors as a result of the PRDA process. 
Specifically, they pointed to the application of technological advances such as hydraulic, articulated shears and high-pressure water-cutting technology that have distinct advantages over the traditional use of cutting torches, as well as the use of the dry basin for dismantling vessels and improved business processes in contract performance. However, we did not see any evidence that the use of these technologies was required by the terms of the contracts. More importantly, the use of these technologies does not make the contracts research or development contracts. While MARAD officials claimed that more firms had responded to PRDA solicitations than to other solicitation methods, we found that MARAD did not consider several other factors that could have affected participation. MARAD officials said that between 2001 and 2004, the agency received proposals from 71 firms (57 from October 2001 to March 2003 and 14 from January 2004) through PRDA solicitations, compared with 13 firms (8 in 2001 and 5 in 2003) through invitations for bids and requests for proposals (see table 7). However, the higher responses may have been due to the number of ships offered and their condition. For example, while the PRDA solicitations included around 100 ships and were open to domestic and foreign firms, solicitations by other methods were limited to a few high-priority ships and were open only to domestic firms because of concerns about long tows to the scrapping sites. Other factors, such as the ship disposal program’s larger appropriations in 2003 and 2004 and a rise in the price of scrap metal (making scrapping more profitable for the industry), could also have affected participation. In addition, there is no evidence to suggest that the response to other procurement methods would have been any different, since most ships were only offered under PRDAs. 
Moreover, several firms we contacted said that PRDAs had not positively influenced their decision to make offers to scrap ships over other procurement methods. In addition to being inconsistent with CICA and the FAR, MARAD’s use of PRDAs has provided less transparency than other available solicitation methods. For example, under an invitation for bids, bids are publicly opened and typically are available for examination by competing vendors. Under a request for proposals, competing vendors are given written notice of the reason they are excluded from the competition and are given an opportunity for a debriefing explaining the agency’s award decision. By contrast, proposals submitted under PRDAs are often difficult to compare with other proposals because they do not generally cover a specified statement of work or a set number of ships. Consequently, firms are often unable to determine why their proposal was not selected over another proposal. Moreover, MARAD officials were not able to tell us what criteria they used to award the six contracts we reviewed. For example, when they conditionally accepted proposals from two different firms, they could not explain what criteria they used to ultimately award a contract to one firm rather than the other. The domestic ship scrapping contractors with whom we spoke agreed that there was no way to determine why a particular company got a contract. This lack of transparency can confuse and alienate ship scrapping contractors. In discussing their experience with MARAD’s PRDA process, several ship scrapping contractors told us that although they had received contracts, they perceived inconsistencies in MARAD’s use of PRDAs. For example, they said that often a long time passes between the acceptance of a proposal by MARAD and the award of a contract. Industry officials said that although MARAD had informed them that their firm’s proposal was acceptable, MARAD took no further action on their proposal for more than a year. 
During this time, MARAD conducted negotiations with other firms over disposal issues, including the number of ships, the specific ships, and the cost. Some firms questioned MARAD’s ability to assess the best proposal if it is negotiating only with selected firms. Delays in awarding contracts affect the ship scrapping contractors because of the volatile nature of the scrap metal market. At the beginning of the ship scrapping program in fiscal year 2001, MARAD had planned to use another procurement method—contracting by negotiation—in which an agency issues a request for proposals. MARAD expected to use requests for proposals to dispose of vessels that were in the worst condition and designated as high priority. It planned to use this method to award multiple contracts to various ship scrapping companies in different locations, using long-term, indefinite delivery, indefinite quantity (IDIQ) contracts. These contracts were to specify a minimum initial quantity of high-priority ships to be scrapped but could subsequently be used to award additional ships to the same firms within a certain time period. MARAD expected to award contracts to a minimum of three companies to scrap at least one ship, and these companies would have the opportunity to scrap additional ships. However, MARAD officials said that when the agency did not receive any fiscal year 2002 appropriations for the ship disposal program, it shelved these plans. Instead, it turned to PRDAs for almost all of its contracts. While MARAD stated that other acquisition methods, such as requests for proposals and invitations for bids, do not allow offerors to submit solutions outside of the defined government requirements as could be done with PRDAs, MARAD did not provide a convincing case as to why these solutions were necessary to dispose of the ships that were in the worst condition as quickly as possible. 
In fact, MARAD stated in its briefing documents, PRDA solicitations, and reports to Congress that it anticipated that it would use methods other than PRDAs to address the worst ships. While MARAD has also said that requests for proposals and invitations for bids do not accommodate numerous proposals of varied solutions based on each offeror’s business model, we found that they can provide some flexibility for both the government and the offeror. For example, in a 2003 invitation for bids for four ships, MARAD stated that it could award each ship separately or make multiple awards for any combination of ships. As noted earlier, MARAD could also use a request for proposals to select from a pool of qualified firms to propose on an indefinite quantity of ships as funding became available. Moreover, other federal agencies have used IDIQ contracts as a flexible procurement tool when funding is uncertain. For example, Navy officials told us that they used this method to gain flexibility in awarding contracts for the Navy’s ship disposal program when unanticipated end-of-year money became available. Since fiscal year 1999, the Navy has used requests for proposals to award IDIQ contracts to scrap 36 ships. While we could not isolate the specific impact of foreign competition and other factors on reducing the cost of ship disposal, MARAD attributes the overall decrease in ship disposal costs almost exclusively to overseas competition. However, other factors, such as larger annual program funding allowing for more ships per contract and increases in the scrap value of steel, may have also played a role. As table 8 indicates, the price of contracts that MARAD has awarded since 2001 has generally decreased. These decreases included an instance in which solicitations were restricted to domestic firms because of the deteriorated condition of some ships. In 2001, MARAD paid contractors an average of about $250 per ton to scrap ships. 
This average price fell to about $109 per ton in 2004—a decrease of about 56 percent. In its 2004 report to Congress, MARAD attributed the drop in contract prices to increased competition due to the inclusion of foreign firms, since it received more proposals when international firms were included. However, while the overall price dropped from 2001 for two solicitations that included both domestic and international companies, it also decreased for one of the domestic solicitations that excluded foreign competition. According to industry officials, however, other factors, such as the condition of the ship, the amount of remediation that is required, and the potential recovery from recyclable material, can affect bid prices; thus, average prices based on tonnage alone may not be very meaningful if the amount of hazardous materials present on the ships varies. An increase in appropriations for the ship disposal program since 2003 may also have contributed to lower contract prices by allowing larger contracts that can benefit from a greater economy of scale. As table 3 shows, the program received $10 million in fund transfers from the Navy in fiscal year 2001 and no appropriated funding in fiscal year 2002. However, since fiscal year 2003, MARAD has received an annual average of about $23 million. This level of funding has likely attracted new interest among firms. According to MARAD officials, they have received proposals from companies that had not participated previously and have awarded contracts to some firms for the first time in 2004. Several industry representatives told us that MARAD’s higher funding levels allowed them to offer proposals that represented greater economies of scale, thus lowering costs. For example, one firm we visited was awarded a contract in 2003 to dispose of five ships, and three other firms received contracts to dispose of three ships each in 2004. By contrast, only one firm received a contract to dispose of two ships in 2001. 
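The “about 56 percent” figure follows directly from the per-ton averages cited above. A minimal sketch of the arithmetic, using only the $250 and $109 figures from the report:

```python
# Average price MARAD paid contractors to scrap ships, per the report's figures.
price_2001 = 250.0  # dollars per ton in 2001
price_2004 = 109.0  # dollars per ton in 2004

# Percent decrease = (old - new) / old * 100
percent_decrease = (price_2001 - price_2004) / price_2001 * 100
print(f"Decrease: about {percent_decrease:.0f} percent")  # Decrease: about 56 percent
```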
A rise in scrap metal prices since 2003 has also contributed to lower bid prices. Representatives from firms we contacted said that higher scrap metal prices contributed to a large degree to their ability to offer lower prices for scrapping ships in recent proposals because they could expect to recover more of their costs from recycling the metal. Figure 5 shows the increase in average yearly international scrap steel prices since about 2002. While several firms’ representatives said that the participation of foreign companies did not have an effect on their offers, in at least one case domestic prices seem to have been influenced by foreign competition. According to MARAD officials, one domestic firm reduced its previous proposal by about 50 percent when it learned that MARAD was in the latter stages of contract negotiations to export several ships to a foreign firm. The domestic company made its new offer about 6 months after its original offer, and it included many of the ships that were under negotiation with the foreign firm. MARAD officials told us that subsequent offers from this and other domestic firms have been lower since MARAD awarded the contract for foreign export. Although MARAD’s ship disposal program has made some strides in reducing its inventory of obsolete and deteriorating merchant ships in the National Defense Reserve Fleet, it has managed to dispose of only 12 percent of its original 2001 inventory, and it is unlikely to meet the already extended deadline of September 30, 2006, to get rid of the entire inventory. The strides that MARAD has made have been facilitated by congressional appropriations of almost $80 million—almost $25 million higher than requested—to procure ship scrapping services during fiscal years 2001 through 2004 and by congressional support that helped streamline the ship donation program and encouraged the development of environmental standards for the artificial reefing program. 
However, these steps have not been enough to better ensure long-term program success. MARAD’s program currently does not have an overall, comprehensive management approach that focuses specifically on the ship disposal program and on meeting the statutory deadline of 2006. In the absence of a comprehensive management approach, MARAD’s ship disposal program lacks the vision needed to sustain a long-term effort. In addition, the program has not been able—and will likely continue to be unable—to obtain, on a consistent and predictable basis, the funding resources that it needs to efficiently and expeditiously reduce its obsolete ship inventory. Moreover, MARAD has not undertaken an overall assessment of its obsolete ship inventory, which is needed to determine which disposal method (e.g., domestic or foreign scrapping, artificial reefing, deep-water sinking, or donation to organizations) is the most appropriate for each vessel. It has also failed to set reasonable milestones for completing disposal and has not established relevant performance measures to periodically measure progress toward meeting the deadline. Similarly, MARAD has not established a formal decision-making framework that would clearly delineate roles and responsibilities and formalize program guidance and procedures. Further, MARAD has not established a process to systematically identify and assess the risk that external factors pose to the program, nor has it laid out plans that would prioritize its actions to mitigate these risks. In addition, MARAD has not submitted to Congress on a timely basis the semiannual progress reports that the 2001 statute requires. Finally, MARAD has predominantly used a procurement method that is not appropriate for acquiring ship scrapping services and that has led to concerns about the lack of transparency in the way that ship scrapping contracts have been awarded. 
As a result of these many weaknesses, MARAD has not been able to assure Congress that it can dispose of the obsolete ships in a timely and cost-effective manner. Without an improved management approach, MARAD’s ship disposal program will be limited in its ability to dispose of—in a timely manner—the more than 100 obsolete ships currently in its inventory, as well as the additional ships that the program expects to receive each year. As a result of its slow progress, MARAD will continue to have a backlog of obsolete and deteriorating ships that pose a threat to the coastal waterways where they are anchored because of the toxic materials that they contain. If this hazardous material should spill out, as it has already in a number of cases, the ships could cause a costly environmental disaster in some of the nation’s sensitive waterways. We recommend that the Secretary of Transportation direct the MARAD Administrator to take the following three actions. Develop a comprehensive approach to manage MARAD’s ship disposal program that would identify a strategy and an implementation plan to dispose of all existing obsolete ships and future transfers in a timely manner, maximizing the use of all available disposal methods; determine the needed resources, the associated funding plan, and specific milestones for this disposal; establish a framework for decision making that would delineate roles and responsibilities and establish guidance and procedures; identify external factors that could impede program success and develop plans to mitigate them; and annually evaluate results and implement corrective actions. Regularly communicate MARAD’s plan, required resources, and any impediments that require congressional assistance in the mandated reports to Congress. We also recommend that MARAD change its contracting approach for acquiring ship scrapping services from the use of PRDAs to an appropriate method. 
In commenting on a draft of this report, DOT did not directly state whether it agreed with our recommendations but noted that MARAD is taking some actions that may address them. DOT’s comments stated that MARAD will provide an updated comprehensive, integrated approach to program management in subsequent reports to Congress and that MARAD has terminated its use of PRDAs. DOT’s comments are included in appendix II of this report. DOT stated that despite the complex challenges that MARAD’s ship disposal program faces, through its efforts and actions, much has been achieved, including the disposal of 26 vessels. DOT also commented that MARAD built a comprehensive disposal plan when the program was authorized and provided that plan to Congress in 2001, and it acknowledged that MARAD agrees that the time is right to ensure that its planning efforts are up to date and appropriately comprehensive. However, MARAD does not believe that effective planning will change the fundamental external legal, environmental, and regulatory challenges that limit the number of ships that can be processed and the speed at which the program can proceed. We recognize that the ship disposal program faces a number of complex challenges and that MARAD has taken a number of actions to address them. However, we do not believe that these actions have been taken in an integrated manner. That is why we continue to believe that a comprehensive management approach could better focus program efforts and lead to better program results. Specifically, we believe that the program could benefit from clearly stated goals, planned approaches consistent with these goals and with timetables, identification of the resources that could support these approaches, appropriate performance measures, and a process to systematically identify and assess the program’s external factors and determine the related mitigation actions that could improve MARAD’s chances of meeting its program goals. 
While MARAD developed a plan that contained some of these elements in 2001, the plan was not followed or revised. We believe that MARAD’s acknowledgment that it needs to ensure that its planning efforts are up to date and comprehensive is a good first step. However, MARAD’s comments do not provide enough detail for us to determine if these actions are sufficient. DOT commented that MARAD has revised its contracting approach, which resulted in the termination of its use of PRDAs. It stated that MARAD had consistently provided fair treatment to contractors and that contract awards were made on the basis of best value to the government. We were not provided the details of MARAD’s revised contracting approach and thus cannot comment on it. DOT also provided technical comments, which were incorporated as appropriate. We are sending copies of this report to Senators George Allen and John Warner; Representative Jo Ann Davis; the Secretary of Transportation; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions regarding this report, please contact me at (202) 512-8365 or Solisw@gao.gov or Dave Schmitt, Assistant Director, at (757) 552-8124 or Schmittd@gao.gov. Other major contributors to this report were Rodell Anderson, Harry Jobes, Vijay Barnabas, Kenneth Patton, and Nancy Benco. To assess whether the Maritime Administration (MARAD) is likely to meet the statutory deadline of September 2006 and, if not, what factors may prevent it from doing so, we reviewed MARAD’s 2001 report to Congress, in which it presented its plan to meet the deadline, and its subsequent 2002 and 2004 status reports to Congress. 
We also reviewed the Department of Transportation’s (DOT) and MARAD’s strategic and performance plans and applicable laws and regulations pertaining to the ship disposal program. To assess the adequacy of these plans and reports for managing the ship disposal program, we compared the elements used in MARAD’s ship disposal management approach with those developed from sound management principles as embodied in the Government Performance and Results Act (GPRA) of 1993 and further refined in GPRA user guides, our guide for leading practices in capital decision-making, and our prior reports. To measure the progress that MARAD was making toward the 2006 deadline, we determined the number of awarded contracts, ships removed from the storage sites, and ships disposed of by reviewing program documents. We also obtained data to reflect impediments that were affecting the program. During our review, MARAD officials in Washington provided us with briefings on the program’s funding, fair market value of obsolete ships, domestic and foreign ship scrapping capacity, and ship scrapping performance bond cost determination. We reviewed and analyzed National Defense Reserve Fleet inventory reports for fiscal years 2000 to 2004 to determine the number of ships entering and leaving the inventory. We also reviewed reports that categorized the condition of ships in the inventory and assessed whether ships in the worst condition were given the highest priority for disposal. We also reviewed and analyzed funding data for the MARAD ship disposal program for fiscal years 2001 through 2004 to identify funding trends and examined a number of publications that had focused on ship disposal issues. In addition to talking with MARAD officials in Washington, D.C., we met with MARAD representatives and conducted on-site visits at the James River Reserve Fleet near Fort Eustis, Virginia, and the Suisun Bay Reserve Fleet, Benicia, California. 
During these visits, we talked with officials about their methodology for determining ship condition, discussed past instances of oil spills, and observed the condition of the ships by touring selected obsolete ships. We selected these two sites because they had the largest number of ships and included the ones considered to be of highest priority for disposal. To gain the perspective of domestic scrapping companies, we conducted on-site visits at four domestic firms that had submitted proposals for scrapping ships and had been awarded scrapping contracts. These companies were Bay Bridge Enterprises LLC, Chesapeake, Virginia; and International Shipbreaking Limited, LLC, Marine Metals, Inc., and ESCO Marine, Inc., all of Brownsville, Texas. At each location, we met with company managers to obtain their views on MARAD’s ship disposal program and also toured their facilities. We also interviewed officials at a fifth firm—All Star Metals, Brownsville—that we had identified as having the potential capacity to scrap MARAD ships. To determine to what extent MARAD has used alternative disposal approaches, other than ship scrapping, to dispose of its inventory of obsolete ships, we interviewed officials in MARAD’s Office of Ship Operations and Ship Disposal Program Office and obtained and reviewed MARAD’s 2001 plan for ship disposal and MARAD’s 2002 and 2004 reports to Congress on plan implementation. We reviewed a list of alternative approaches considered by MARAD and documented the priority that MARAD placed on each alternative and the trade-offs associated with each alternative in terms of costs, time, and barriers to implementation. In addition, we interviewed officials in the U.S. Naval Sea Systems Command program office responsible for managing the Navy’s ship disposal program in Washington, D.C., to discuss their program. 
To assess the appropriateness of MARAD’s procurement methods for contracting for the disposal of surplus ships, we interviewed responsible MARAD headquarters officials in their ship disposal program office and their acquisition office, listened to briefings, and reviewed documents related to the acquisition process. We also compared MARAD’s acquisition methods for ship disposal services with those used by the Navy. We submitted a series of written questions to MARAD to obtain the agency’s legal position on the appropriateness of using Program Research and Development Announcements (PRDA) as an acquisition method for ship disposal. We reviewed MARAD’s responses and reviewed the criteria in the Competition in Contracting Act of 1984 and the Federal Acquisition Regulation. We also examined MARAD’s contract files containing recent ship disposal industry proposals received in response to PRDAs and reviewed the criteria and process that MARAD used to evaluate industry proposals. We also reviewed the first six contracts that MARAD awarded under PRDA to determine if they were consistent with the Federal Acquisition Regulation. These contracts were awarded from August 2002 to February 2004. To assess the impact of foreign competition on reducing the cost of ship disposal, we compared bid prices for solicitations that were restricted to domestic firms only versus those that included domestic and foreign firms. We also interviewed officials at MARAD and industry representatives at the five domestic ship scrapping firms mentioned previously to obtain their perspectives on factors contributing to lower ship scrapping costs. In addition, we visited the Institute of Scrap Recycling Industries, Washington, D.C., and obtained historical data on world prices for recycled steel. We determined that the data used in this report were sufficiently reliable for the purposes of this report. 
We performed our audit from November 2003 through November 2004 in accordance with generally accepted government auditing standards. The following are GAO’s comments on the Department of Transportation’s letter dated February 15, 2005. 1. We do not agree that MARAD has made substantial progress since 2001, considering less than 12 percent of the ships that have been in its inventory had been disposed of through September 2004—4 years later. While MARAD has made more progress in disposing of ships in the past 2 years, much of this progress can be attributed to Congress providing about $25 million more than was sought in the fiscal years 2003 and 2004 budget requests and the rising scrap metal market that has kept contract prices low. We also do not agree that the number of ships awarded is a good measurement of program progress since contract award has not always led to ships being removed and disposed of. 2. MARAD has no way of determining how successful exporting ships could be in reducing the number of its ships because it has not systematically identified and assessed potential impediments to export and determined actions to mitigate them. 3. As our report points out, while DOT states that domestic capacity is limited, our report notes that MARAD has not done an analysis to determine the potential domestic capacity nor thought such an analysis was necessary. The recent increase in the number of domestic firms being awarded contracts would indicate that MARAD underestimated domestic capacity in the past. 4. We believe that MARAD lacks an integrated strategy for disposing of all of its ships using its available disposal methods. Lacking such a strategy contributed to MARAD’s decision to award an export contract that included 10 of its worst-conditioned ships before the advisability and feasibility of exporting was demonstrated. 
As a result, almost half of the $31 million that Congress appropriated in fiscal year 2003 has been tied up, pending the resolution of issues related to the exporting of these ships, and the disposal of these 10 deteriorated ships has been delayed. 5. We agree that plans must be flexible. However, successful programs must have a road map to guide their efforts. While DOT states that its strategy emphasizes disposing of ships in the worst condition first, neither its reports to Congress nor its strategic planning documents have identified the amount of funding needed to accomplish this strategy based on its available disposal methods nor given a timetable for accomplishing these disposal actions.

The Maritime Administration (MARAD) has more than 100 obsolete and deteriorating ships awaiting disposal that pose potentially costly environmental threats to the waterways near where they are stored. Congress, in 2000, mandated that MARAD dispose of them by September 30, 2006. 
While MARAD has various disposal options available, each option is complicated by legal, financial, and regulatory factors. In this report, GAO assesses (1) whether MARAD will meet the September 2006 disposal deadline for these ships and, if not, why not; (2) the extent to which MARAD has used disposal methods other than scrapping, and the barriers to using those methods; (3) the appropriateness of MARAD's methods for procuring ship disposal services; and (4) the impact of foreign competition and other factors on reducing disposal costs.

MARAD is unlikely to meet its statutory deadline of September 30, 2006. As of September 2004, MARAD had disposed of 18 ships from its inventory, with over 100 ships left to dispose of by the deadline. MARAD's current approach is not sufficient for disposing of these remaining ships within the next 2 years. MARAD's slow progress is due primarily to program leaders not developing a comprehensive management approach that could address the myriad environmental, legal, and regulatory challenges that the program faces. MARAD's approach lacks an integrated strategy with goals, milestones, performance measures, and a mitigation plan for overcoming anticipated impediments. In the absence of this comprehensive approach, MARAD's ship disposal program lacks the vision needed to sustain a long-term effort. Consequently, MARAD has not been able to assure Congress that it can dispose of these ships in a timely manner to reduce the threat of a costly environmental event, nor has it clearly articulated what additional congressional assistance, such as funding, may be needed.

While MARAD has considered alternative disposal methods to scrapping, it has made limited use of these methods because of a number of environmental, financial, and legislative barriers. Since fiscal year 2001, MARAD has disposed of 17 ships through scrapping, but only 1 through artificial reefing.
MARAD has not disposed of ships using deep-water sinking and donations to historic organizations. MARAD has taken positive steps to reduce barriers limiting its use of these methods but still may be years away from increasing the number of disposals using these alternative methods because it has not developed an overall plan for expanding their use. Consequently, MARAD may be losing opportunities that could expedite the disposal of the obsolete ships in its inventory.

Since fiscal year 2002, MARAD has relied almost entirely on an inappropriate procurement method--Program Research and Development Announcements (PRDA)--to acquire ship scrapping services. The Federal Acquisition Regulation and the Competition in Contracting Act of 1984 generally require that MARAD use other methods for acquiring these types of services. PRDAs may only be used to contract for research or development. According to MARAD, PRDAs provide greater flexibility and allow firms to propose innovative solutions to ship disposal. GAO found, however, that MARAD was not contracting for research or development but instead was acquiring ship scrapping services. MARAD's use of PRDAs has also resulted in a lack of transparency in the contract award process and has raised concerns among firms as to the fairness of MARAD's processes.

While GAO was unable to isolate the specific impact of foreign competition and other factors on reducing ship disposal costs, MARAD attributes the decrease in ship disposal prices almost exclusively to foreign competition. However, other factors, such as larger annual program funding and increases in the scrap value of steel, may have also played a role.
The Veterans’ Health Care Eligibility Reform Act of 1996 required that VA establish an enrollment system to help manage its health care delivery system. VHA’s HEC is the business owner of the Enrollment System—the official system of record for verifying veterans’ eligibility for health care benefits and maintaining enrollment information. To enroll for health care benefits, veterans submit an application to either HEC or a VAMC. Application information includes demographic, military service, financial, and insurance information. A veteran may apply online, by mail, by fax, by phone, or in person. Once a veteran submits an application, there are three key steps for processing the application: intake of application, verification of eligibility, and enrollment determination. There may be an additional processing step—resolving a pending application—if enrollment staff need additional information to determine eligibility. According to VHA policy, staff are required to process applications within 5 business days of receipt. Figure 1 provides an overview of the enrollment process, as of June 2017.

Intake of application. If a veteran applies in person, or faxes or mails an application to a VAMC, local enrollment staff enter the application information into the VAMC’s Veterans Health Information Systems and Technology Architecture (VistA) system. If a veteran applies in person, by phone, or faxes or mails an application to HEC, HEC enrollment staff enter application information into the Enrollment System. A veteran may also apply online, and the application information is directly transmitted into the Enrollment System. Historically, VAMCs have received more than 90 percent of enrollment applications for processing.

Verification of eligibility. After the intake of an application, VAMC and HEC staff attempt to verify whether veterans meet eligibility requirements based on their military service and, if applicable, financial information.
VAMC staff attempt to verify military service by reviewing supporting documentation provided by veterans (e.g., military discharge or separation papers). If veterans do not provide any documentation of military service, staff will try to verify this information through military service databases. For application information that HEC staff have entered into the Enrollment System or for those applications that have been submitted online with information transmitted directly to the Enrollment System, the system assesses eligibility based on the information entered. Both VistA and the Enrollment System automatically assess whether a veteran’s self-reported income meets VHA income thresholds for eligibility, as applicable. VAMC staff are required to ensure that application information is accurately entered into VistA. Application information that is entered into local VistA systems is transmitted nightly into the Enrollment System.

Enrollment determination. The Enrollment System makes all enrollment determinations, including those for applications processed at VAMCs. Specifically, the Enrollment System determines whether veterans are enrolled, rejected, or ineligible for health care benefits. Veterans who are enrolled or rejected are placed in a category based on type of eligibility—called a priority group—established to manage the provision of care. For example, priority group 1 consists of veterans who are rated 50 percent or greater based on service-connected disabilities. Priority group 5 consists of veterans who are eligible because their incomes are at or below VHA’s eligibility thresholds, and priority group 8g consists of non-service-connected veterans whose incomes are above the thresholds and who thus are rejected for health care benefits. Once HEC has made an enrollment determination, it sends each veteran a letter and a personalized handbook with the decision and a description of benefits, if applicable.

Resolving pending applications.
If VAMC or HEC enrollment staff cannot verify veterans’ eligibility for making an enrollment determination, the application is categorized as pending. To resolve a pending application, VAMC or HEC staff are to contact the veteran to obtain the missing information (e.g., military service or financial information). VAMC and HEC staff share responsibility for resolving pending applications. For instance, HEC staff may send a pending application for VAMC staff to help process. VAMC staff may also contact HEC staff for assistance in collecting missing information because, for example, according to officials, HEC has greater access to military service databases.

VHA enrollment staff, both from HEC and VAMCs, frequently did not process enrollment applications in accordance with VHA’s timeliness standards and made incorrect enrollment determinations. VHA, through HEC, is assessing efforts to improve its enrollment processes.

Prior studies show that VHA enrollment staff, whether from HEC or VAMCs, frequently did not process enrollment applications within 5 business days in accordance with VHA timeliness standards. Specifically, a June 2016 VHA audit found that HEC staff did not process 143 of 253 applications reviewed (57 percent) within VHA’s timeliness standard. The audit found that this occurred, in part, because HEC enrollment staff were not prioritizing workload to focus on processing applications that were approaching the timeliness standard. In response to an audit recommendation, HEC officials said they have begun prioritizing workload to help meet the timeliness standard.

A VHA Chief Business Office analysis showed that VAMCs also did not consistently process online applications within 5 business days. According to the analysis, only 35 percent of online applications were processed by VAMCs within 5 business days in fiscal year 2012 and 65 percent through the first 7 months of fiscal year 2016.
VAMC officials we contacted said that because there is no mechanism for veterans to provide supporting military service records, such as discharge papers, with their online applications, VAMC staff need to obtain the information by querying available military service databases or following up with the veterans, which may cause delays in processing. Several VAMC officials said that HEC should implement automated controls that do not allow veterans to submit online applications without attaching supporting documents that include information needed for making enrollment determinations.

Additionally, the overall time needed to process enrollment applications may increase when staff need to place applications in a pending status. In its September 2015 report, VA’s Office of Inspector General found that, as of September 2014, the Enrollment System contained nearly 870,000 pending applications, many of which had been pending for more than 5 years. According to the report, 72 percent of those applications were pending because additional financial information was needed from veterans. In response to the report, in 2016, HEC and VAMCs undertook outreach efforts, such as attempting to contact all veterans with pending applications via phone and letters. According to HEC officials, as of May 2017, they were able to resolve about 30 percent of the applications (about 255,000 applications). This included enrolling approximately 88,000 veterans, as well as removing from pending status the applications of veterans who, according to HEC officials, were no longer living. HEC officials and VAMC staff in our review said they experienced problems resolving pending applications because the applications were generally several years old and lacked accurate contact information. HEC officials stated they would continue to work on resolving them, but if staff cannot obtain the needed information within 365 days, an application’s status will change from pending to closed.
Based on our discussions with enrollment staff, we found that none of the VAMCs in our review had a specific policy or procedure for how to resolve pending applications. Officials indicated that they had not received any national procedure or guidance from VHA, nor had they developed local procedures. According to federal internal control standards, management should design control activities, such as policies and procedures, to achieve objectives and respond to risk. In the absence of a standard procedure for VAMCs to use to resolve pending applications, veterans are at risk of experiencing unnecessary delays while waiting for their applications to be processed.

For the six VAMCs in our review, we found that, as of March and April 2017, VAMC enrollment staff had not resolved 31 (55 percent) of the 56 pending applications included in our random, nongeneralizable sample of pending applications. (See table 1.) Specifically, we found that for 22 (71 percent) of the 31 unresolved applications there was no evidence that VAMC enrollment staff had attempted to contact the veterans to obtain missing military service or financial information, and that 18 of these 22 applications had been in a pending status for 3 months or longer at the time of our review. VAMC officials told us they were not aware that some of the unresolved applications were in a pending status prior to our review. For the remaining 9 applications, we found that VAMC enrollment staff had attempted to contact the veterans but were unable to resolve the applications, for example, due to the lack of a response from the veteran or the lack of valid contact information. These 9 applications had been in a pending status between 2 and 5 months at the time of our review. For the 25 applications that enrollment staff resolved, we found that staff enrolled the veterans for 19, and for the other 6, staff determined the veterans were ineligible or rejected for enrollment.
We also found that the time it took staff to make an enrollment determination varied widely—ranging from 3 to 119 days. (See table 2.) Officials from five of the six VAMCs told us that, based on our review, they recognized the need to improve their processes. For example, officials from two VAMCs indicated that they were going to develop a standard operating procedure for identifying pending applications, following up with veterans to obtain missing information, and documenting actions such as the dates that enrollment staff called veterans or mailed letters to resolve outstanding issues.

VHA’s Compliance and Internal Control Program Office conducted two audits (in April and August 2016), which found that VHA enrollment staff, including those from HEC and VAMCs, frequently made incorrect enrollment determinations. In some cases, veterans were rejected for health care benefits when those veterans should have been enrolled, and in other cases veterans were enrolled when they were ineligible for benefits, according to these audits. Specifically, VHA’s audits found the following:

HEC had a 12 percent error rate. The April 2016 audit found that HEC enrollment staff made incorrect determinations for 31 of 253 randomly selected applications. The audit found that these errors included a combination of incorrect enrollment and rejection determinations, and the most frequent errors—in 15 of the 31 cases—related to enrollment staff enrolling or rejecting veterans for health care benefits without sufficient documentation, such as proof of military service. Audit findings indicated these applications should have been assigned a pending status.

VAMCs had a 27 percent error rate. The August 2016 audit found that VAMC enrollment staff made incorrect determinations for 101 of 381 randomly selected applications. Similar to the audit of HEC, the audit of VAMCs found that errors included a combination of incorrect enrollment and rejection determinations.
For example, the audit of VAMCs identified 15 applications for which enrollment staff incorrectly rejected the veterans for health care benefits. According to the audit, VAMC staff should have either enrolled the veterans because they had provided adequate documentation needed to verify their eligibility, or categorized the applications as pending until adequate documentation, such as proof of military service needed to verify eligibility, was obtained.

In addition to the two audits, VHA’s Compliance and Internal Control Program Office conducted an informal review that found that, for a sample of 357 phone applications, enrollment staff made incorrect enrollment determinations for 87 (24 percent). The most frequent errors again related to staff enrolling or rejecting veterans for health care benefits without sufficient documentation, such as proof of military service. In these instances, the applications should have been assigned a pending status, according to the review. Although documentation on the audits and the informal review did not provide information on specific causes of the errors, officials responsible for conducting the audits indicated that the incorrect enrollment determinations were the result of human error.

Through its HEC, VHA is assessing efforts to improve the timeliness of enrollment application processing and the accuracy of enrollment determinations. Specifically, HEC officials established the National Enrollment Improvement, an initiative that includes two efforts to centralize or standardize key aspects of enrollment processes. One effort involves VAMCs’ processing of applications using the Enrollment System rather than VistA. To examine potential options, HEC implemented two pilots in 2016:

Pilot 1, implemented May through August 2016, required enrollment staff at three VAMCs to process all applications by entering information directly into the Enrollment System.
VAMC enrollment staff participating in this pilot told us they encountered challenges, including not being able to log into the Enrollment System, and frequently had to revert to processing many applications in VistA. In total, the VAMCs processed 239 applications using the Enrollment System, which did not provide HEC sufficient data for determining the pilot’s effectiveness, according to the officials responsible for implementing the pilot.

Pilot 2, a case study implemented over 2 weeks in December 2016, required enrollment staff at six VAMCs to enter application information for veterans applying in person into an online application for direct transmittal to the Enrollment System. According to officials, a goal of the pilot was to test the automatic verification of military service information against databases to reduce human intervention in verifying eligibility, thereby improving the timeliness and accuracy of enrollment determinations. Similar to the first pilot, technology issues precluded effective processing. For example, automated verification was not consistently successful, and most applications processed (65 of 86) required manual intervention to reach an enrollment determination. In addition, officials said the online application did not always capture information needed to make an enrollment determination.

HEC officials told us they did not obtain sufficient information from the pilots to make a decision on which option would replace VAMCs’ use of VistA for processing applications. As such, HEC officials told us they are planning to conduct a third pilot to further test the option of having VAMCs enter application information directly into the Enrollment System. Officials said they do not have a definitive implementation plan or timeline for conducting this pilot.

A second effort under the National Enrollment Improvement involves standardizing the process of resolving pending applications.
HEC developed procedures for HEC enrollment staff to use when resolving pending applications. Specifically, when a veteran’s application is placed in a pending status, staff are to send the veteran a letter that includes information about why the application is pending; the information HEC needs to make an enrollment determination; and instructions for providing the information to HEC. Additionally, staff are instructed to make phone calls at pre-determined time intervals—8 days, 30 days, 90 days, 180 days, and 310 days after an application becomes pending—in an attempt to contact the veteran to obtain missing information. HEC enrollment staff are also required to document each phone call attempt and its results. If staff are able to obtain the information within 1 year of informing a veteran about an application’s pending status, that information is documented, and staff make an enrollment determination. If, after 365 days, staff cannot obtain the information needed to make an enrollment determination, the application’s status is changed from pending to closed in the Enrollment System.

Although HEC has developed standardized procedures for the resolution of pending applications by HEC staff, it has not communicated these procedures to VAMC enrollment staff. Officials from the six VAMCs in our review indicated they were not aware of HEC’s plans to standardize this process, nor had they been asked to provide input or feedback on some of the challenges they have experienced. Furthermore, VAMC officials told us that they had not received any guidance regarding the new procedures and were confused about whether they would continue to have a role in this process. HEC’s new procedures do not specify whether VAMCs have a continued role in resolving pending applications or whether the procedures apply to VAMCs, although HEC officials told us that VAMCs would continue to be involved.
According to federal internal control standards for information and communication, management should internally communicate the necessary information to achieve the agency’s objectives. Communicating quality information down and across reporting lines enables personnel to perform key roles in achieving objectives, addressing risks, and supporting the internal control system. In the absence of HEC coordination and communication with VAMCs about its effort to standardize the process for resolving pending applications, including the role VAMCs will have, there may be duplication of efforts between HEC and VAMC enrollment staff, which could lead to inefficiencies.

VHA lacks a standardized process for system-wide oversight of enrollment processes to ensure applications are processed in a timely manner and enrollment determinations are accurate. Further, VHA, through HEC, lacks reliable data to oversee timely processing of applications across VAMCs. HEC has recently implemented an effort to review the accuracy of some enrollment determinations, specifically those for which veterans were found to be ineligible or rejected for health care benefits.

VHA has not sufficiently defined through policies or procedures a standardized oversight process that describes and delineates the roles and responsibilities of HEC and VISNs—the networks that manage and oversee VAMCs in their geographic area—in monitoring and evaluating the efficiency and effectiveness of enrollment processes. Although HEC officials said they are responsible for oversight of enrollment processes system-wide, and VHA policy generally states that HEC is responsible for performing a second-level review of all enrollment determinations, policies and procedures do not document the key oversight activities that should be conducted, how often they should be done, or the data that should be assessed for ensuring timely and accurate enrollment processes system-wide.
Additionally, although HEC officials said VISNs should be overseeing VAMCs’ enrollment processes within their networks, we found that VHA’s policies do not describe these oversight roles and responsibilities. Officials from the six VISNs in our review reported different perspectives about their role in overseeing enrollment processes, and as a result, oversight activities were limited and varied across these VISNs. For example, officials from two of the VISNs in our review considered VISNs to have no role in the oversight of enrollment processes, and primarily provided information from HEC to the VAMCs within their networks. In contrast, an official from another VISN did consider VISNs to have an oversight responsibility, and that VISN is planning to develop a standard set of report requirements for VAMCs within the network to use so that the VISN would have consistent information for monitoring VAMCs’ enrollment processes.

According to federal internal control standards for a control environment, an agency should establish an organizational structure and assign responsibility for achieving its objectives. An oversight structure would help fulfill responsibilities set forth by applicable laws and regulations, and relevant government guidance. Without defining a standardized process for oversight, HEC—VHA’s entity responsible for enrollment—may be unable to determine what oversight, if any, is being conducted system-wide and may not have key information about deficiencies in processing enrollment applications. Thus, HEC is limited in its ability to effectively develop systematic solutions and ensure enrollment processes are efficient and result in accurate enrollment determinations. HEC officials said they recognized the need to improve the oversight of enrollment processes, and a goal under the National Enrollment Improvement is for HEC to have 100 percent accountability and oversight of applications—those processed both at HEC and at the VAMCs.
VHA—through HEC—does not have complete and reliable data for overseeing the timeliness of processing enrollment applications system-wide. HEC has data about processing timeliness for the applications that it receives, but HEC officials said they lack similar data for those applications received by the VAMCs—which comprised about 90 percent of the applications received system-wide in fiscal year 2016. HEC officials said they are able to monitor processing timeliness for the applications they receive because enrollment staff log the dates of the applications received into a workload tool and track monthly the processing timeliness and application status, such as the percent of applications that remained pending. In contrast, applications received by VAMCs are entered into local VistA systems that do not capture information on the date the application was received, which precludes accurate measurement of the timeliness of application processing. Although VistA captures the date enrollment staff entered the application information into the system, this date may not yield an accurate start date for measuring timeliness of processing, specifically for applications received by mail or fax, because there is no assurance the information was entered when the application was received, according to HEC officials and VAMC staff. For example, officials from one VAMC in our review said that if mailed or faxed applications are missing military service or financial information, staff do not enter the information into VistA until all the required information is obtained.

Absent the information in VistA needed to track and monitor their performance in processing enrollment applications, three of the six VAMCs in our review developed Excel spreadsheets to collect this information. These spreadsheets tracked the dates when applications were received, as well as the enrollment determination made for each.
However, such Excel spreadsheets were developed and maintained solely at the discretion of the individual VAMCs in our review.

HEC and VAMCs in our review also have varying interpretations of how to measure whether VHA’s 5-business-day timeliness standard has been met. VHA policy states the starting point for measuring adherence to its timeliness standard is the date the application was submitted online by the veteran, time-stamped when received by VAMCs or HEC, or the date the veteran came in person to apply. However, the policy does not define the end point for measuring the amount of time elapsed and does not specify whether the processing time includes the time applications are pending due to missing information. HEC officials told us the end point is the date of an enrollment determination, and measurement of timeliness should include any time the application was pending. Officials from four of the six VAMCs in our review, in contrast, said they considered the timeliness standard met when an application was entered into the system, irrespective of whether an enrollment determination was made or whether the application was pending.

According to federal internal control standards for information and communication, management should use quality information to achieve the entity’s objectives. Management obtains relevant data from internal and external sources in a timely manner based on identified information requirements. Relevant data have a logical connection with identified information requirements, and management evaluates the sources of data for reliability. Without a central repository of reliable data about enrollment processes and a clearly defined measurement of the processing standard, VHA cannot reliably and consistently oversee processing timeliness of enrollment applications, assess the extent to which VAMCs face challenges in meeting the standard, or make appropriate decisions to improve processes system-wide.
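The two competing readings of the timeliness standard can yield very different measurements for the same application. A minimal illustrative sketch of the arithmetic (the dates and function name are hypothetical examples, not drawn from VHA systems; federal holidays are ignored for simplicity):

```python
from datetime import date, timedelta

def business_days_between(start: date, end: date) -> int:
    """Count business days elapsed from start to end, excluding weekends.

    Federal holidays are ignored in this simplified sketch.
    """
    days = 0
    current = start
    while current < end:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            days += 1
    return days

# Hypothetical application: received Monday 2017-06-05, entered into the
# system Wednesday 2017-06-07, pending until a determination on 2017-06-21.
received = date(2017, 6, 5)
entered = date(2017, 6, 7)
determined = date(2017, 6, 21)

# Reading used by some VAMCs (standard met at data entry):
print(business_days_between(received, entered))     # 2 business days

# Reading described by HEC (receipt to determination, pending time included):
print(business_days_between(received, determined))  # 12 business days
```

Under the first reading, this hypothetical application meets a 5-business-day standard; under the second, it misses it by a wide margin, which illustrates why a clearly defined end point matters for oversight.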
HEC officials acknowledged their lack of adequate information to monitor timeliness of application processing system-wide and told us they plan to develop the capacity to collect this information. Under the National Enrollment Improvement, HEC officials identified several steps for collecting standardized and centralized data for conducting oversight, including (1) eliminating VAMCs’ use of VistA for processing enrollment applications and solely using the Enrollment System, which is able to capture application receipt dates; (2) developing standardized procedures for capturing in the Enrollment System the date a mailed application was received; and (3) developing a series of reporting metrics for assessing the timeliness of processing applications across different modes. However, HEC officials told us that they need VHA approval to implement these actions, and as of June 2017, they did not have a timeline for when these actions might be implemented to allow them to accurately track and report on processing timeliness across all modes and VAMCs.

HEC has efforts underway and planned to oversee the accuracy of enrollment determinations made system-wide. First, VHA—through HEC—has recently implemented an effort to review the accuracy of enrollment determinations for which veterans were found to be ineligible or rejected for health care benefits. This effort, which began in March 2017, employs a dedicated team of HEC staff to centrally conduct these secondary reviews daily, according to HEC officials. Prior to this date, VHA instructed VAMCs to conduct monthly secondary reviews of the accuracy of these enrollment determinations and report the results of these reviews to HEC. However, HEC officials said they conducted an internal review of a sample of applications that had undergone this secondary review and found that 20 to 30 percent still were incorrectly determined to be ineligible or rejected.
Although HEC officials said VAMCs should continue these secondary reviews, HEC will also conduct its own independent review of all ineligible or rejected enrollment determinations. HEC officials said they plan to use the results of the reviews for quality assurance and training purposes with HEC and VAMC enrollment staff. Additionally, HEC officials told us they plan to expand their reviews of the accuracy of enrollment determinations and are currently assessing how to conduct second-level reviews effectively system-wide. Although VHA policy states that HEC is responsible for performing a second-level review of all enrollment determinations, HEC officials said they have not been fully adhering to this requirement, primarily because they have been focused on resolving the backlog of pending applications.

Timely and accurate processing of veterans’ enrollment applications is critical to ensuring that eligible veterans obtain needed health care. Without efficient and effective enrollment processes, veterans may be delayed in obtaining needed services or incorrectly denied benefits. VHA’s current enrollment processes are decentralized and fragmented, with enrollment processing spread across 170 individual VAMCs as well as VHA’s HEC. The current processes are also prone to delays and errors, such as enrollment staff frequently not meeting the 5-day timeliness standard and making incorrect enrollment determinations when processing veterans’ enrollment applications. In particular, in some instances veterans may not have provided all the information HEC or VAMC staff need to process an enrollment application, and the application becomes pending. VAMCs, however, do not have effective processes for obtaining the information needed to resolve pending applications, which has resulted in veterans experiencing unnecessary delays while waiting for enrollment determinations.
Although HEC has developed new procedures for its staff to resolve pending applications, the procedures do not delineate whether VAMCs have a continued role in this process or whether they should be following these new procedures. A system-wide standard procedure that clarifies the roles and responsibilities of VAMC enrollment staff in resolving pending applications may help improve efficiency and help ensure that veterans receive a timely response when applying for health care benefits. Additionally, limitations in VHA’s oversight further impede its ability to ensure the timeliness of application processing and the accuracy of enrollment determinations system-wide. VHA has not sufficiently defined roles and responsibilities for HEC and VISNs for conducting oversight of enrollment processing. Without establishing and clearly communicating the entity responsible for oversight and the activities that should be routinely conducted, there are no assurances that oversight is being conducted system-wide and deficiencies are being addressed appropriately. Oversight is further challenged by the lack of reliable and consistent data needed to evaluate timeliness of processing enrollment applications, and a clearly defined policy to measure processing timeliness. Due to this lack of data needed for system-wide oversight, VHA may be unable to determine if all veterans who submit an application to VAMCs—which handle a majority of the applications system-wide—are receiving timely enrollment determinations. HEC has efforts planned to improve its oversight; implementing and assessing these efforts may help ensure the timeliness and accuracy of enrollment processes, and help VHA make appropriate system-wide process improvements. We recommend that the Secretary of Veterans Affairs direct the Acting Under Secretary for Health to take the following four actions: 1. 
Develop and disseminate a system-wide standard operating procedure that clearly defines the roles and responsibilities of VAMCs in resolving pending enrollment applications. 2. Clearly define oversight roles and responsibilities for HEC, and for VISNs as appropriate, to help ensure timely processing of applications and accurate enrollment determinations. 3. Develop procedures for collecting consistent and reliable data system-wide to track and evaluate timeliness of enrollment processes, and institute an oversight mechanism to ensure VAMC and HEC enrollment staff are appropriately following the procedures. 4. Clarify its 5-day timeliness standard for processing enrollment applications, including whether it covers the total time needed to make an enrollment determination and the time applications are pending, and ensure the clarification is communicated system-wide. We provided VA with a draft of this report for its review and comment. VA provided written comments, which are reprinted in appendix I. In its written comments, VA concurred with all four of the report’s recommendations, and identified actions it is taking to implement them. In addition, VA provided technical comments which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Veterans Affairs, the Acting Under Secretary for Health, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at draperd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. 
In addition to the contact named above, Janina Austin, Assistant Director; David Lichtenfeld, Analyst-in-Charge; Joanna Wu Gerhardt; and Joy Kim made key contributions to this report. Also contributing were Jennie Apter, Muriel Brown, Jacquelyn Hamilton, and Richard Lipinski.

Enrollment is generally the first step veterans take to access VA health care, thus timely and accurate processing of enrollment applications is critical to help ensure eligible veterans obtain needed care. The Explanatory Statement accompanying the Consolidated Appropriations Act, 2016 included a provision for GAO to examine VA's oversight of patient access to care. This report examines (1) VHA's processes for enrolling veterans for health care benefits and (2) its related oversight. GAO reviewed federal laws, regulations, and VHA policies and procedures. GAO also interviewed officials from HEC and 6 of VHA's 170 VAMCs selected to provide variation in factors such as number of enrollment applications processed and geographic location; reviewed actions to resolve a randomly selected, nongeneralizable sample of pending enrollment applications from these 6 VAMCs; and interviewed HEC and VAMC officials on oversight of enrollment processes. The Department of Veterans Affairs' (VA) Veterans Health Administration's (VHA) implementation and oversight of enrollment processes need improvement to help ensure the timely enrollment of veterans for health care benefits. VHA frequently did not meet timeliness standards: VHA studies conducted in 2016 revealed that enrollment staff frequently did not process veterans' enrollment applications within the timeliness standard of 5 business days. These issues were found both at VHA's Health Eligibility Center (HEC)—the central enrollment processing center—and at local VA medical centers (VAMC) that also process enrollment applications. 
In response to an audit recommendation, HEC officials said they have begun prioritizing workload to help meet the timeliness standard. Additionally, the overall time needed to process enrollment applications may increase when staff need to place applications in a pending status, as pending applications require additional information, such as military service information, for staff to make enrollment determinations. However, none of the six VAMCs GAO reviewed had a specific policy for how to resolve pending applications. GAO found that VAMC enrollment staff had not resolved more than half of the pending applications GAO reviewed at these six VAMCs, some of which had been pending for more than 3 months at the time of the review. Although HEC developed new procedures for its enrollment staff to use when resolving pending applications, these procedures were not communicated to VAMCs. Officials from the VAMCs GAO reviewed said that they had not received guidance on these procedures and were confused about whether they would continue to have a role in this process. In the absence of HEC communication with VAMCs, there may be inefficiencies in resolving pending applications. VHA, through HEC, is assessing efforts to improve the timeliness of enrollment application processing and the accuracy of enrollment determinations. VHA lacks a standardized oversight process and reliable data to monitor enrollment processes system-wide: Although HEC officials said they are responsible for oversight of enrollment processes system-wide, VHA has neither sufficient policies that delineate this role nor procedures that document key oversight activities that should be conducted. For example, policies do not describe the oversight activities HEC should conduct to help ensure the accuracy of enrollment determinations system-wide. 
Further, VHA does not have reliable data for overseeing the timeliness of processing enrollment applications at VAMCs, which process 90 percent of the applications system-wide. Officials from HEC and from the six VAMCs in GAO's review also had varying interpretations of how to measure the timeliness standard. For example, officials from four of the six VAMCs said the standard was met when enrollment staff entered an application into their local system, irrespective of whether an enrollment determination was made. In contrast, HEC officials said the measurement encompasses the time needed to make an enrollment determination, including any time the application was pending. Without reliable data that are consistently measured, VHA cannot accurately oversee the timeliness of application processing system-wide, or assess the extent to which VAMCs face challenges in implementing enrollment processes. To improve oversight, VHA, through HEC, recently implemented an effort to review the accuracy of some enrollment determinations. GAO recommends that VHA (1) define the responsibilities of VAMCs in resolving pending enrollment applications; (2) define oversight responsibilities to help ensure timely application processing and accurate enrollment determinations; (3) develop procedures for collecting reliable data system-wide to evaluate the timeliness of application processing; and (4) clarify its 5-day timeliness standard. VA concurred with all of GAO's recommendations and identified actions it is taking to implement them. |
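The competing readings of the 5-day timeliness standard described above can be made concrete with a short sketch. This is purely illustrative: the function names, dates, and business-day logic are assumptions for the example, not VHA's actual measurement method.

```python
from datetime import date, timedelta

def business_days_between(start: date, end: date) -> int:
    """Count business days (Mon-Fri) from start to end, exclusive of start."""
    days = 0
    current = start
    while current < end:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 ... Friday=4
            days += 1
    return days

def meets_standard_hec(received: date, determined: date) -> bool:
    """HEC's reading: the clock runs from receipt to the enrollment
    determination, including any time the application spent pending."""
    return business_days_between(received, determined) <= 5

def meets_standard_entry(received: date, entered: date) -> bool:
    """The reading described by some VAMC officials: the standard is met
    once the application is entered into the local system, regardless of
    whether a determination has been made."""
    return business_days_between(received, entered) <= 5

# A hypothetical application received Monday June 5, entered Wednesday
# June 7, but not determined until Wednesday June 14 passes the
# entry-based reading (2 business days) while failing HEC's
# determination-based reading (7 business days).
print(meets_standard_entry(date(2017, 6, 5), date(2017, 6, 7)))   # True
print(meets_standard_hec(date(2017, 6, 5), date(2017, 6, 14)))    # False
```

The same application can thus count as timely under one interpretation and late under the other, which is why consistently measured data matter for system-wide oversight.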
The United States is exposed to several major hazards, in particular earthquakes and hurricanes, in coastal areas. As shown in figure 1, the Pacific, South Atlantic, and Gulf Coasts face the highest risk of earthquakes and hurricanes. According to the National Oceanic and Atmospheric Administration (NOAA), 53 percent of the nation’s total population, or approximately 153 million people, lived in coastal counties in 2003. Moreover, the total coastal population increased by 33 million people, or 28 percent, between 1980 and 2003. California led in coastal population change, with the number of residents increasing by 9.9 million people. Florida showed the greatest percentage population change between 1980 and 2003, increasing nearly 75 percent. The nation’s coastal population is expected to increase by more than 7 million people by 2008 (over current levels) and by 12 million people by 2015. The housing supply in coastal areas also continues to grow, despite the high risk of earthquakes and hurricanes. NOAA reported that coastal counties contained 52 percent of the nation’s total housing supply in 2000. The leading states in terms of total housing units in coastal counties were California, Florida, and New York, which together have 41 percent of the total housing supply in these counties. One study put the estimated insured value of coastal property in states bordering the Atlantic Ocean and Gulf of Mexico at $7.2 trillion as of December 2004. As shown in figure 2, properties along the Pacific and North-Atlantic Coasts and the Gulf of Mexico have some of the highest insured property values. The value of residential and commercial coastal property in Florida and New York was $1.94 trillion and $1.90 trillion, respectively, in 2004. Insurance coverage against natural catastrophes for a home may or may not be included in homeowners insurance contracts. For example, coverage against wind loss from an event such as a hurricane is typically included. 
However, in some areas of certain states—mostly coastal regions—wind coverage may be excluded from homeowners insurance contracts and may be available only through the surplus lines insurance market or a state-managed entity. Similarly, earthquake coverage is commonly excluded from homeowners insurance contracts and instead is sold separately by insurance companies or, in the case of California, through a state-managed program. The price of property and casualty insurance is affected by both the annual expected loss and the cost of diversifying the risk of catastrophic losses. Insurers can diversify the risk of catastrophic losses by, among other things, purchasing reinsurance, which is insurance for insurance companies, or by selling financial instruments such as catastrophe bonds. Insurance companies do not know in advance what their actual costs are going to be, because they can determine these costs only after a policy has expired. The insurer’s objectives are to calculate premiums that will make the business profitable, enable the company to compete effectively with other insurers, and allow the company to pay claims and expenses as they occur. When insurers, reinsurers, and investors in catastrophe financial instruments perceive that the expected frequency or severity of natural catastrophes has increased, they may increase the price of insurance. If a company believes that the risk of loss—for example, from flooding or earthquake—is unacceptably high given the rate that can be charged, it declines to offer coverage. While the federal government retains the authority to regulate insurance, it has given primary responsibility for insurance regulation to the states, in accordance with the McCarran-Ferguson Act of 1945. State insurance commissioners are responsible for regulating rates, monitoring the availability of insurance, and assessing insurance firms’ solvency. The insurance regulators of the 50 states, the District of Columbia, and the U.S. 
territories have created NAIC to coordinate regulation of multistate insurers. NAIC serves as a forum for the development of uniform policy, and its committees develop model laws and regulations that, when adopted by state legislatures or promulgated by state regulators, govern the U.S. insurance industry. Critics of state insurance regulation argue that insurance prices and terms of coverage, particularly for homeowners insurance in areas prone to natural catastrophes, are highly regulated and that the insurance industry is generally not allowed to respond freely to changing risks or market conditions. In particular, these critics say that insurance regulators do not allow private insurers in catastrophe-prone areas to charge rates sufficient to build surpluses or transfer risks to reinsurers, that regulators may be subject to voter pressure and thus to legislative pressure to keep insurance premiums affordable and coverage readily available, and that regulatory and political restrictions prevent markets from giving consumers accurate price signals regarding the risks of living in catastrophe-prone areas. NAIC officials told us that projected loss costs to cover the insurer’s catastrophe exposure vary widely depending on which risk-modeling firm the insurer selects. Only future results will show whether insurance company actuaries or insurance regulator actuaries are correct. The officials said that one should not assume that insurers and their actuaries have perfect information about what catastrophes will occur during the next year and about how the economy will behave. They added that one should also not assume that actuaries working for insurance companies are always correct in their projections of the needed price for the future experience period and that actuaries working for insurance regulators are always wrong. 
In the aftermath of natural catastrophes, some insurers responded by limiting their exposure in catastrophe-prone areas with restrictions on underwriting, higher deductibles, and lower coverage limits. In particular, there were property insurance affordability and availability crises in the Gulf Coast states or Florida after Hurricane Camille in 1969, Hurricane Celia in 1970, Hurricane Andrew in 1992, and the 2005 hurricanes; and in California following the Northridge Earthquake in 1994. Various proposals have been put forth over the past 15 years seeking to have the federal government take a larger role—for example, as a reinsurer or by allowing insurance companies to accumulate tax-deferred reserves—in addressing the affordability and availability of natural catastrophe insurance. The federal government engages in a wide variety of insurance activities, among them providing multiperil crop insurance to farmers and flood insurance to homeowners and businesses. In addition, the federal government provides disaster assistance to individuals and households. FEMA, SBA, and HUD are the primary agencies administering federal disaster relief and recovery programs. FCIC provides insurance coverage for farmers who suffer financial losses when their crops are damaged by droughts, floods, or other natural disasters. By law, FCIC pays the premium for catastrophic coverage, which covers losses in excess of 50 percent of a farm’s normal yield at 55 percent of the market price. In addition, FCIC offers premium subsidies for “buy-up” coverage against crop, revenue, and prevented planting losses, with coverage for losses ranging from 50 to 90 percent of a farm’s normal yield. FCIC estimates that participation of eligible farmers is approximately 80 percent of acres planted. FEMA, through NFIP, offers insurance to homeowners and businesses for losses due to flooding and currently has 5.3 million policyholders. 
By law, NFIP must offer reduced premium rates for homes built in floodplains prior to the creation of flood insurance rate maps (pre-Flood Insurance Rate Map (FIRM) properties). About one quarter of NFIP policies are pre-FIRM and pay about 40 percent of the risk-based rate. According to NFIP, homes built in floodplains to an approved building code after the creation of flood maps pay actuarially sound premiums. Participation in the program is mandatory for homeowners with mortgages issued by federally regulated lenders on properties in special flood hazard areas (SFHA) where flood insurance is available. According to the RAND Corporation, about half of all homeowners who live in SFHAs purchase flood insurance. In addition to providing crop, flood, and other insurance, the federal government provides disaster assistance to individuals. FEMA provides disaster relief and recovery assistance to individual citizens through its Individuals and Households Program (IHP), which is intended to provide money and services to people in a disaster area when losses are not generally covered by insurance and property has been damaged or destroyed. IHP includes Housing Assistance (HA) and Other Needs Assistance (ONA). FEMA may provide five types of HA: financial assistance to rent temporary housing, “direct” temporary housing assistance, repair assistance, replacement assistance, and permanent housing construction in certain areas outside of the continental United States and other remote areas. FEMA may provide ONA grant funding for transportation expenses, medical and dental expenses, and funeral and burial expenses. ONA grant funding may also be available to replace personal property, repair and replace vehicles, and reimburse moving and storage expenses under certain circumstances. IHP is not intended to restore damaged property to its predisaster condition. 
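The catastrophic (CAT) crop coverage terms described earlier (yield losses beyond 50 percent of a farm's normal yield, indemnified at 55 percent of the market price) reduce to a simple per-acre calculation. The sketch below is a hedged illustration of that arithmetic only; actual FCIC indemnity calculations involve additional factors not modeled here.

```python
def cat_indemnity_per_acre(normal_yield: float, actual_yield: float,
                           market_price: float) -> float:
    """Illustrative sketch of catastrophic (CAT) crop coverage: yield
    losses beyond 50 percent of normal yield are indemnified at 55
    percent of the market price. Not an actual FCIC formula."""
    yield_guarantee = 0.50 * normal_yield          # covered yield floor
    shortfall = max(0.0, yield_guarantee - actual_yield)
    return shortfall * 0.55 * market_price

# A hypothetical farm with a 100-bushel normal yield that harvests only
# 30 bushels at a $4.00 market price: (50 - 30) * 0.55 * 4.00 = $44/acre.
print(round(cat_indemnity_per_acre(100, 30, 4.00), 2))   # 44.0
```

A harvest above the 50 percent guarantee (say, 60 of 100 bushels) produces no indemnity under this sketch, which is why CAT coverage is described as catastrophic-level protection.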
SBA’s Disaster Loan Program (DLP) is the primary federal program for funding long-range recovery for private sector, nonfarm disaster victims. Eligible losses include uninsured or underinsured damages and cannot duplicate benefits received from another source (e.g., insurance recovery or FEMA assistance). The Small Business Act authorizes SBA to make available the following two types of disaster loans: (1) physical disaster home loans to homeowners, renters, and businesses of all sizes, and (2) economic injury disaster loans to small businesses. Homeowners and renters can borrow up to $40,000 for repair or replacement of household and personal effects. Homeowners can also borrow up to $200,000 to repair or replace a primary residence. Businesses of all sizes can borrow up to $1.5 million to repair or replace disaster-damaged real estate, machinery and equipment, inventory, and similar property. Small businesses can borrow up to $1.5 million for disaster-related economic injury resulting from the declared disaster. The combined loans to a business for physical loss and economic injury cannot exceed $1.5 million. Homeowners and businesses must provide reasonable assurance that they can repay the loan out of personal or business cash flow, and they must have satisfactory credit and character. HUD also provides disaster recovery assistance through several programs. After the 2005 hurricanes, Congress appropriated $16.7 billion to the Community Development Block Grant (CDBG) program for disaster recovery. The CDBG program generally provides funding to metropolitan cities and urban counties that have been designated as entitlement communities and to states for distribution to other communities. Grant recipients must give maximum feasible priority to activities, including emergency-related activities, that (1) benefit low- and moderate-income families or aid in the prevention or elimination of slums or blight, or (2) meet urgent community development needs. 
However, HUD can waive regulatory and statutory program requirements to increase the flexibility of the CDBG funds for disaster recovery. These grants afford states and local governments a great deal of discretion to help them recover from presidentially declared disasters. Government natural catastrophe insurance programs were created because certain perils are difficult to insure privately and because, when private insurance is available, it may not be affordable. To keep natural catastrophe insurance available and affordable, government insurance programs operate differently than private insurance companies. Private insurance companies generally rely on premiums collected from those they insure to cover operating costs and losses and set premium rates at levels that are designed to reflect the risk that the company assumes in providing the insurance. These companies may also accumulate reserves to cover large losses. Federal and state government insurance programs also collect up-front premiums, but their rates do not always reflect the risks that the programs assume. Because premiums are inadequate to cover operating costs and losses, the government programs generally have limited resources and often face deficits after disasters. However, unlike private insurers, federal insurers may obtain funds after a catastrophic event through emergency appropriations. State programs may also access postevent funding through various means, including assessments on private insurers, bonds, and private reinsurance. State programs may also be postfunded through state general revenue funds and federal disaster relief payments. This structure has several implications. First, it may encourage homeowners in catastrophe-prone locations to seek coverage from government programs, crowding out the private market and increasing the government’s financial exposure. Second, homeowners may not receive appropriate price signals about the risk of living in catastrophe-prone locations. 
Third, taxpayers who live in less risky locations may be subsidizing those living in catastrophe-prone locations. Finally, the added burden of private insurers’ assessment obligations may provide another reason for them to leave already stressed markets. Federal natural catastrophe insurance programs fill gaps in private insurance markets and help limit disaster relief payments. For example, FCIC and NFIP were created because private insurers had determined that multiperil crop and flood losses were uninsurable and declined to provide coverage. A 1937 study by the Executive Committee on Crop Insurance, which noted that commercial attempts to insure against crop losses had been unsuccessful, provided the impetus for creating FCIC in 1938. Initially, the program was experimental and suffered heavy losses. The Federal Crop Insurance Act of 1980 expanded the program to replace free disaster coverage (in the form of compensation to farmers who were unable to plant crops and who suffered yield losses) with insurance. The flood insurance program was initiated because it had become clear by the 1950s that private insurance companies could not profitably provide affordable flood coverage because of the catastrophic nature of flooding and the impossibility of developing an actuarial rate structure that could adequately reflect the risk to flood-prone properties, among other reasons. One of the primary purposes of the National Flood Insurance Act of 1968, which created NFIP, was to reduce federal expenditures for disaster assistance and flood control. State natural catastrophe insurance programs were created to avoid homeowners insurance crises that threatened the states’ housing markets. For example, the California Earthquake Authority was formed in 1996 in response to a crisis in the residential property insurance market following the Northridge earthquake in 1994. 
According to the Insurance Information Institute, California insurers had collected only $3.4 billion in earthquake premiums in the 25-year period prior to the Northridge earthquake but had paid out more than $15 billion on Northridge claims alone. Moreover, insurers representing about 95 percent of the homeowners insurance market in California began to limit their exposure to earthquakes by writing fewer or no new homeowners insurance policies, triggering a crisis that threatened California’s housing market and stalled the state’s recovery from recession. See appendix II for a more detailed description of state natural catastrophe insurance programs. Florida Citizens is a nonprofit tax-exempt entity that provides residential and commercial property insurance coverage when private insurance is not available. Florida Citizens was established in 2002 after two separate insurance pools—the Florida Windstorm Underwriting Association (FWUA) and the Florida Residential Property and Casualty Joint Underwriting Association (JUA)—were combined. In addition, the Florida Hurricane Catastrophe Fund (FHCF) provides an alternative to traditional hurricane reinsurance, reducing the cost of coverage significantly below that of private reinsurance and lowering the cost of insurance to homeowners. The FHCF was established in 1993 in response to Hurricane Andrew, which resulted in a severe shortage of catastrophe property reinsurance capacity, stricter policy terms and conditions, and sharp increases in property catastrophe reinsurance rates in the year following the storm. The post-Andrew reaction of a number of insurance companies was to attempt to reduce their underwriting exposure, and 39 insurers stated in early 1993 that they intended to either cancel or not renew approximately 844,000 policies in Florida. Other states—including Alabama, Louisiana, Mississippi, and Texas—have created state funds to make natural catastrophe insurance available and affordable. 
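The imbalance just described is often summarized as a loss ratio, claims paid divided by premiums collected. A minimal sketch using the Insurance Information Institute figures cited above:

```python
def loss_ratio(claims_paid: float, premiums_collected: float) -> float:
    """Claims paid divided by premiums collected; a ratio above 1.00
    means more was paid out in claims than was taken in as premiums."""
    return claims_paid / premiums_collected

# California earthquake line through Northridge: more than $15 billion
# paid against $3.4 billion collected over the prior 25 years.
print(round(loss_ratio(15e9, 3.4e9), 2))   # 4.41
```

A ratio of roughly 4.4, sustained across a 25-year premium base, illustrates why insurers moved to limit their earthquake exposure afterward.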
Because government natural catastrophe insurance programs are often created to ensure the availability and affordability of natural catastrophe insurance, homeowner premiums for these programs—although risk-related—are generally not based entirely on the homeowner’s level of risk. Federal natural catastrophe insurance program premium rates are often set by statute and involve government subsidies. For example, to encourage broad participation in the crop insurance program, federal law seeks to ensure that the premiums are affordable to all farmers by requiring FCIC to pay a portion of the premium cost. Specifically, FCIC offers farmers varying subsidy rates for crop insurance, depending on the level of protection they seek. Crop insurance subsidies totaled about $2.3 billion in crop years 2005 and 2006. In addition, federal crop insurance legislation directs FCIC to operate at a loss ratio of no more than 1.075. A loss ratio greater than 1.00 indicates that the program paid more in claims than it collected in premiums. Furthermore, we have previously reported that NFIP is not designed to be actuarially sound. Annually, flood insurance subsidies total about $1.3 billion. State natural catastrophe insurance program premium rates may also be set by statute. Florida Citizens historically has been required to maintain premium rates that were not competitive with the private insurance market. However, in January 2007, the Florida Legislature allowed Florida Citizens to charge competitive rates. Even before this change, by 2006 Florida Citizens was the largest property insurer in Florida. It receives much of its reinsurance coverage from the FHCF, which charges premium rates that are estimated to be about a quarter to a third of the cost of private market reinsurance. The program can charge these rates because of its tax-exempt status and ability to postfund claims losses through bonds, among other advantages. 
These two state programs are able to charge lower premiums than private insurance companies, encouraging more people to seek coverage in the state programs and leaving the state more financially vulnerable in the event of a large hurricane. State natural catastrophe insurance program premium rates are also subject to approval by state insurance regulators that have generally resisted rate increases. The Mississippi Windstorm Underwriting Association (Mississippi Windpool) provides coverage against windstorms and hail for people in the six coastal counties of Mississippi who might not be able to get wind coverage in the private insurance market. After Hurricane Katrina, the Mississippi Windpool sought a rate increase of almost 400 percent, primarily to cover the increased cost of reinsurance. The state insurance regulator granted a 90 percent increase. Furthermore, the state government will use $50 million in federal disaster recovery funds provided by HUD to offset the increased cost of reinsurance in 2007 and 2008. In addition, the state government created a reinsurance fund that uses state general revenue funds to offset the increased cost of reinsurance. Similarly, the Texas Windstorm Insurance Association (Texas Windpool) offers wind and hail coverage in 14 coastal counties and other specified areas. By law, Texas Windpool residential and commercial premium rates may not increase more than 10 percent above the rates for noncommercial windstorm or hail insurance that are in effect at the time the request for an increase is filed. However, the insurance commissioner may suspend this rule to ensure rate adequacy in the catastrophe area. In May 2006, the Texas Windpool sought a 19 percent residential and 24 percent commercial rate increase, but the insurance commissioner approved a 3.1 percent residential and 8 percent commercial rate increase. 
When the Texas Windpool sought a 20 percent residential and 22 percent commercial rate increase in November 2006, the insurance commissioner approved a 4.2 percent residential and 3.7 percent commercial rate increase. In both instances, the insurance commissioner stated that he favored an incremental approach to strengthening the Texas Windpool that did not put an undue economic burden on coastal homeowners. Unlike private insurance companies, government natural catastrophe insurance programs often do not employ accrual accounting and are not always required to accumulate adequate resources to meet their obligations. Generally, insurance premiums are paid in advance, but the period of protection extends into the future. Private insurers are required by statutory accounting rules to establish reserves for incurred or known claims and for the cost of “incurred but not reported” claims to ensure that the premiums collected in advance will be available to pay future losses. Incurred but not reported claims are insured losses that have already happened but that for any of a variety of reasons have not yet been reported to the insurer. Most government natural catastrophe insurance programs are not required to have these resources, because they are structured to postfund losses. As we have previously mentioned, NFIP and the federal crop insurance program are postfunded by emergency appropriations from federal taxpayers. State programs are generally postfunded by several mechanisms, including assessments on private insurers, bonds, and proceeds from general revenues. In most property and casualty insurance lines, state assessments are often passed through to policyholders. As a result, homeowners living in less risky locations also contribute to cover the shortfall—a scenario known as cross-subsidization. 
In those states where assessments cannot be passed through in some manner, private insurers must pay the assessments, while at the same time paying large claims from their own policyholders. In such instances, some companies may be reluctant to continue offering coverage in the state or may become insolvent. In the wake of recent natural catastrophes, some government natural catastrophe insurance programs suffered losses that eliminated their accumulated resources. For example, NFIP reported unexpended cash of approximately $1 billion following fiscal year 2004, but the program had suffered almost $16 billion in losses from Hurricane Katrina alone as of May 31, 2007. Similarly, Florida Citizens’ high-risk account had a surplus of approximately $1.1 billion prior to the 2004 hurricane season, but the program incurred over $2 billion in losses from the 2004 hurricanes and almost $2 billion in losses from the 2005 hurricanes. The FHCF had accumulated net assets of $5.5 billion at the end of the 2004 fiscal year but had an estimated shortfall of approximately $1.4 billion following reimbursements to participating insurers after the 2004 and 2005 hurricane seasons. Prior to 2007, the Mississippi Windpool did not have resources beyond premiums and reinsurance because year-end profits and losses were shared by member companies. By the end of 2005, following Hurricane Katrina, the Mississippi Windpool had incurred a net loss of $473 million. In Louisiana, Citizens Property Insurance Corporation (Louisiana Citizens), which has a structure similar to that of Florida Citizens, had $80 million in cash reserves prior to the 2005 hurricane season but suffered more than $1 billion in losses after Hurricanes Katrina and Rita. Emergency appropriations authorizing funding for federal natural catastrophe insurance programs after disasters have often been significant. 
In the case of FCIC, not only are premium rates subsidized by almost 59 percent for the most popular coverage, but farmers may receive additional emergency disaster relief—for example, farmers received $1.6 billion following Hurricane Katrina. In the case of NFIP, not only are premium rates for pre-FIRM homes subsidized up to 60 percent on average, but after Hurricane Katrina NFIP was authorized to borrow over $20 billion to pay claims. State natural catastrophe insurance programs have also often required postfunding to satisfy their obligations in the wake of large natural catastrophes. For example, to fund its 2004 and 2005 deficits, Florida Citizens assessed insurance companies in most property and casualty lines $516 million and $205 million, respectively, and these amounts will be passed through to policyholders. In addition, the Florida Legislature appropriated $715 million from the general revenue fund to reduce the size of the 2005 deficit. Furthermore, to fund a bond issuance to cover the FHCF’s shortfall, eligible Florida insurance policyholders incurred a 1 percent assessment that will be levied over at least 6 years beginning in January 2007. In June 2006, the FHCF issued a $1.35 billion postevent revenue bond to cover 2005 losses, and in July 2006 it issued a $2.8 billion preevent financing bond to provide liquidity for 2006 and future years. Similarly, Louisiana Citizens assessed all property insurance companies in the state $193 million after the 2005 hurricanes. It has also issued a postevent bond for $978 million to cover 2005 losses that will be financed by emergency assessments on insurers in certain lines of property and casualty insurance. These assessments are levied directly on policyholders, who may claim a tax credit against state income tax. The assessments will continue for as many years as needed to cover the plan’s deficit. 
Both Florida Citizens and Louisiana Citizens have been declared municipalities rather than insurance companies by their respective state legislatures, and as a result cannot declare bankruptcy until the bond obligations are satisfied. In addition, the Mississippi Windpool funded its deficit through $525 million in assessments on member companies in proportion to their share of business in the state. At the time, these assessments could not be directly passed through to policyholders. At least one private insurance company found that its assessment liability was more than the entire amount of premiums it collected in the state and was forced to liquidate. Finally, the Texas Windpool assessed private insurance companies in Texas for the first $100 million in program losses and expenses from Hurricane Rita beyond its ability to pay from premiums and other income. The 2005 hurricanes illustrated both how many Americans are uninsured or underinsured against natural catastrophes and the extent of the federal government’s role in recovery. An analysis by HUD found that of the 192,820 owner-occupied homes with major or severe damage from Hurricanes Katrina, Rita, and Wilma, approximately 78,000, or about 41 percent, did not have any insurance or did not have enough insurance to cover the damage incurred. Homeowners do not purchase natural catastrophe insurance for a variety of reasons, including financial reasons. Moreover, buying a natural catastrophe insurance policy does not guarantee complete coverage for a dwelling. For example, if the home’s replacement value is calculated inaccurately, the homeowner will buy too little insurance to cover all of the damage. More and more frequently, responsibility for supporting the needs of individuals who lack adequate insurance against natural catastrophe risk is falling to the federal government. 
We estimate that the federal government made approximately $26 billion available for homeowners and renters who lacked adequate insurance in response to the 2005 hurricanes. Homeowners may not purchase natural catastrophe insurance because they face budget constraints, underestimate the risk they face, or fail to understand the protection such insurance affords. Information on the number of individuals who are uninsured against natural catastrophe risks is somewhat limited but helps demonstrate the extent to which homeowners do not purchase natural catastrophe insurance. About 41 percent of homes that sustained severe damage from any peril during the 2005 hurricanes were uninsured or underinsured. HUD reported that of the 60,196 owner-occupied homes with major or severe wind damage, almost 23,000, or 38 percent, lacked insurance against wind loss. Also, the Insurance Information Institute reported that about 86 percent of Californians did not have earthquake insurance on their homes in 2004. Furthermore, only about one half of eligible single-family homes in Special Flood Hazard Areas (SFHA) nationwide have purchased flood insurance. In areas outside of SFHAs, where flood insurance is voluntary, only about 1 percent of owners of single-family homes have purchased flood insurance, even though 20 to 25 percent of NFIP’s claims come from outside of SFHAs. Purchasing insurance to protect homes against natural catastrophes is mandatory for some homeowners, but often it is voluntary. For example, homeowners who do not have mortgages are generally not required to have property and casualty coverage, and in some areas certain types of hazards are routinely excluded from homeowners policies. As we have seen, wind coverage is often excluded in some coastal areas, and the surplus lines market or a state-managed entity may offer coverage separately. 
Although lenders may require homeowners to purchase this supplemental insurance, those who own their homes outright may choose not to buy it. A similar situation exists with earthquake coverage in certain areas of the country. In earthquake-prone areas, earthquake coverage is commonly excluded from the homeowners insurance contract and is sold separately by insurance companies or, as in the case of California, by a state-managed program. In general, lenders do not require earthquake insurance as a condition of extending a mortgage. Consumers will purchase natural catastrophe insurance on the basis of their perception of risk. Studies have shown that consumers often consider the likelihood of a future catastrophe to be much lower than insurance companies’ estimates. According to academic research, some homeowners may underestimate the risk of loss, have an overly optimistic view of expected losses, or be unaware that insurance is available. One insurance expert has concluded that if people believe that the chance of a serious event occurring is low, they often consider insurance unnecessary and will not seek out information on its benefits and costs. Reluctance to purchase insurance protection can be compounded by budget constraints. For some homeowners with relatively low incomes, disaster insurance is considered an expense that can be made only after taking care of necessities. An insurance expert has noted that insurance trade associations, consumer advocacy groups, and governments can provide better information to consumers about risk probabilities, insurer profitability, and prices to motivate better insurance purchasing behavior. One study of those living in earthquake zones has identified a variety of reasons for declining to purchase earthquake insurance. Some consumers are unwilling or reluctant to pay high premiums to insure against potentially large but rare disaster losses. 
Some consumers believe that the deductible for earthquake insurance—the standard deductible is 15 percent of the value of the home—is too high, given the premium rates and amount of coverage provided. A study of flood insurance market penetration rates cites several reasons why people do not purchase flood insurance. For property owners in SFHAs, the decision to purchase insurance is affected primarily by its price. Outside of SFHAs, property owners are not purchasing flood insurance because they may not be aware of flood risk, and because flood insurance agents have less interest in promoting flood insurance and in learning how to write flood policies. Also, certain limitations of the coverage, such as limits on basement flooding, make the policies less attractive in inland areas. Homes may be underinsured because replacement costs are not calculated accurately. Replacement cost has been defined as the amount necessary to repair or replace the dwelling with material of like kind and quality at current prices. Replacement cost may not be calculated accurately for several reasons, including the effects of inflation, custom home building, remodeling, high demand for contractors, and changes in building codes following a natural catastrophe. Generally, property insurance losses are partial losses rather than total losses. However, in catastrophe-prone areas, the prospect of a total loss of property is real. If a homeowner suffers a total loss of property as a result of a natural catastrophe and the replacement cost has not been properly calculated, the property will not be fully insured. An insurance industry consultant estimates that in 2006 approximately 58 percent of the residential housing stock in the United States was undervalued for insurance purposes by an estimated 21 percent. Homeowners insurance coverage can vary by type of policy and from insurer to insurer, but there are fundamental similarities. 
The broadest coverage generally provides that a policyholder will receive full replacement cost with no deduction for depreciation (up to the policy limit) if a policyholder maintains coverage limits of 80 percent or more of the dwelling’s full replacement cost. Otherwise, the homeowner receives a lesser amount according to the formula in the policy (see sidebar). The reasons that replacement costs may not be calculated accurately, leaving homeowners underinsured, are complex. First, replacement costs must be periodically updated to account for inflation. Second, beginning in the early 1980s developers began building more custom homes, and a significant percentage of homes were remodeled, sometimes extensively. Historically, the methodologies that the insurance industry used to calculate replacement costs did not always capture custom features. The industry has improved its calculation methodologies, but an insurance industry consultant told us that a large number of policies had not been properly updated. Furthermore, homeowners whose properties were remodeled may not have understood the need to tell their insurers about the remodeling, possibly to avoid rate increases. The problem of underinsurance can be exacerbated in the wake of a natural catastrophe when demand for contractors and materials to repair homes is high and the supply is tight. This phenomenon is known as “demand surge.” In these circumstances, the short-term costs of repairing and rebuilding homes can escalate substantially, and replacement costs become significantly higher. In addition, over time a community may implement improved building codes, so that rebuilding may have to conform to stricter standards than those that were in place when a dwelling was first built. This situation can also make replacement costs much higher, as it did in Florida in the aftermath of Hurricane Andrew in 1992. 
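The 80 percent replacement-cost condition described above can be made concrete with a short numerical sketch. This is a simplified rendering of a common coinsurance formula, not the language of any particular policy (actual loss-settlement provisions vary by insurer and may also compare the result against actual cash value); the function and parameter names are illustrative.

```python
def coinsurance_payment(loss, coverage_limit, replacement_cost,
                        required_pct=0.80):
    """Simplified 80 percent replacement-cost (coinsurance) rule.

    Illustrative only: real policy formulas differ by insurer.
    """
    required = required_pct * replacement_cost
    if coverage_limit >= required:
        # Insured to at least 80% of full replacement cost:
        # the full repair cost is paid, up to the policy limit.
        return min(loss, coverage_limit)
    # Underinsured: payment is scaled down by the ratio of
    # coverage carried to coverage required.
    return min(loss * coverage_limit / required, coverage_limit)

# A home with a $300,000 replacement cost insured for only
# $180,000 (60 percent of value) suffers a $100,000 loss;
# the insurer pays 180,000 / 240,000 of the loss, or $75,000.
print(coinsurance_payment(100_000, 180_000, 300_000))  # 75000.0
```

Under this formula, a homeowner carrying 60 percent of replacement cost absorbs a quarter of even a partial loss, which is how inaccurate replacement-cost estimates translate into underinsurance.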
As of May 2007, Congress approved approximately $88 billion in emergency appropriations to assist in relief and recovery efforts in the Gulf Coast states following the 2005 hurricanes. Three federal agencies—FEMA, SBA, and HUD—received over $60 billion, or about two-thirds, of this amount. As we have previously noted, these agencies play a significant role in distributing federal disaster relief funds to individual victims. We estimate that, as of June 2007, the agencies had obligated approximately $26 billion, or between a quarter and a third, of the emergency appropriations to homeowners and renters in Alabama, Florida, Louisiana, Mississippi, and Texas who lacked adequate insurance (see fig. 3). Federal disaster assistance for homeowners and renters comes from FEMA, SBA, and HUD. For example: For disasters declared between October 1, 2004, and October 1, 2005, FEMA could provide a maximum of $26,200 for housing and other needs assistance to an individual or household in a disaster area if property was damaged or destroyed and the losses were not covered by insurance. In total, FEMA obligated over $15 billion to homeowners and renters through IHP grants and manufactured housing. We have reported extensively on the difficulties that FEMA experienced in distributing disaster assistance through IHP. Homeowners and renters can borrow up to $40,000 in personal property loans from SBA to repair or replace clothing, furniture, cars, and appliances damaged or destroyed in a disaster. SBA can also make real property loans up to a maximum of $200,000 to repair or restore a main residence to its predisaster condition. Any proceeds from insurance coverage on the personal property or home are deducted from the total loan amount. The interest rates on SBA disaster loans do not exceed 4 percent for those who are unable to obtain credit elsewhere or 8 percent for those who can get other credit. 
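The SBA loan terms above imply a simple eligible-amount calculation. The sketch below is purely illustrative: the function name is invented, and it assumes insurance proceeds are subtracted from the damage estimate before the statutory cap is applied, which simplifies SBA's actual underwriting rules.

```python
def eligible_real_property_loan(damage, insurance_proceeds,
                                cap=200_000):
    """Illustrative sketch of the SBA real property loan limit:
    uninsured damage, capped at $200,000. Actual SBA eligibility
    rules are more detailed than this.
    """
    uninsured = max(damage - insurance_proceeds, 0)
    return min(uninsured, cap)

# $250,000 of damage to a main residence, $90,000 of which is
# paid by the homeowner's insurance:
print(eligible_real_property_loan(250_000, 90_000))  # 160000
```

The same offset logic applies to the $40,000 personal property loan, with the smaller cap substituted.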
As of January 31, 2007, SBA approved over $5 billion in disaster loans for homeowners and renters after the 2005 hurricanes, at an interest subsidy cost of almost $800 million to the federal government. We have reported on the difficulties that SBA experienced in distributing disaster loans. The largest recovery program for homeowners and renters after the 2005 hurricanes was HUD’s CDBG program, which received $16.7 billion in supplemental appropriations to help homeowners with long-term recovery (including providing funds for uninsured damages), restore infrastructure, and fund mitigation activities in the declared disaster areas of Alabama, Florida, Louisiana, Mississippi, and Texas. To receive CDBG funds, HUD required that each state submit an action plan describing how the funds would be used, but the agency waived some program requirements for disaster recovery purposes. For example, HUD granted a waiver to Mississippi so that a portion of the CDBG funds could be used to pay reinsurance costs for 2 years for wind pool insurance maintained by the Mississippi Windpool. Two of the states receiving the largest allocation from the emergency CDBG appropriations were Louisiana and Mississippi, both of which opted to direct the vast majority of their housing allocations to homeowners. Both states based the amount of compensation that homeowners received on the value of their homes before the storms and the amount of damage that was not covered by insurance or other forms of assistance. The grants provided up to $150,000 for eligible homeowners. Both programs also attached various conditions to the acceptance of grants, such as requiring homeowners to rebuild their homes above the latest available FEMA advisory base flood elevation levels and establishing covenants to the land requiring that homeowners maintain hazard and flood insurance. 
It will be a challenge for federal, state, and local governments to sustain their current role in natural catastrophe insurance going forward. The Comptroller General of the United States has repeatedly warned that the current fiscal path of the federal government is “imprudent and unsustainable.” In addition, we reported that, for state and local government sectors, large and growing fiscal challenges will begin to emerge within the next few years in the absence of policy changes. The fiscal challenges facing all levels of government are linked and should be considered in a strategic and integrated manner. We identified seven public policy options for changing the role of the federal government in natural catastrophe insurance (see fig. 4). These policy options have many variants and are often contained in other proposals, including some bills that are before Congress. Some of these proposals are also being debated in venues such as the NAIC committees. We examined the advantages and disadvantages of these policy options and evaluated them against four broad public policy goals. These goals are charging premium rates that fully reflect actual risks, encouraging private markets to provide natural catastrophe insurance, encouraging broad participation in natural catastrophe insurance, and limiting costs to taxpayers before and after a disaster. Our analysis showed that each of the seven options met at least one of the policy goals but failed to meet others. The first option—a mandatory all-perils homeowners insurance policy—would help create broad participation and could provide a private sector solution. But this option could also require subsidies for low-income residents and thus potentially create substantial costs for the federal government that would have to be balanced against money saved from reduced disaster relief. 
A second option would involve providing federal reinsurance for state catastrophe funds—a change that could lead to greater private insurance market participation but that could also displace the private reinsurance market. A third option, establishing a federal lending facility for state catastrophe funds, could help such funds with financing needs after a catastrophe. But this option exposes the federal government to the risk that a state fund might not repay a loan and thus might not limit taxpayer exposure. The remaining four options include tax-based incentives to encourage greater participation by insurers and homeowners in managing natural catastrophe risks. These incentives offer some advantages, but could also represent ongoing costs to the federal government and taxpayers. A mandatory all-perils policy would require private insurers to provide coverage against all perils in a single standard homeowners policy that would be priced according to the risk of natural hazards each homeowner faced. For example, the policy would cover not only theft and fire but also wind, floods, and earthquakes. It would also be mandatory for all homeowners. This type of option offers several potential advantages. First, a mandatory all-perils policy, by definition, would encourage broad participation in natural catastrophe insurance programs. Moreover, including all American homeowners in natural catastrophe coverage could help reduce the number of Americans needing postdisaster payments and possibly limit the federal government’s exposure. An all-perils policy would also eliminate existing gaps in coverage and remove the uncertainty many homeowners face in determining whether certain perils are covered and by whom—an issue that was spotlighted after Hurricane Katrina, when disputes emerged between private insurers and homeowners over the extent of the insurers’ obligations to cover certain damages. 
Finally, because it would be mandatory and broad-based, an all-perils policy could lessen the problem of adverse selection that is often identified as the reason that some types of catastrophes, such as flooding, are considered to be uninsurable. This type of policy would spread risks geographically and potentially would make the policy more affordable than other options. However, this option is not without its disadvantages. First, it is unclear how private markets would be encouraged to underwrite all risks. Second, a mandatory all-perils policy might not be a cost-effective solution for the federal government, because it could create affordability concerns for low-income residents in certain areas and might require targeted government subsidies. If they did not sufficiently reduce postevent disaster relief, these subsidies could increase costs to taxpayers. Third, an all-perils policy would undoubtedly be more expensive than current homeowner policy premiums in some regions of the country. As a result, at least during the transition, it could lead to complaints about higher premium costs from residents of catastrophe-prone areas. Moreover, homeowners in relatively low-risk areas could wind up subsidizing the costs of insurance for those living in high-risk areas. Fourth, enforcement would be extremely challenging, as we have seen with mandatory flood insurance in communities in designated floodplains. Finally, this policy option faces opposition from the private insurance industry, in part because of concerns about state insurance regulators impeding private insurers’ ability to charge premiums that reflect the actual risk of loss in catastrophe-prone areas. Private insurers have also traditionally opposed all-perils policies because of the difficulty of pricing flood and earthquake coverage. One insurance company has said that an all-perils policy would cause rates to skyrocket and could cause many insurers to abandon the homeowners insurance market. 
NAIC officials told us that the homeowners market was a $55 billion market—not counting flood and earthquake exposure—and that most insurers were unlikely to walk away from a market this large. A federal reinsurance mechanism would provide an additional layer of insurance coverage for very large catastrophes, or megacatastrophes, and could be implemented in two ways. The first version of this option would create a federal mechanism that would serve as a backstop for state catastrophe funds to increase the amount of insurance and reinsurance available to states, expand the availability of catastrophe coverage, and possibly improve its affordability. States would create catastrophe funds and enter into agreements with the federal government—possibly, but not necessarily, the U.S. Treasury—and pay premiums for the reinsurance that would be used to support the reinsurance fund. Each state’s payments would be based on risk and determined using actuarial and catastrophe modeling, and the states would be responsible for collecting premiums from insured commercial and residential property owners. The federal fund would provide payments to state funds for storms of a certain magnitude up to some predetermined level of payments. If the federal reinsurance fund was not adequately financed at the time of a catastrophe, it would issue government-backed bonds. A related but different version of this federal reinsurance option would authorize the Secretary of the Treasury to create an auction process for the sale of reinsurance contracts to private and state insurers and reinsurers. The secretary would make available reinsurance contracts covering both earthquakes and wind events. The auction process would be open to state and private insurers and reinsurers and would take place in at least six separate geographic regions, so that risks would be based on local factors and insurers in less risk-prone areas would not be subsidizing those in riskier areas. 
State programs would have to reach a minimum loss level before they would be eligible for federal funds. This version also establishes a disaster reinsurance fund within the U.S. Treasury to be credited with, among other sources of funds, amounts received from the sale of reinsurance contracts. The Treasury would be authorized to issue debt if the fund’s resources were insufficient to pay claims—and reinsurance premiums paid to Treasury would be used to make interest payments to debt holders—but the fund would not receive federal appropriations. A national commission on catastrophe risks and insurance loss costs would advise the secretary. Both versions of this option offer advantages and disadvantages. First, federal reinsurance is advantageous because it has the potential to help insurance companies by limiting timing risk—the possibility that events will occur before insurers have collected enough premiums to cover them—potentially making insurers more willing to underwrite natural catastrophe insurance policies. Second, primary insurance companies may be less interested in canceling catastrophe insurance policies in coastal regions after a disaster if stable sources of reinsurance are available from state catastrophe funds. This option could also encourage the provision of catastrophe insurance via private insurance markets by limiting private insurers’ liability for very large events and thus increasing their willingness to offer insurance for less catastrophic events. And a greater supply of natural catastrophe insurance could reduce the cost of insurance as competition for business intensified. Third, this option may also be advantageous because, if it were appropriately structured—that is, if program losses were funded by upfront premium payments—federal reinsurance should not require the use of taxpayer dollars. 
Finally, to the extent that this option increased the availability and affordability of catastrophe insurance, it would be preferable to postdisaster assistance and could limit the need for some types of postevent government payouts. While federal reinsurance has some appealing features, it is not without disadvantages. For example, neither version of the reinsurance option is intended to displace or compete with the private reinsurance market, because reinsurance contracts would not be sponsored in markets where private reinsurance markets offered coverage. However, federal reinsurance could compete with and possibly displace private reinsurance if the government offered coverage at levels that were well within private market capacity or set premium rates below what the private sector would charge for comparable risk. While the stated intent of this option is to charge a premium that fully reflects the risk assumed by the federal reinsurance fund, political and consumer pressures could be put on the federal fund to underprice premiums in terms of risk to keep premiums low for policyholders in high-risk areas. Charging a reinsurance premium that was not fully risk-based would expose the federal fund and the government to potentially significant unfunded contingent insurance risk. As a result, federal reinsurance could disproportionately benefit those living in high-risk areas. Should the fund experience losses that exceeded the premiums collected, the difference would have to be paid by the taxpayers, creating a cross-subsidy that favored those in catastrophe-prone areas. Also, the existence of federal reinsurance might affect market discipline, leading private insurers and state catastrophe insurance funds to loosen underwriting guidelines—that is, to insure properties that would not have been insurable without the availability of (low-cost) federal reinsurance. Such a change could be costly for the reinsuring federal facility. 
As a result, a federal reinsurance role could inadvertently encourage further development and population growth in areas with high natural catastrophe risk. Finally, government natural catastrophe insurance programs are not purely insurance programs and may have social goals. But if the government plans to intervene in the catastrophe insurance market, it may want to use mechanisms that mimic as closely as possible what private markets could be expected to do. When federal insurance programs mimic private insurance and base decisions on risk (consistent with social goals), government losses are more likely to be contained. A federal lending facility would allow the federal government to use its borrowing power to extend temporary loans to state catastrophe funds. State catastrophe funds may not have the creditworthiness to borrow at acceptable interest rates. One proponent of this plan has suggested that the private insurance market could handle all or nearly all catastrophe exposure, but possibly not at the moment the catastrophe happened. Creating a lending facility in the federal government would allow the government to provide the capital to meet the temporary shortage and spread the repayment over time without assuming the underwriting risk held by the insurers. Under this option, state catastrophe funds would be required to secure private reinsurance and would have the ability to sell catastrophe bonds to repay the money loaned to them by the federal government. The loans would be made at market prices to guarantee that capital was efficiently allocated and—given that an insurance company that has just paid out a large claim does not have the same quantity or quality of assets as a solvent insurer or bank—would be secured both by the future income stream of premium payments from state residents through insurance companies to the state catastrophe funds and by bond proceeds. 
The loans would be of short duration, perhaps 2 to 3 years at maximum, and would provide state catastrophe funds with encouragement and time to access the private capital market. State catastrophe funds would be expected to demonstrate to the federal lending facility that the states were doing all that they could to attract private capital. A proposed trigger for the federal lending facility would be a megacatastrophe. The creation of a federal lending facility would have several advantages. First, a federal lending facility would shift timing risk, which is significant in the catastrophe insurance business, from the insurance industry to the federal government. The federal government, because of its borrowing power, is uniquely able to deal with timing risk. Second, a federal lending facility could mean that taxpayers would assume little or possibly no insurance risk, because the insurers would be responsible for paying all of the losses from catastrophic events, although not necessarily in the year of the catastrophe. Finally, through the requirement that the states do all that they can to attract private capital, the option may lead to insurance regulatory reforms in areas such as rate regulation that have inhibited the influx of private capital. A federal lending facility would also have a number of disadvantages. First, it is not clear how this federal lending facility would encourage premiums that reflected risks, would foster broad citizen participation, or would be a cost-effective solution. Second, it would expose the facility and ultimately taxpayers to credit risk if a state did not repay its debt. Third, a federal lending facility could also require the creation of a new federal entity or structure to administer the system. Fourth, like the federal reinsurance option, such a lending facility could have a competitive advantage over the private reinsurance sector, particularly if the terms were too easy or if borrowed funds did not have to be repaid. 
States in high-risk regions would have a financial incentive to seek nonmarket terms and conditions in loans. Finally, this option would decrease the incentives for insurers and reinsurers to accurately assess, underwrite, and price risk. A fourth policy option would be to permit private insurers to establish tax-deferred reserves for future catastrophes. This option could encourage some insurers to maintain or expand their catastrophe insurance coverage in regions with significant or projected catastrophe exposures. This option is also intended to provide insurers with an incentive to write catastrophe coverage in hazard-prone areas while improving their own financial strength. It would require amending the U.S. Tax Code, because current tax laws and accounting principles discourage U.S. property and casualty insurers from accumulating long-term assets specifically for payment of future losses by taxing these assets. Because the size and timing of disasters that have not yet taken place are uncertain, assets set aside for catastrophe losses, together with any interest accrued, are taxed as corporate income in the year in which they are set aside. Although there is a federal income tax deduction for losses that have already occurred, reserves for uncertain future losses are not tax deductible. Tax-deferred reserving has its advantages: state regulators might be more willing to approve risk-based rates, because premiums could now be set aside rather than flow into profits. Consistent with the intended purpose of this option, tax-deferred reserving could increase the willingness of insurance companies to increase capacity without risking insolvency, because the companies would be less dependent on the uncertain prices available in reinsurance markets. In this case, the option would encourage a solution by private insurance markets and more broad-based participation in catastrophe insurance programs. 
Finally, this approach could reduce the need for state catastrophe insurance mechanisms by increasing the willingness of private insurers to remain or enter certain catastrophe-prone markets, such as Florida and other Gulf Coast states. However, tax-deferred reserving also raises a number of broader issues that must be considered. Tax-deferred reserving would reduce current federal tax revenue. However, as with other options, the net cost would have to be determined by weighing the tax cost against potential savings from federal postdisaster assistance programs. Deferring taxes on reserves for insurance companies could also be disadvantageous if this system created tax benefits that favored one type of activity over another. For example, to the extent that tax-deferred reserving became prevalent, it could displace the reinsurance market or other forms of hedging. Finally, such reserves could also be subject to manipulation or abuse if insurers used them to obscure current income by smoothing income flows across years. Like tax-deferred reserves, the fifth policy option would also require amending the U.S. Tax Code to provide a tax incentive, but this one would be aimed at homeowners, who would be allowed to accumulate before-tax funds to pay expenses related to disasters. The accounts would operate much like those currently in use for health care expenses, allowing homeowners to withdraw both savings and interest for qualified disaster expenses such as deductibles, uninsured losses, flood damage, and structural upgrades to mitigate damage from future storms. A bank or another designated organization would be the custodian for these accounts. Under one current option, homeowner contributions would be limited to (1) $2,000 for individuals with homeowners insurance and deductibles of not more than $1,000, and (2) the lesser of $15,000 or twice the insurance deductible for homeowner insurance deductibles of more than $1,000. 
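The contribution limits under this option reduce to a simple rule. A minimal sketch in Python, assuming the dollar thresholds quoted above (the function name is ours, for illustration only):

```python
def csa_contribution_limit(deductible):
    """Annual contribution limit for a catastrophe savings account,
    per the option described above (an illustration, not statutory text).

    deductible: the homeowner's insurance deductible, in dollars.
    """
    if deductible <= 1000:
        # Insured homeowners with deductibles of not more than $1,000
        return 2000
    # Deductibles above $1,000: the lesser of $15,000 or twice the deductible
    return min(15000, 2 * deductible)

print(csa_contribution_limit(500))    # 2000
print(csa_contribution_limit(5000))   # 10000
print(csa_contribution_limit(9000))   # 15000
```

The $15,000 cap binds once the deductible exceeds $7,500; below that, the limit scales with the deductible.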
In June 2007, the South Carolina Legislature passed legislation authorizing the creation of catastrophe savings accounts for use by state residents in paying natural catastrophe insurance deductibles. This option could induce more homeowners to participate in natural catastrophe insurance programs. Moreover, allowing homeowners to use tax-deferred savings to cover mitigation expenses might encourage more mitigation activities to reduce natural catastrophe risk. However, implementation challenges pose disadvantages that would have to be addressed. For example, it is unclear to what extent such a mechanism would encourage those who are not insured to purchase insurance. Rather than increasing participation, it could result in a tax benefit for those who are already insured. Like the tax-free reserves option, these savings accounts would also cost the federal government in reduced tax revenues. But once again, the actual net cost to the government would depend on the potential offsetting savings from postcatastrophe funding mechanisms. The sixth policy option would create certain tax advantages for catastrophe bonds. Historically, catastrophe bonds have been created in offshore jurisdictions where they are not subject to any income or any other tax (i.e., in tax havens). This option would facilitate the creation of onshore transactions, potentially reducing transactions costs and allowing for increased regulatory oversight. Tax treatment of catastrophe bonds would be similar to the treatment received by issuers of asset-backed or mortgage-backed securities that, for example, are generally not subject to tax on the income from underlying assets, which is passed on to investors. More favorable tax treatment of catastrophe bonds would increase the ability of insurance markets to access capital markets by making these products more attractive to investors. 
Making catastrophe bonds more attractive to issuers and investors could, in turn, make insurance and reinsurance companies more willing to underwrite catastrophe risk and increase the availability of coverage, because these companies could pass on more catastrophe risk to investors. One disadvantage of this option is that it is not clear how its implementation would encourage premiums that fully reflect risk or how it would encourage broad-based participation in catastrophe insurance markets. It is also not clear how this option would be a cost-effective solution for the federal government when both predisaster and postdisaster costs are counted. Some reinsurers have pointed out that favorable tax treatment of catastrophe bonds could be disadvantageous because it could create a new class of reinsurer that would operate under regulatory and tax advantages not afforded U.S. reinsurance companies. Finally, recent catastrophe bond issuances by the two largest U.S. primary insurance companies may indicate that catastrophe bonds do not need a different tax treatment to make them economically viable. However, if market transparency and the development of uniform terms and conditions do not take place, only the largest insurers may be able to take advantage of catastrophe bonds. The final policy option we examined was a state plan, funded by state property taxes, that would require all-perils natural catastrophe insurance coverage on residential property. All primary residential properties in a state would be required to have catastrophe insurance coverage. Participating insurers would assume the primary risk on the property and would have reinsurance from a qualifying reinsurance company. The state would pay an annual natural catastrophe insurance premium financed by an annual property tax assessment on all residential and commercial properties in the state, and homeowners could deduct the cost from their federal taxes. 
The insurance coverage would be provided by private insurance companies selected by a government administrator who would qualify them as providers of catastrophe insurance. To ensure that premiums were reasonable, the primary and reinsurance coverage would require large deductibles that would be paid in layers by the homeowner, the state, and the federal government. Homeowners would be responsible for the first 10 percent of the value of the home, with a state catastrophe fund paying the next layer of the deductible. The state would provide a fixed-dollar deductible—for example, $100 million—for all homeowners, with the federal government as the backstop provider, paying a deductible that was a multiple of the amount that the state put up. Proponents of this plan point out that it is market-based, designed to involve the private sector, and, if risk-based premiums are required, not a “government relief program.” Plan supporters also point out that the option protects the tax base of a state’s economy as well as the creditworthiness of a state’s bond rating. One possible advantage of this policy option for the consumer is that the premiums paid from property taxes are intended to be tax deductible. Moreover, paying the premium from property taxes could increase participation at the state level and create a broad-based program that would limit adverse selection and moral hazard. Finally, maintaining higher deductibles could result in lower insurance premiums. However, this plan also has its disadvantages. Paying the premium from homeowner property taxes collected by the state would reduce federal tax revenues, and, if a disaster occurred, the federal government would have to pay some portion of the deductible. Like the other tax-related options, this option could reduce federal tax revenue if the new deduction were not offset by savings from the elimination of preevent premium subsidies or postevent disaster relief. 
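The layered deductible structure of this state plan can be sketched as follows; the $200 million homeowner aggregate and the 3x federal multiple are illustrative assumptions, since the proposal quotes only the $100 million state layer as an example:

```python
def allocate_loss(total_loss, homeowner_layer, state_layer, federal_multiple):
    """Allocate an aggregate catastrophe loss across the deductible layers
    of the plan: homeowners pay first (their 10-percent shares), then the
    state's fixed-dollar layer, then a federal layer sized as a multiple of
    the state layer; insurers pay whatever remains above the deductibles."""
    layers = [
        ("homeowners", homeowner_layer),
        ("state", state_layer),
        ("federal", federal_multiple * state_layer),
    ]
    remaining = total_loss
    paid = {}
    for name, capacity in layers:
        paid[name] = min(remaining, capacity)
        remaining -= paid[name]
    paid["insurers"] = remaining  # losses above all deductible layers
    return paid

# Hypothetical $1 billion event: $200 million in homeowner 10-percent shares,
# a $100 million state layer, and a federal layer at 3x the state layer
print(allocate_loss(1_000_000_000, 200_000_000, 100_000_000, 3))
```

In this hypothetical, insurers would pay $400 million, with $600 million absorbed by the three deductible layers.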
As a result, it is not clear whether this option would be the most cost-effective for the federal government. Also, using property taxes to pay insurance premiums might diminish the effectiveness of using the price of insurance as a signal of the risk of living in a particular location. One critic has argued that allowing homeowners to deduct the premium portion of the property taxes combined with the federal deductible could result in a double federal subsidy. Finally, this policy option would raise homeowners’ property taxes, potentially creating homeowner resistance to the assessment. We provided a draft of this report to NAIC for comment and provided excerpts from the draft to Alabama Beach Pool, the CEA, FCIC, FHCF, Florida Citizens, the GUA, HUD, Louisiana Citizens, Mississippi Windpool, the North Carolina Beach Plan, SBA, the South Carolina Windpool, and the Texas Windpool. NAIC provided written comments that are reprinted in appendix III. In these comments, NAIC officials said that our draft report was thorough, and that they were pleased that we outlined the advantages and disadvantages of several proposals rather than favoring a single outcome. NAIC officials suggested that we also include in this report two recently proposed options: one that includes an allocation system for determining what portion of hurricane damages should be attributed to wind and what portion to flooding, and another that would create a federal entity to oversee property insurance rates in the coastal zone. While there are interesting features to both options, they were too recent to be included in our review and analysis. However, we will explore both options during the course of our ongoing work involving NFIP. NAIC officials also commented on the language in the draft report discussing allegations made by some critics of state rate regulation who suggest that state regulators may be suppressing rates for some catastrophe insurers. 
As these officials pointed out, the allegations in this report are attributed to others and are not presented as our position. We recognize the challenges involved in ensuring that consumers are charged appropriate premiums that reflect their risk of exposure to natural catastrophes. Given that premium rates requested are based on a variety of factors that involve a certain amount of judgment—including anticipated losses on claims and related expenses; the need to build a surplus; and other factors, including profit—the rate-setting process is open to interpretation and some amount of negotiation. That is, reasonable but different assumptions about the probability of future losses can result in substantial disagreements about rates. However, if state regulators and the insurance markets consistently have divergent opinions about the cost of the risk exposures, the implications can be far-reaching. As we discuss in this report, for state natural catastrophe insurance programs, if premium rates determined by state insurance regulators consistently result in financial resources that are inadequate to pay policyholder claims after a disaster, postfunding mechanisms must be used to pay shortfalls. Postfunding can result in costs to the private insurance market and may mean that taxpayers in low-risk areas are subsidizing the costs of those living in high-risk areas. Similarly, a pattern of regulator-approved rates for private insurance companies that are consistently below what the market believes to be the true risk rate may result in the withdrawal of healthy, diversified insurance companies from the market. However, if premium rates are set at a level reflecting the market’s perception of the true risk rate, more competitors are likely to enter. 
Alabama Beach Pool, the CEA, FCIC, FEMA, Florida Citizens, FHCF, the GUA, Louisiana Citizens, Mississippi Windpool, the North Carolina Beach Plan, SBA, the South Carolina Windpool, and the Texas Windpool provided technical comments that we incorporated in this report as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of the report until 30 days from the date of this letter. At that time, we will provide copies to interested congressional committees; the Chairman and Ranking Member of the Senate Committee on Banking, Housing, and Urban Affairs; and the Chairman of the House Committee on Financial Services. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-8678 or williamso@gao.gov if you or your staff have any questions concerning this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. Our objectives in this report were to examine (1) the rationale and funding of the federal and state programs that have supplemented, or substituted for, private natural catastrophe insurance; (2) the extent to which Americans living in areas of the United States that are at high risk for natural catastrophes are uninsured and underinsured, and the types and amounts of federal payments to such individuals since Hurricanes Katrina, Rita, and Wilma; and (3) public policy options for revising the federal role in natural catastrophe insurance markets. 
We reviewed or analyzed documents on federal and state natural catastrophe insurance programs, the numbers of uninsured and underinsured and federal payments to them, options to redefine the federal role in natural catastrophe insurance, and principles on which change options can be based and evaluated. We interviewed officials from public interest groups, insurance companies, reinsurance companies, insurance brokers, insurance and reinsurance associations, insurance agents and their associations, state catastrophe insurance plans, state insurance departments, federal catastrophe insurance agencies, the Department of Housing and Urban Development (HUD), the Small Business Administration (SBA), Fannie Mae, Freddie Mac, rating agencies, a risk modeling organization, academia, law firms, a hedge fund, a private research organization, consumer groups, and others. To determine the mechanisms governments use to supplement or substitute for private catastrophe insurance markets, we collected oral and documentary information from public and private officials in various states with high and low catastrophe risk and in Washington, D.C. We sourced financial data for government natural catastrophe insurance programs from financial statements, bond offering documents, and other similar financial documents. To determine the number of uninsured and underinsured Americans and payments made to such individuals after the 2005 hurricanes, we collected information from states, examined federal agency data, interviewed federal officials who prepared these data, sought information from the private sector, and interviewed state officials responsible for disbursing federal disaster funds. We focused our analysis on the federal disaster assistance to homeowners and renters who lacked adequate insurance in the five Gulf Coast states directly impacted by Hurricanes Katrina, Rita, and Wilma. These five states are Alabama, Florida, Louisiana, Mississippi, and Texas. 
Data on the numbers and amounts of money disbursed to the uninsured and underinsured were incomplete and had a number of limitations. For instance, because we often could not separate payments to homeowners versus payments to renters, we generally included the entire amount in our analysis. Also, we generally excluded administrative and other expenses that federal disaster assistance programs incur in distributing assistance. Our analysis was limited to the major federal disaster assistance programs that we identified as providing relief to homeowners and renters. These programs are the Federal Emergency Management Agency’s (FEMA) Individuals and Households Program (IHP), SBA’s Disaster Loan Program (DLP), and HUD’s Community Development Block Grant (CDBG) program. Our identification of relevant federal disaster assistance programs may be incomplete. Other federal agencies are involved in federal disaster assistance according to the mission assignment issued and approved by FEMA, as we reported separately in Disaster Relief: Governmentwide Framework Needed to Collect and Consolidate Information to Report on Billions in Federal Funding for the 2005 Gulf Coast Hurricanes, GAO-06-834 (Washington, D.C.: Sept. 6, 2006). To determine the amount of federal disaster assistance appropriated by Congress to FEMA and the amount paid to homeowners and renters who lacked adequate insurance through FEMA IHP, we obtained and analyzed data provided by FEMA officials describing the funds obligated for the subcategories of Housing Assistance, Other Needs Assistance, and Manufactured Housing in Alabama, Florida, Louisiana, Mississippi, and Texas following Hurricanes Katrina, Rita, and Wilma. In analyzing these data, we had to make certain judgments in deciding which specific subcategories of funds to include in our analysis. 
In particular, FEMA noted that the Other Needs Assistance data contained funds for services that would not be provided by personal property coverage in standard private homeowners insurance, such as medical and funeral expenses. However, we included Other Needs Assistance data in our analysis because these are expenses that may have been covered by other types of insurance, such as health and life, and, therefore, still provide a reasonable approximation of insurance coverage. Also, FEMA officials noted that the Manufactured Housing data included expenses that would not be included in additional living expenses coverage provided by standard private homeowners insurance. For example, other expenses included unit purchase, haul/install, utilities, site lease, maintenance, deactivation, and the transition out of service. We included these data in our analysis because they are designed to serve a similar purpose as the additional living expenses coverage provided by insurance companies. We assessed the reliability of the data provided by agency officials by interviewing agency officials knowledgeable about the data systems; obtaining oral responses from the agency; and reviewing agency reports regarding (1) the agency’s methods of data collection and quality control reviews, (2) practices and controls over data entry accuracy, and (3) any limitations of the data. It is possible that FEMA’s data analysis methodology is different from that employed by the other agencies we reviewed. Nevertheless, we determined that these data were sufficiently reliable for the purposes of our engagement. Finally, we interviewed officials from FEMA Disaster Assistance Directorate, which administers IHP, and reviewed the document entitled Oversight of Gulf Coast Hurricane Recovery, A Semiannual Report to Congress, October 1, 2006-March 31, 2007, by the President’s Council on Integrity and Efficiency and the Executive Council on Integrity and Efficiency. 
To determine the amount of federal disaster assistance appropriated by Congress to SBA and the amount paid to homeowners and renters who lacked adequate insurance through SBA DLP, we reviewed the previously mentioned document entitled Oversight of Gulf Coast Hurricane Recovery, and interviewed agency officials. We obtained and analyzed data provided by SBA that included, among other things, the amount of loan funds approved net of other federal disaster assistance and insurance proceeds to loan recipients. We multiplied this total by the subsidy rate of the loans—14.64 percent in 2006. That is, for every $100 that SBA lends, the cost to the federal government is $14.64. The subsidy rate roughly reflects the percentage of loan principal that is not repaid plus the cost of the difference between the market interest rate and the rate charged by SBA. We believe that subsidy cost is the most accurate representation of the amounts made available and paid to homeowners and renters because the loans under DLP must be repaid by recipients at a subsidized interest rate. We assessed the reliability of the data provided by agency officials by interviewing agency officials knowledgeable about the data systems and obtaining from the agency written responses regarding (1) the agency’s methods of data collection and quality control reviews, (2) practices and controls over data entry accuracy, and (3) any limitations of the data. It is possible that SBA’s data analysis methodology is inconsistent with that employed by the other agencies we reviewed. Nevertheless, we determined that these data were sufficiently reliable for the purposes of our engagement. To determine the amount of federal disaster assistance appropriated and paid to homeowners and renters who lacked adequate insurance through the HUD CDBG program, we interviewed agency officials and reviewed the previously mentioned document entitled Oversight of Gulf Coast Hurricane Recovery. 
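The SBA subsidy-cost calculation described earlier is a single multiplication; a minimal sketch, using the 2006 rate of 14.64 percent (the function name is ours, for illustration):

```python
SUBSIDY_RATE_2006 = 0.1464  # roughly: unrepaid principal plus the interest subsidy

def sba_subsidy_cost(approved_loan_amount, subsidy_rate=SUBSIDY_RATE_2006):
    """Estimated cost to the federal government of SBA disaster loans:
    approved loan volume (net of other federal assistance and insurance
    proceeds) times the subsidy rate."""
    return approved_loan_amount * subsidy_rate

print(round(sba_subsidy_cost(100), 2))  # 14.64: the federal cost per $100 lent
```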
We obtained publicly available data from HUD and each of the five Gulf Coast states that received emergency CDBG appropriations. We reviewed GAO testimony on Gulf Coast rebuilding that described the CDBG programs established in the Gulf Coast states. Congress approved emergency appropriations for HUD CDBG in two installments: $11.5 billion in December 2005 and $5.2 billion in June 2006, for a total appropriation of $16.7 billion. Our goal was to determine what portion of the total appropriation was intended for homeowners in the five Gulf States. We made certain judgments in deciding whether particular subcategories of funds applied to our calculations for each state. It is possible that we did not identify all of the relevant funds. For Florida, we used the Florida Department of Community Affairs, 2005 Disaster Recovery Initiative Action Plan (Apr. 14, 2006) and 2006 Disaster Program Action Plan (Dec. 19, 2006). HUD designated for Florida $82.9 million of the original $11.5 billion included in the December 2005 emergency appropriation. Florida’s action plan calls for the funds to be distributed through entitlement communities, nonentitlement communities, and federally recognized Indian tribes. Grant recipients are required to use at least 70 percent of the funds for the provision of affordable housing. Therefore, approximately $58 million of the Florida CDBG grants will be allocated to the provision of affordable housing. In addition, the June 2006 emergency appropriation included $5.2 billion to the CDBG program, and, on August 18, 2006, HUD made $100,066,518 available to Florida for repair, rehabilitation, and reconstruction of affordable rental housing, and for the unmet needs of evacuees who were forced from their homes and are now living in other states. The entire amount has been made available for mitigation programs through the My Safe Florida Home Program and other programs. 
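The appropriation and allocation figures above can be checked with quick arithmetic:

```python
# Emergency CDBG appropriations for the Gulf Coast states
dec_2005 = 11_500_000_000
jun_2006 = 5_200_000_000
print(dec_2005 + jun_2006)  # 16700000000, the $16.7 billion total

# Florida: at least 70 percent of its $82.9 million share of the
# December 2005 funds must go to the provision of affordable housing
florida_grant = 82_900_000
print(round(florida_grant * 0.70))  # 58030000, roughly $58 million
```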
For Alabama, we interviewed officials from the Alabama Department of Economic and Community Affairs (DECA). We obtained and analyzed information from DECA officials regarding the plan for distribution of HUD CDBG disaster recovery funds. We learned that DECA determined to make $14,460,588 available for unmet housing needs. In addition, on August 18, 2006, HUD made $21,225,574 available to Alabama for repair, rehabilitation, and reconstruction of affordable rental housing, and for the unmet needs of evacuees who were forced from their homes and are now living in other states. Of this amount, $16,964,296 has been made available for Disaster Relief, Recovery and Restoration of Housing and Infrastructure, and Affordable Rental Housing. For Mississippi, we used the Mississippi Development Authority, Homeowner Assistance Program Partial Action Plan (Mar. 31, 2006). Mississippi’s partial action plan made $3 billion available for the Homeowner Grant Assistance Program, which is for people who owned homes located outside of the federally designated flood zone, yet still suffered structural flood damage caused by Hurricane Katrina. In addition, on August 18, 2006, HUD made $423,036,059 available to Mississippi for repair, rehabilitation, and reconstruction of affordable rental housing, and for the unmet needs of evacuees who were forced from their homes and are now living in other states. For Louisiana, we obtained the Louisiana Recovery Authority, The Road Home Housing Programs, Action Plan for the Use of Disaster Recovery Funds (May 11, 2006) and the Louisiana Recovery Authority, Proposed Action Plan for the Use of Disaster Recovery Funds Allocated by P.L. 109-234 (May 16, 2007). Louisiana made $3,551,600,000 available to the Road Home Program, which is intended to help owner-occupants repair or rebuild their homes, buy or build replacement homes, or sell unwanted properties so that they can be redeveloped or converted to open space. 
In addition, on July 11, 2006, HUD allocated $4.2 billion to Louisiana for the Road Home Program. Louisiana designated $2,496,150,000 of this funding as assistance to owner-occupants to compensate them for their hurricane loss. For Texas, we used the State of Texas Action Plan for CDBG Disaster Recovery Grantees under the Department of Defense Appropriations Act, 2006 (Apr. 13, 2006, and May 9, 2006) and the Proposed Partial Texas Action Plan for Disaster Recovery to Use Community Development Block Grant (CDBG) Funding to Assist with the Recovery of Distressed Areas Related to the Consequences of Hurricanes Katrina, Rita, and Wilma in the Gulf of Mexico in 2005 (Dec. 15, 2006). Texas’ action plan made $38,938,268 available for its “Minimum Housing Need Allocation.” In addition, on August 18, 2006, HUD made $428,671,849 available to Texas for repair, rehabilitation, and reconstruction of affordable rental housing, and for the unmet needs of evacuees who were forced from their homes and are now living in other states. Of this amount, $305,238,257 has been made available for a Homeowner Assistance Program, Sabine Pass Restoration Program, and Rental Housing Stock Restoration Program. We identified various options for altering the role of the federal government in catastrophe insurance by looking at bills before the current and previous Congresses as well as other change options that were not in current legislative proposals—for example, a proposal before a committee of the National Association of Insurance Commissioners (NAIC). We sought out advantages of these options from their supporters and disadvantages from critics. We also developed a four-goal framework, on the basis of challenges faced by current government natural catastrophe insurance programs, to analyze current options for an increased federal role in natural catastrophe insurance. 
We developed these goals by drawing insights from the following: past GAO work, legislative histories of laws that changed the roles of state governments and the federal government after disasters, bills before the current and previous Congresses, interviews with public and private sector officials, and articles written by experts in insurance economics. Although we identified numerous possible goals that could assist our analysis, we believe the four goals we chose accurately capture the essential concerns of the federal government. The scope of our work covered hurricane and earthquake perils—we did not investigate tornado, hail, or other perils. Also, we focused on the property and casualty insurance line—especially homeowners insurance. We did fieldwork in Alabama; California; Connecticut; Florida; Illinois; Indiana; Louisiana; Massachusetts; Mississippi; Missouri; New Jersey; New York; Ohio; Texas; and Washington, D.C. Our work was conducted between March 2006 and October 2007 according to generally accepted government auditing standards. State government natural catastrophe insurance programs, in most cases, have been created after disasters because homeowners insurance coverage for catastrophic events is often not available from private insurers at prices deemed affordable by state legislators and insurance regulators. These programs supplement or substitute for private natural catastrophe insurance. For example, California created an earthquake fund in 1994 when private insurers stopped writing homeowner earthquake coverage following the Northridge Earthquake. Likewise, Florida created Citizens Property Insurance Corporation (Florida Citizens)—the largest home insurer in Florida—to provide state-backed insurance coverage, including for wind damage, for homeowners who cannot get coverage in the private sector. 
State natural catastrophe insurance programs differ in their details, including the percentage of homeowners covered, geographic locations covered, coverage limits, deductible levels, how the premiums are calculated, and losses. The natural catastrophe insurance programs in California, Florida, and other states are funded through a combination of premium payments and postevent assessments and bonds. Particularly in catastrophe-prone locations, government insurance programs have tended not to charge premiums that reflect the actual risks that homeowners face, resulting in financial deficits. After the 2005 hurricanes, for example, some of these state programs faced large accumulated deficits and required substantial public funding to continue operations. See figure 5 for a comparison of the features of selected state natural catastrophe insurance programs, especially their losses, after the 2005 hurricanes. The text that follows figure 5 contains the most recent information on the state programs. The California Earthquake Authority (CEA) is an instrumentality of the state that sells earthquake insurance policies for residential property throughout California. Most standard homeowners insurance policies do not cover earthquake damage. However, California law requires insurers that sell residential property insurance in California to offer earthquake coverage to their policyholders every 2 years. In offering earthquake coverage, insurance companies can manage the risk themselves or they can become a CEA-participating insurance company and offer the CEA’s residential earthquake policies. The CEA is managed by a Governing Board composed of the Governor, Treasurer, and Insurance Commissioner. An 11-member Advisory Panel advises the board. 
The base CEA policy, known as a "minipolicy," is a reduced-coverage, catastrophic earthquake insurance policy intended to protect a dwelling, while excluding coverage for costly nonessential items, such as swimming pools, patios, and detached structures. Dwelling coverage will help pay to repair or (up to the policy limit) replace an insured home when structural damage exceeds the policy deductible. Coverage for fire is not included; fire is covered in the companion homeowners insurance policy. The dwelling coverage limit is determined by the insured value of the home, as stated on the companion homeowners insurance policy. Personal property coverage provides up to $5,000 to replace items, including furniture, televisions, audio and video equipment, household appliances, bedding, and clothing. Policyholders can increase their personal property coverage to as much as $100,000. The CEA policy provides $1,500 of Additional Living Expense coverage to pay for necessary increases in living expenses incurred to maintain a normal standard of living. Policyholders can increase that coverage to as much as $15,000. In addition to providing funds for repairing or replacing a home, the CEA base policy includes an additional $10,000 in Building Code Upgrade coverage. For policies that renew or become effective on or after July 1, 2006, policyholders can choose to increase Building Code Upgrade coverage by an additional $10,000, for a total Building Code Upgrade coverage limit of $20,000. The CEA policy offers two deductible options: the standard base-limit deductible of 15 percent of the total coverage or a 10 percent deductible option. Damage to personal property is not covered unless the dwelling deductible is met. There is no deductible for Additional Living Expense/Loss of Use coverage.
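The interaction between the deductible options and the dwelling coverage limit can be sketched in a few lines of arithmetic. The home value and damage figures below are hypothetical, and the function is an illustrative simplification rather than the CEA's actual claims formula:

```python
def cea_dwelling_payment(dwelling_limit, damage, deductible_rate=0.15):
    """Illustrative payout under a CEA-style policy.

    The deductible is a percentage of the total dwelling coverage
    (15 percent standard, 10 percent optional), and the payout is
    capped at the policy limit. Hypothetical sketch only.
    """
    deductible = deductible_rate * dwelling_limit
    if damage <= deductible:
        return 0.0  # structural damage must exceed the deductible
    return min(damage - deductible, dwelling_limit)

# A home insured for $400,000 that suffers $90,000 of damage:
standard = cea_dwelling_payment(400_000, 90_000)        # $60,000 deductible
optional = cea_dwelling_payment(400_000, 90_000, 0.10)  # $40,000 deductible
# the 10 percent option pays $20,000 more on this claim
```

The example makes concrete why the 10 percent option commands a higher premium: for mid-size claims, the lower deductible shifts a meaningful share of the loss back to the insurer.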
CEA coverage is available to homeowners only from the insurance company that provides their residential property insurance and only if that company is a CEA-participating insurance company. Participating insurance companies process all CEA policy applications, policy renewals, invoices, and payments and handle all CEA claims.

The Northridge Earthquake jolted the San Fernando Valley in January 1994. It caused 57 deaths and an estimated $49.3 billion in economic losses. California insurers had collected only $3.4 billion in earthquake premiums in the 25-year period prior to the Northridge Earthquake and paid out more than $15 billion on Northridge claims alone. In January 1995, insurers representing about 95 percent of the homeowners insurance market in California began to limit their exposure to earthquakes by writing fewer or no new homeowners insurance policies. This triggered a crisis that by mid-1996 threatened the vitality of California's housing market and stalled the state's recovery from recession. In 1995, California lawmakers passed a bill that allowed insurers to offer a reduced-coverage earthquake insurance policy that became the "minipolicy." The CEA became operational in December 1996.

In determining premium rates, the CEA is required by law to use the best science available and is expressly permitted by law to use earthquake computer modeling to establish actuarially sound rates. The CEA examines rating factors, such as the rating territory (determined by ZIP code) and the age and type of construction of a home, in determining the premium rate. The CEA applies a 5 percent premium discount to dwellings that meet the following requirements: the dwelling was built before 1979, it is of a wood-frame construction type, the frame is tied to the foundation, it has cripple walls braced with plywood or its equivalent, and the water heater is secured to the building frame.
The CEA governing board establishes premium rates, subject to the prior approval of the Insurance Commissioner. The board voted to reduce the base policy rates on July 1, 2006, by a statewide average of 22.1 percent, resulting in a rate reduction for approximately 85 percent of CEA policyholders. The CEA says that a sharp drop in the cost of reinsurance and several years without a major earthquake, allowing CEA insurers to build up reserves, made the cut possible. While consumer advocates support the cut, some industry experts fear that the lower rates could make the CEA financially vulnerable in the event of a major earthquake.

No state funds and no public money are used to finance the CEA. The CEA is funded from policyholder premiums, contributions from and assessments on participating insurers, returns on invested funds, borrowed funds, and reinsurance. Assessments on participating insurers may not be directly passed through to policyholders. The CEA is authorized to issue bonds and may not cease to exist so long as its bonds are outstanding. As of January 2006, the CEA had a projected total claims-paying capacity of $7.8 billion, but if an earthquake causes insured damage greater than the CEA's claims-paying capacity, then affected policyholders will be paid a prorated portion of their covered losses. The surplus of the CEA increases each year in which there is no major event.

The CEA is one of the world's largest residential earthquake insurers, with about 755,000 policies and $501.4 million in premiums in 2006. The CEA states that over 8 million households in California have homeowners insurance and that about 12 percent of these households have earthquake insurance. The CEA states that there would not be enough capacity to support 100 percent participation in the program. There are insurance companies that offer only earthquake coverage and do not write homeowners insurance. Such companies individually select the properties that they will insure.
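The prorated-payout provision mentioned above, under which payouts are scaled back when insured damage exceeds the fund's claims-paying capacity, can be illustrated with hypothetical claim amounts (the $7.8 billion capacity figure comes from the text; the individual claims are assumptions):

```python
def prorated_payouts(covered_losses, capacity):
    """If total covered losses exceed the fund's claims-paying
    capacity, every policyholder receives the same fraction of
    their covered loss; otherwise claims are paid in full.
    Hypothetical sketch of the rule described in the text."""
    total = sum(covered_losses)
    factor = min(1.0, capacity / total)
    return [loss * factor for loss in covered_losses]

# Claims totaling $12 billion against $7.8 billion of capacity
# are paid at 65 cents on the dollar (7.8 / 12 = 0.65):
paid = prorated_payouts([6e9, 4e9, 2e9], 7.8e9)
```

In other words, the CEA's capacity acts as a hard ceiling on aggregate payouts rather than a guarantee of full indemnification, which is why the size of the claims-paying capacity matters to policyholders and rating agencies alike.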
Private insurers accounted for about 30 percent of the earthquake insurance market in California in 2005.

Citizens Property Insurance Corporation (Florida Citizens) is a not-for-profit and tax-exempt government entity that provides property insurance for personal, commercial residential, and commercial nonresidential properties when private insurance is unavailable or, in the case of residential insurance, unaffordable. Florida Citizens maintains three accounts: (1) the high-risk account (HRA) provides personal and commercial multiperil and wind-only coverage in certain high-risk coastal areas ("HRA areas"); (2) the personal lines account (PLA) offers personal residential multiperil policies outside of the HRA areas and ex-wind policies for residential properties inside of the HRA areas; and (3) the commercial lines account (CLA) offers commercial residential and commercial nonresidential multiperil policies outside of the HRA areas and ex-wind policies inside of the HRA areas. Florida law requires Citizens to maintain the separate accounts until the retirement of bonds issued by Citizens' predecessors prior to Citizens' formation. Since these predecessor bonds have been retired, the separate accounts may be combined, but Citizens has made no decision to do so. Policies are sold by independent insurance agents, who receive 6 to 8 percent commissions for residential policies and 7 to 12 percent commissions for commercial policies. Underwriting standards are somewhat limited, as the company is intended to be an insurer of last resort. Hurricane deductibles are offered at $500, 2 percent, 5 percent, and 10 percent for personal lines multiperil policies and at $500, 2 percent, 3 percent, 4 percent, 5 percent, and 10 percent for personal lines wind-only policies. All-other-peril deductibles are $500, $1,000, and $2,500. Coverage limits for homeowners policies must be at least equal to 100 percent of the estimated replacement value.
Florida Citizens offers premium discounts of up to 45 percent to homeowners who take qualifying mitigation measures. Florida Citizens imposes a surcharge on older homes that reaches a maximum of 20 percent for homes over 40 years old, while policyholders with newer homes can receive a premium credit of up to 10 percent.

Florida Citizens was established in 2002 when two separate insurance pools, known as the Florida Windstorm Underwriting Association (FWUA) and the Florida Residential Property and Casualty Joint Underwriting Association (JUA), were combined. The FWUA was created by statute in 1970 to provide high-risk windstorm and hail residual market coverage in selected areas of Florida. Florida Citizens' HRA assumed the debt and obligations of the FWUA. The JUA was created in December 1992, in the wake of the capacity crisis following Hurricane Andrew, to provide residual market residential-property multiperil insurance coverage, excluding wind if the property was within FWUA-eligible areas. Florida Citizens' PLA and CLA assumed the debt and obligations of the JUA. A primary driver for the merger was that the combined entity obtained federally tax-exempt status, thus saving federal income taxes that otherwise would have been paid by the FWUA and the JUA. In addition, as a tax-exempt entity, Florida Citizens is able to issue lower-coupon tax-free bonds postevent, as well as taxable preevent bonds. The merger also resulted in some overhead cost savings by having a single organization.

Until recently, Florida Citizens' premium rates were required to be noncompetitive with the voluntary market, using a formula that determined rates on a county-by-county basis, on the basis of the highest rate offered in the voluntary market among the state's top 20 insurers writing in that area.
Then, as part of legislation passed in May 2006, Florida Citizens' rates were required to be high enough to purchase reinsurance to cover a 1-in-100-year hurricane probable maximum loss in the PLA and a 1-in-70-year hurricane probable maximum loss in the CLA. Finally, in January 2007, legislation was passed that eliminated both of these requirements and required that Florida Citizens' rates be actuarially sound and not excessive, inadequate, or unfairly discriminatory. In addition, the legislation rescinded a rate increase that took effect on January 1, 2007; froze 2007 rates at the December 31, 2006, rate level; and required Florida Citizens to make a new rate filing to be effective January 1, 2009.

Storms in 2004 and 2005 resulted in more than $30 billion in insured damage in Florida. Florida Citizens sustained deficits of $515 million in 2004 and $1.8 billion in 2005. To fund its deficits, Florida Citizens is required by statute to assess admitted insurers in proportion to the amount of property and casualty insurance business (except for workers' compensation or accident and health) they write in Florida, and also to assess its own policyholders and surplus lines policyholders. The admitted insurers have the ability to recoup regular assessments from their policyholders upon renewal of a policy or issuance of a new policy. If the amount of the deficit exceeds the amount Florida Citizens can collect as a regular assessment, it is required to levy emergency assessments on its own policyholders, on surplus lines policyholders, and on the policyholders of admitted insurers. Admitted insurers collect emergency assessments from their policyholders and remit the collections to Citizens. To fund its 2004 deficit, Florida Citizens assessed insurance companies and surplus lines insureds over $515 million in regular assessments.
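The two-tier assessment mechanism described above, a regular assessment on admitted insurers in proportion to the premium each writes, followed by an emergency assessment for any remaining deficit, can be sketched as follows. The insurer names, premium volumes, and the cap on the regular assessment are all hypothetical:

```python
def fund_deficit(deficit, regular_cap):
    """Split a deficit into a regular assessment (up to a cap) and
    an emergency assessment for the remainder. Hypothetical sketch
    of the Florida Citizens mechanism described in the text."""
    regular = min(deficit, regular_cap)
    return regular, deficit - regular

def allocate_assessment(amount, premiums_by_insurer):
    """Allocate an assessment across admitted insurers in
    proportion to the premium each writes in the state."""
    total = sum(premiums_by_insurer.values())
    return {name: amount * p / total
            for name, p in premiums_by_insurer.items()}

# A hypothetical $1.8 billion deficit with a $0.5 billion limit on
# what can be collected as a regular assessment:
regular, emergency = fund_deficit(1.8e9, 0.5e9)
shares = allocate_assessment(regular, {"A": 6e9, "B": 3e9, "C": 1e9})
# insurer A writes 60 percent of premium, so it bears 60 percent
# of the regular assessment
```

The key design point is that the regular tier falls on insurers (who may recoup it from policyholders over time), while the emergency tier is collected directly from policyholders, spreading a large deficit across the whole market.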
To fund the 2005 deficit of approximately $1.8 billion, the Florida Legislature appropriated $715 million from the Florida general revenue fund, which reduced the size of the regular assessment from $878 million to $163 million. The regular assessment imposed to fund the 2005 deficit was reduced from an estimated 11.2 percent to 2.07 percent due to the infusion of general revenue funds. The Florida Legislature also directed Florida Citizens to amortize the collection of the emergency assessment for the remaining $888 million deficit over a 10-year period, resulting in a 1.4 percent emergency assessment levied beginning in June 2007.

Florida Citizens' resources also come from its reinsurance arrangement with the Florida Hurricane Catastrophe Fund (FHCF). In 2006, the FHCF provided coverage for Florida Citizens for 90 percent of $4 billion in losses above its deductible. As a tax-exempt entity, Florida Citizens is able to issue tax-exempt postevent bonds as well as taxable preevent bonds. The tax-exempt status is beneficial because in the event of a major disaster, Florida Citizens can finance loss payments by issuing bonds that carry low interest rates, thereby reducing financing costs over the years by hundreds of millions of dollars. In June 2006, Florida Citizens completed a $3.05 billion taxable preevent bond sale. In February 2007, Florida Citizens closed a $1 billion tax-exempt postevent bond issuance. In June 2007, Citizens completed a $1.95 billion preevent financing plan consisting of a $1 billion line of credit and $950 million in bonds.

Under its enabling statute, Florida Citizens is a government entity and not a private insurance company. As long as Florida Citizens has bonds outstanding, it may not file a voluntary petition under Chapter 9 of the Federal Bankruptcy Code. Except for 2 years, Florida Citizens' growth has been relatively moderate given the market dynamics.
Since its establishment in 2002, when it had 658,085 policies, the policy count has increased to 1.38 million policies-in-force as of September 30, 2007. Over this 5-year period, there has been nominal growth of 10 percent in Citizens' formerly wind-only HRA. Most of the growth of Florida Citizens has been in its PLA and has been caused by the following factors: (1) the private market pulling back following eight hurricane events in 2004 and 2005, (2) private insurers' curtailing coverage in sinkhole-prone parts of the state, and (3) the July 2006 assumption by Florida Citizens of approximately 300,000 policies following the insolvency of a private insurer. Aside from 2006, the only other significant policy increase occurred in 2003, a 25 percent increase. By comparison, total policies grew by only 7 percent in 2004, decreased by 7 percent in 2005, and grew by 7 percent for the first 9 months of 2007—all net of depopulation activities. Prior to 2007, part of Florida Citizens' lower growth rate was the result of incentives to private insurers to take policies out of Citizens, also known as depopulation incentives. Florida Citizens had the authority to pay insurers a take-out bonus of up to 12.5 percent of premiums removed from the HRA. The incentive program required that a minimum of 25,000 policies or a total insured value of at least $5 billion be removed. Insurers could earn higher bonuses, up to 10 percent, for assuming more than the minimum. They were required to retain the policies for either 3 or 5 years. Take-out incentives were eliminated in 2007. Nonetheless, through August 31, 2007, 131,000 policies had been removed without incentives.

The Louisiana Citizens Property Insurance Corporation (Louisiana Citizens) is a nonprofit, tax-exempt entity that acts as a market of last resort for residential and commercial property insurance in Louisiana. Louisiana Citizens is modeled on a similar Citizens Plan created in Florida.
Louisiana Citizens was specifically organized to operate the state's Coastal Plan and Fair Access to Insurance Requirements (FAIR) Plan. The Coastal Plan offers coverage in coastal areas of the state. The FAIR Plan offers coverage in the rest of the state. Louisiana Citizens offers fire, vandalism, windstorm, and hail coverage, as well as homeowners policies. Residential policy limits are up to $750,000 for property and up to $375,000 for contents. Policy deductibles are offered at various levels, with 2 and 5 percent offered for wind/hail coverage. Underwriting standards are somewhat limited since the company is intended to be an insurer of last resort. A 15-member governing board supervises company operations. The company has very limited infrastructure in place, as it maintains an administrative services contract with the Property Insurance Association of Louisiana, a nonprofit organization of licensed insurance carriers in the state. The company also entered into agreements for underwriting, policy management, and claims management services with three service providers.

Louisiana created the Louisiana Joint Reinsurance Plan (the predecessor to the FAIR Plan) in 1968 to provide a residual market for property insurance in inner cities within the state in response to damage caused by civil unrest. The state created the Louisiana Insurance Underwriting Plan (the predecessor to the Coastal Plan) in 1970 to provide a residual market for property insurance in coastal areas of the state in response to damage caused by Hurricane Camille. All insurers licensed to write property insurance in the state were required to participate in the predecessor insurance plans. Property losses caused by hailstorms, Hurricane Lili, and Tropical Storm Isidore resulted in assessments against the participating insurers that were not recoverable from policyholders. The insurers became reluctant to write insurance in the state.
The legislation creating Louisiana Citizens gave participating insurers the ability to recoup a regular assessment from policyholders and gave Louisiana Citizens the ability to impose emergency assessments directly on policyholders. Louisiana Citizens premium rates are required to be actuarially sound. Premium rates are not intended to be competitive with the private market and are set at least 10 percent above the average rate of the insurer that had the highest rate of the top 10 insurers by parish, provided those insurers make up at least 3 percent of the market.

In 2005, Louisiana Citizens suffered more than $1 billion in losses from Hurricanes Katrina and Rita, with the vast majority of the losses in the FAIR Plan. Citizens had not built up sufficient reserves to meet its obligations. It had only $80 million in cash reserves and tapped into its reinsurance for an additional $295 million. In October 2005, because there was still a deficit, Citizens assessed all property insurance companies in the state a one-time regular assessment of a maximum amount of 15 percent of premium: 10 percent for the FAIR Plan and 5 percent for the Coastal Plan. Insurers recoup the amount of their regular assessments from their policyholders in the subsequent year. The regular assessment following the 2005 hurricanes generated approximately $200 million for Louisiana Citizens. Because a deficit situation still existed after the regular assessment was levied, Citizens was authorized by law to issue bonds. In December 2006, Citizens received approval from the State Bond Commission to issue up to $1.4 billion of tax-exempt revenue bonds. The actual bond issue in April 2006 was for approximately $978 million. The bond offering will be financed by an emergency assessment on policyholders that is estimated to be about 5 to 6 percent of insurance premiums per year for as many years as needed to cover the plan deficit.
The Mississippi Windstorm Underwriting Association (Mississippi Windpool) is a nonprofit association of all insurance companies writing property insurance in Mississippi on a direct basis. It was established by the legislature to provide an adequate market for windstorm and hail insurance in the six coastal counties of Mississippi. The maximum residential coverage available is $1 million for the dwelling and $250,000 for contents. The policy contains a "named storm" deductible of 2 percent of the insured value of the home with a $500 minimum or, if coverage exceeds $500,000, a $1,000 minimum. For any structure built after June 1, 1987, in an area that has not adopted the standard building code, the policyholder must produce proof that the structure is built in substantial accordance with the code; otherwise, it is not insurable by the Mississippi Windpool. Policies can be sold by any approved insurer, and agents receive a 15 percent commission on new business and a 10 percent commission on renewals.

The Mississippi Insurance Underwriting Association (MIUA), the predecessor to the Mississippi Windpool, was created by the Mississippi Legislature in 1970 as the state struggled to recover from Hurricane Camille in 1969. The MIUA provided fire and windstorm coverage in the six coastal counties of the state, and its basic purpose was to enable individuals to secure a mortgage since there was no private market for fire and wind coverage. In 1987, the legislature found that the market for fire coverage had recovered, but there remained a need for residual windstorm coverage. Thus, the legislature created the Mississippi Windpool. Mississippi Windpool premium rates are required to be nondiscriminatory as to the same class of risk and are subject to the approval of the state insurance commissioner.
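The named-storm deductible rule above, 2 percent of the insured value with a $500 minimum (or a $1,000 minimum when coverage exceeds $500,000), reduces to a short calculation. The home values below are hypothetical examples:

```python
def named_storm_deductible(insured_value):
    """Mississippi Windpool-style named-storm deductible: 2 percent
    of the insured value of the home, subject to a $500 minimum
    ($1,000 minimum when coverage exceeds $500,000). Illustrative
    sketch of the rule stated in the text."""
    minimum = 1_000 if insured_value > 500_000 else 500
    return max(0.02 * insured_value, minimum)

# The dollar floor binds for small homes; the percentage binds
# for larger ones:
small = named_storm_deductible(20_000)     # 2% would be $400, so $500 applies
typical = named_storm_deductible(300_000)  # 2% gives $6,000
```

This structure explains the report's later observation that most Katrina claimants exceeded their deductibles: a percentage-of-value deductible grows with the home, but a catastrophic storm typically destroys far more than 2 percent of a dwelling's value.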
Prior to 2007, after agent commissions and the servicing carrier agreement, the Mississippi Windpool retained just over 75 percent of premium on new business and about 82 percent of premium on renewals. The Mississippi Windpool used most of the premium to buy reinsurance. In 2004, the Windpool sought to raise premium rates by 76 percent, but the insurance commissioner approved a 22 percent increase. In April 2006, the Mississippi Windpool sought approval of a 397.8 percent rate increase for residential coverage. Much of the increase was needed to buy adequate reinsurance, which cost the Mississippi Windpool about $0.65 to $0.70 per dollar of reinsurance. The state insurance commissioner granted a 90 percent increase. To defray the cost of reinsurance for the Mississippi Windpool, the state requested that HUD allow it to allocate up to $50 million in CDBG funds. HUD gave the state permission to use $30 million in 2006 and $20 million in 2007 for Mississippi Windpool reinsurance. In 2007, the State Legislature created the Mississippi Windstorm Underwriting Association Reinsurance Assistance Fund for the purpose of defraying the cost of Mississippi Windpool reinsurance. The fund will be financed using state tax dollars and may be used only by the state insurance department upon appropriation by the State Legislature. The Mississippi Windpool was also granted authority to issue bonds and other debt instruments.

Following Hurricane Katrina, the Mississippi Windpool suffered about $720 million in losses. It received loss claims from every policyholder, about 18,000 in total. About 700 to 800 policyholders did not meet their deductible, but the vast majority of claims resulted in payment. Prior to 2007, the Mississippi Windpool did not retain profits, if any, at year-end, but distributed them to participating insurers.
The Mississippi Windpool did not have adequate reinsurance and capital to cover its losses from Katrina, and it assessed participating insurers $525 million on the basis of their level of participation in the state property insurance market. About 400 companies were participating in the Mississippi Windpool at the time. Each could reduce its assessment liability by $1.40 for every $1.00 of voluntarily written premium for property insurance in the coastal area. This credited amount came out of the remaining 90 percent of the assessment; all members were responsible for the first 10 percent of the assessment. No company completely wrote itself out of the Katrina assessment, and some companies incurred greater direct losses than their assessment. However, some companies were assessed more than their entire direct written premium received in the state. Legislation passed in 2007 requires the state insurance commissioner to levy a surcharge on all property insurance premiums in the state to recover within 1 year the amount of the regular assessment for reimbursement to assessable insurers who paid the regular assessment.

The Texas Windstorm Insurance Association (Texas Windpool) offers windstorm and hail coverage for residential and commercial properties in 14 coastal counties and parts of Harris County (but not Houston). About 25 percent of the state's population lives along the coast. The membership of the Texas Windpool includes every property insurer licensed to write property insurance in the state. Each company's percentage of participation is based on its statewide sales. The Texas Windpool is governed by a nine-member board of directors. Coverage limits are adjusted annually to reflect inflation. Effective January 1, 2007, residential coverage for a dwelling and its contents is capped at $1.597 million. Policies include coverage for wind-driven rain, loss of use, and consequential losses.
Since 2004, the Texas Windpool has required that residential properties that it insures conform to the International Residential Code. However, under certain conditions, the Texas Windpool will insure homes built before 1988 that were not built according to any recognized building code. Policies are sold by individual licensed agents, who receive 16 percent of gross written premium as commission.

Hurricane Celia struck the Texas coast in August 1970 and caused an estimated $310 million in insured losses ($1.55 billion in 2005 dollars). Many insurers decided to stop writing business in the state's coastal communities. In response, the State Legislature created the Texas Catastrophe Property Insurance Association (the predecessor to the Texas Windpool) in 1971.

The Texas Windpool must file all rates with the state insurance commissioner for approval. The commissioner assesses whether the rates are reasonable, adequate, not unfairly discriminatory, and nonconfiscatory as to any class of insurer. Approved rates must be uniform throughout the 14 coastal counties. By law, Texas Windpool residential premium rates may not increase by more than 10 percent above the rate for noncommercial windstorm or hail insurance in effect at the time of filing, but the insurance commissioner may suspend this rule after a catastrophe or series of catastrophes to ensure rate adequacy in the catastrophe area. In May 2006, the Texas Windpool sought a 19 percent residential and 24 percent commercial rate increase. The insurance commissioner approved a 3.1 percent residential and 8 percent commercial rate increase. Again, in November 2006, the Texas Windpool sought a 20 percent residential and 22 percent commercial rate increase. The insurance commissioner approved a rate increase of 4.2 percent for residential policies and 3.7 percent for commercial policies. The Texas Windpool is authorized to assess participating insurers for excess losses.
In addition, the State Legislature created the Catastrophe Reserve Trust Fund, into which Texas Windpool profits are deposited rather than distributed to participating insurers. Under the plan, companies are assessed for the first $100 million of losses in excess of the Texas Windpool's premiums and other income. Losses in excess of this amount are funded by private reinsurance and the trust fund. An additional $200 million assessment can be levied if private reinsurance and the trust fund are inadequate to cover losses. In March 2006, the Texas Windpool had the ability to fund $1.3 billion in excess losses through a combination of assessments, reinsurance, and other means. Losses in excess of $1.3 billion are funded through further industry assessments. An insurer may credit the amount paid under this top-layer assessment against its premium tax. Hurricane Rita produced estimated losses of between $160 million and $165 million for the Texas Windpool. The payment of some 11,506 Hurricane Rita claims in 2005 resulted in a deficit and a $100 million assessment on insurance companies. The pool grew from almost 69,000 policyholders at the end of 2001 to about 207,000 at the end of September 2007. Texas Windpool liability, or exposure to loss, was about $56 billion as of the end of September 2007.

The Alabama Insurance Underwriting Association (Alabama Beach Pool) is a voluntary unincorporated nonprofit association established to provide essential residential and commercial insurance coverage in the beach-area counties of Baldwin and Mobile. Twelve percent of Alabamians live on the coast. Every licensed property insurer in the state is a member of the Alabama Beach Pool. The Beach Pool offers two types of policies: fire and extended coverage, and wind and hail. The Beach Pool offers coverage limits on residential buildings up to a maximum of $500,000, combined dwelling and contents. A hurricane deductible of 5 percent ($1,000 minimum) applies in the event of a named storm.
Policyholders with property located in certain areas may opt for a 2 percent hurricane deductible for an additional premium. The standard deductible for all other perils is $500. Buildings must conform to the Southern Standard Building Code for the Alabama Beach Pool to provide coverage. Any insurance agent licensed in Alabama can sell Beach Pool policies and receive an 8 percent commission. The Beach Pool is managed by a board of directors.

The Alabama Beach Pool was created in the aftermath of Hurricane Camille in 1969. Insurance companies operating in Alabama voluntarily agreed to join the association at the behest of the state insurance commissioner. The Beach Pool was not created by the State Legislature, but it is subject to regulation by the Alabama Department of Insurance. The Alabama Beach Pool and other insurers operating in the state must file premium rate change requests with the Alabama Department of Insurance. Alabama is a "prior approval" state, meaning that insurers must either allow a waiting period to expire or receive approval from the insurance department prior to using those rates in pricing insurance coverage. Insurance company officials told us that they are not always able to get their requested rates and that Alabama Beach Pool rates are too low. Prior to Hurricane Katrina, the state insurance department conducted a study comparing Beach Pool premium rates with rates charged by state-run coastal insurance programs in Florida and Mississippi. The study showed that Alabama Beach Pool rates were higher than those of the Florida and Mississippi programs. The State Legislature put pressure on the insurance department to lower Beach Pool rates. In the wake of Hurricane Katrina, however, coastal insurance rates in Florida and Mississippi are higher than in Alabama. The Alabama Beach Pool is authorized to make assessments upon all member insurers.
The calculation of the assessment is based on the member's proportion of net direct premiums of property insurance in the state. Members can receive annual credit against assessments for property insurance voluntarily written in the coastal area. In the event of a catastrophic loss requiring assessment, an initial partial-loss assessment may not exceed $2 million per member insurer. Members may not pass through assessments to policyholders. The Beach Pool currently has about 8,500 policies, insuring about $1.5 billion in property.

The Georgia Underwriting Association (GUA) was created by insurance companies licensed to write property insurance in Georgia to administer the state FAIR Plan. The plan insures homeowners throughout the state who have not been able to find certain types of insurance coverage in the voluntary market and also provides coverage against windstorm and hail damage in coastal counties and offshore islands. The coverage limit for any one building, including the dwelling and its contents, for windstorm and hail coverage is $2 million. The deductible for windstorm and hail coverage is at least 1 percent, subject to a minimum of $500. Any structure in the windstorm and hail area that is less than 10 years old and not built in compliance with the Southern Standard Building Code or its equivalent is not eligible for coverage. Replacement cost and loss of use coverage are available as supplemental coverage. Homeowners may apply for GUA coverage directly or through a state-licensed insurance agent. Agents receive a commission of 10 percent of the premium. Premium rates either must be approved by the state insurance commissioner (and must not be excessive, inadequate, or unfairly discriminatory) or may be based on advisory rates and premiums from the Insurance Services Office, Inc. The average premium for coverage is about $590.
The GUA maintains reinsurance of $100 million in excess of $50 million, and a second-event limit covers a second loss greater than $25 million. The GUA is authorized to assess member insurers for program losses in proportion to each member’s property insurance premiums written during the most recent calendar year. Member assessments may not be passed through to policyholders. Member insurers also share in program profits. In June 2006, the GUA had 26,882 policies in-force, of which 7,136 policies were on the coast. The exposure statewide as of June 2006 was $3.2 billion, of which $1.3 billion was coastal exposure. The South Carolina General Assembly authorized the creation of the South Carolina Wind and Hail Underwriting Association (South Carolina Windpool) in 1971. All admitted property and casualty companies licensed by the South Carolina Department of Insurance are members of and are required to participate in the South Carolina Windpool. The Windpool provides wind and hail coverage in the coastal areas of the state, which are specifically designated by statute. The state director of insurance recently expanded the territory eligible for Windpool coverage and divided the territory into two zones. Insurance companies writing policies in the defined territory may either offer wind coverage or exclude wind coverage (for a reduced premium). If an insurer excludes wind coverage, that coverage may be written by the South Carolina Windpool (for an additional premium). Coverage limits for one- to four-family dwellings, including mobile homes and condominiums, are $1.3 million. Items that are specifically excluded from coverage include property over water and wind-driven rain. South Carolina Windpool policies are actual cash value contracts. Primary residences are eligible to purchase replacement cost coverage. The standard building/contents deductible is 3 percent of the policy limit in zone 1, with a minimum deductible of 2 percent in zone 2.
Loss of use coverage is subject to a time deductible that is based on the underlying building/contents deductible. Policies may be sold by any insurance producer or broker licensed by the state. Premium rates must be approved by the state director of insurance. Premium rate increases or decreases of 7 percent may take effect on a file-and-use basis; rate increases or decreases of more than 7 percent are subject to prior approval. In 2005, the average premium per residential policy for the South Carolina Windpool was $1,385. In 2007, the State Legislature required the Windpool to ensure rate adequacy so as to permit it to be self-sustaining. The South Carolina Windpool is authorized to assess member insurers to cover program losses. Insurers may pass through assessments to policyholders through future rate filings. In June 2007, the State Legislature authorized the South Carolina Windpool to sell bonds and incur debt. The South Carolina Windpool had 36,196 residential policies in-force as of September 30, 2007, compared with 16,430 residential policies in-force in 2001. Windpool exposure was almost $13.735 billion as of September 30, 2007. The North Carolina Insurance Underwriting Association (North Carolina Beach Plan) was created in 1969 to provide insurance coverage, initially only on the barrier islands adjacent to the Atlantic Ocean, to people not able to buy it through the standard insurance market. In 1998, the North Carolina General Assembly expanded the Beach Plan to include the state’s 18 coastal counties for windstorm and hail only coverage. A 14-member board of directors acts as the North Carolina Beach Plan policymaking body. All property and casualty insurance companies that do business in North Carolina participate in funding the plan. The North Carolina Beach Plan provides basic coverage, which includes most major perils, and broad coverage, which includes a broader array of perils. Coverage limits are up to $1.5 million on private dwellings.
Coverage is provided on an actual cash value basis, or replacement cost if certain specific criteria are met. Policies meeting plan criteria are continuous 1-year policies if premiums are paid. Underwriting standards are somewhat basic since the North Carolina Beach Plan is intended to be an insurer of last resort. North Carolina Beach Plan premium rates must be filed with the state insurance commissioner by the North Carolina Rate Bureau for approval prior to their use. In 2007, homeowner rates were raised by an average of 25 percent for beach and coastal areas. Homeowners wind-only policies were increased 25 percent for beach areas and 38 percent for the coastal areas. In 2006, dwelling extended coverage rates were increased about 25 percent. For commercial property, maximum coverage limits are $3 million (combined building and contents) and $300,000 for business income. The Beach Plan adopts Insurance Services Office commercial loss cost filings approved by the state insurance commissioner. All member insurers share in North Carolina Beach Plan expenses, profits, and losses in proportion to their property insurance net direct premium written in the state. Member insurers can receive credit against expenses, profits, and losses for property insurance voluntarily written in the beach and coastal areas. The North Carolina Beach Plan has a “take out” program within its plan of operation; however, to date, this program has not been initiated. As of the fiscal year ending September 30, 2007, the Beach Plan had over 162,000 policies in-force with an exposure of $64.1 billion, compared with about 88,000 policies and an exposure of $28.9 billion at the end of fiscal year 2004. The goal of the FHCF has been to provide a cost-effective source of reinsurance to residential property insurers in the state. It is structured as a tax-exempt state trust fund under the direction and control of the State Board of Administration of Florida (State Board).
The State Board is a constitutional entity of Florida state government. It is governed by a Board of Trustees composed of the Governor, Chief Financial Officer, and Attorney General. The State Board appoints a nine-member advisory council to provide the State Board with information and advice on its administration of the FHCF. The management and day-to-day operations of the FHCF are the responsibility of the Senior Officer. The Senior Officer currently manages eight professional staff. Paragon Strategic Solutions, Inc. is the FHCF administrator as well as the actuarial consultant to the State Board. The FHCF collects premiums from and provides reimbursements to insurers writing residential property and casualty insurance policies within the state. As a condition of doing business in Florida, each insurer writing “covered policies” is required to contract with the FHCF. “Covered policies” means any insurance policy covering residential property in the state that provides wind or hurricane coverage. This includes any such policy written by Florida Citizens. A limited exemption is available for insurance companies with less than $10 million in covered exposure (not premium). The FHCF is obligated, pursuant to reimbursement contracts, to reimburse participating insurers for a specified percentage of qualifying losses on the basis of selected coverage (45, 75, or 90 percent) in excess of loss retention thresholds (or deductibles). Nearly 85 percent of insurers selected the 90 percent coverage option for fiscal year 2005-2006 (July 1, 2005 through June 30, 2006). There were 205 insurance companies that contracted with the FHCF during that period. The aggregate industry deductible is set by law at $4.5 billion, to be adjusted to reflect increased exposure to the FHCF. Currently, the aggregate deductible is $6.089 billion for the contract year ending May 31, 2007.
An individual insurer’s deductible is based on the insurer’s pro rata share of reimbursement premium due for a contract year and other factors. An insurer’s full deductible applies to each of the insurer’s two largest hurricanes; the deductible is then adjusted to one third for any other hurricanes occurring during the contract year. The insured value of property reinsured by the FHCF in contract year 2007 is estimated to be approximately $2 trillion. The FHCF’s claims-paying capacity in a contract year is set by law, and legislation passed in early 2007 will increase capacity from $15 billion to $38.4 billion. Due to actual coverage selected, the resulting capacity was $27.83 billion. The FHCF’s multiyear claims-paying capacity is over $50 billion. The cap on capacity represents the limited liability of the FHCF—it is not obligated by contract if losses in a given contract year exceed claims-paying capacity. In contract years where there is growth in the FHCF’s cash balance, the capacity is allowed to increase by the lesser of the growth in the cash balance or the growth in the reported insured property values. The projected payout for a participating company is set as a pro rata share of the FHCF’s annual capacity. Prior to reimbursement, an insurer’s loss reports are examined by the State Board and tested for reasonableness. Limited apportionment companies, which possess capital not exceeding $20 million, are entitled to reimbursement first. No one county is responsible for more than 9.8 percent of the fund’s exposure. Dade, Broward, and Palm Beach Counties are contiguous and make up less than 28 percent of the fund’s total exposure. On August 24, 1992, Hurricane Andrew hit the southern coast of Florida just south of Miami and caused economic damages estimated in excess of $25 billion, including an estimated $15.5 billion in insured losses.
The major impacts for primary insurance company buyers of reinsurance in the year following Andrew included a severe shortage of catastrophe property reinsurance capacity and stricter policy terms and conditions, as well as sharp increases in property catastrophe cover rates. The poststorm reaction of a number of insurance companies was to attempt to reduce their underwriting exposure. In early 1993, 39 insurers stated they intended to either cancel or not renew 844,433 policies in Florida. The factors influencing these private insurers included the inability to obtain adequate reinsurance, or its excessive cost when it was available; new catastrophe risk models indicating that exposure levels were higher than previously thought and disproportionate to company and industry financial resources; significant reductions in insurers’ policyholders surplus; concerns about rate adequacy, especially for coastal counties and certain risk categories, such as condominiums; “hidden” exposures from potential assessments by various other insurance mechanisms, for example, residual markets and catastrophe funds; and fear that unfavorable catastrophe exposure would hurt ratings from agencies such as A.M. Best and Standard & Poor’s. The Department of Insurance (now called the Office of Insurance Regulation) issued a study examining the state of the property insurance market and enumerating many recommendations. Among the recommendations was a proposal (originally suggested and supported by the two largest private insurers in the state—State Farm and Allstate) to establish a tax-free state catastrophe fund to provide reinsurance protection between that provided by the private market and a proposed federal fund. Later, the legislature created a Study Commission on Property Insurance and Reinsurance to look into the viability of the property insurance industry and the adequacy of reinsurance.
Of the 40 recommendations made by the commission, a key one was the establishment of a state catastrophe fund “to fill the void between currently available private sector reinsurance and the proposed federal catastrophic fund program.” Virtually all of the recommendations from the commission were enacted with minor alterations, including creation of the FHCF. The cost of FHCF coverage is significantly less than the cost of private reinsurance (one fourth to one third the cost) due to the FHCF’s tax-exempt status, low administrative costs, and lack of a profit or risk load. The tax-exempt status of the FHCF removes a level of potential income taxation for participating insurers resulting from the annual buildup of contingent reserves in years when there are few or no hurricanes, and, thus, allows for the accumulation of funds for the payment of Florida losses. Another reason FHCF premiums are low is that a significant part of the coverage provided by the FHCF may be paid by long-term debt issued by the FHCF after a large hurricane event occurs, as discussed below. A company’s annual reimbursement premium is based on an actuarial formula that considers property location, type of construction, deductible, and loss mitigation. Premiums have been stable over time due to mandatory participation but have increased significantly since 2004, when the FHCF’s capacity was increased. Growth in reported exposure has also factored into increased premiums. The top 10 insurers in the FHCF contribute 64 percent of the total reimbursement premiums paid. The FHCF is expected to collect $736 million in reimbursement premium during contract year 2006. Beginning in 2006, the FHCF was required to charge a rapid cash build-up factor equal to 25 percent of premiums, which was expected to provide $200 million annually. However, the Florida Legislature repealed this provision in early 2007. According to the FHCF, most insurers select the 90 percent coverage option.
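The FHCF reimbursement mechanics described above can be illustrated with a simplified sketch: an insurer recovers its selected coverage percentage (45, 75, or 90 percent) of hurricane losses above its retention, the full retention applies to its two largest hurricanes in a contract year and one third of it to any others, and total reimbursement is capped at the insurer's projected payout (its pro rata share of fund capacity). All figures are hypothetical, and this is an illustration rather than the statutory formula.

```python
# Simplified sketch of FHCF-style reimbursement: coverage percentage applied
# to losses above the retention, with the full retention charged against the
# two largest storms, one third of it against any others, and the total
# capped at the insurer's projected payout. Not the statutory formula.

def fhcf_reimbursement(hurricane_losses, retention, coverage_pct, payout_cap):
    losses = sorted(hurricane_losses, reverse=True)
    total = 0.0
    for i, loss in enumerate(losses):
        # Full retention for the two largest events, one third thereafter
        applicable = retention if i < 2 else retention / 3
        total += coverage_pct * max(loss - applicable, 0)
    return min(total, payout_cap)

# Three storms against a $100 million retention at the 90 percent option
print(fhcf_reimbursement([300e6, 150e6, 80e6],
                         retention=100e6, coverage_pct=0.90, payout_cap=400e6))
```

In this example the two largest storms contribute $180 million and $45 million of reimbursement, and the third, measured against one third of the retention, contributes $42 million, for $267 million in total, which is below the hypothetical $400 million payout cap.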
Insurers may purchase private market reinsurance to cover their hurricane losses for amounts below the retention, above their reimbursement limit, or for the coinsurance amount (10 percent) that is the insurer’s responsibility alongside the layer of coverage provided by the FHCF. In fact, for some large national insurers, the FHCF is a small part of their total reinsurance program. The FHCF is not required to hold the loss reserves that are required of insurers or reinsurers under state law. Financial reserves of the FHCF accumulated steadily through fiscal year 2004 due to limited hurricane activity. Specifically, the FHCF had accumulated net assets of $5.5 billion at the end of the 2004 fiscal year (June 30, 2004). Following the 2004 and 2005 hurricane seasons, the FHCF reimbursed participating insurers over $5 billion, which eliminated the reserves and created an estimated shortfall of $1.425 billion. Standard & Poor’s states that this cyclical financial performance is expected, given the nature of FHCF funding requirements. To support the capacity of the FHCF, revenue bonds may be issued. The Florida Hurricane Catastrophe Fund Finance Corporation was formed in 1996 to issue bonds and engage in such other financial transactions as are necessary to provide sufficient funds to achieve the purposes of the FHCF. The corporation is governed by a five-member board of directors, including the governor, chief financial officer, attorney general, director of the Division of Bond Finance of the State Board, and the Senior Officer. Revenue bonds issued are exempt from state and federal taxes. The corporation has authority to engage in preevent and postevent financing. In June 2006, the corporation undertook postevent financing of $1.35 billion to address its 2005 shortfall. In July 2006, the corporation undertook preevent financing of $2.8 billion to address 2006 liquidity needs.
To pay debt service on outstanding revenue bonds and to reimburse insurers for the reimbursable losses under a covered event, the State Board directs the Office of Insurance Regulation to levy an emergency assessment, which insurance companies collect from their policyholders. Emergency assessments are levied on premiums for all assessable lines of business in Florida. For 2006, there are 27 assessable lines, and medical malpractice policies will be added in 2010. In 2004, surplus lines insurers were added to the emergency assessment base. Excluded lines include accident and health, workers’ compensation, and federal flood insurance. The assessment base, which totaled $35 billion in 2005, has grown at a compound annual growth rate of 14.6 percent since 1970. Over 40 percent of the direct-written premium base is from auto insurance. In May 2006, a 1 percent emergency assessment was directed. The assessments are collected by insurance companies from policyholders and remitted to the FHCF throughout the year. Policyholders are required to pay the assessments, and insurers are required to treat the failure to pay the assessment as a failure to pay premium, which permits an insurer to cancel the policy. The maximum assessment in a single season is 6 percent of premium, and the aggregate limit is 10 percent of the premium base. Emergency assessments had never been assessed or collected prior to the levy of assessments relating to the issuance of the June 2006 bonds. Statewide assessments can also be levied for Florida Citizens and the state insurance guarantee fund. The emergency assessment of 1 percent for the FHCF is expected to be in place for 6 years. In addition to the person named above, Lawrence D. Cluff, Assistant Director; Joseph A. Applebaun; Patrick S. Dynes; Philip J. Curtin; Carrie Watkins; John P. Forrester; Emily R. Chalmers; Thomas J. McCool; Marc W. Molino; David S. Dornisch; and Tania L. Calhoun made key contributions to this report.
National Flood Insurance Program: Preliminary Views on FEMA’s Ability to Ensure Accurate Payments on Hurricane-Damaged Properties. GAO-07-991T. Washington, D.C.: June 12, 2007. Climate Change: Financial Risks to Federal and Private Insurers in Coming Decades Are Potentially Significant. GAO-07-285. Washington, D.C.: March 16, 2007. Definitions of Insurance and Related Information. GAO-06-424R. Washington, D.C.: February 23, 2006. Federal Emergency Management Agency: Improvements Needed to Enhance Oversight and Management of the National Flood Insurance Program. GAO-06-119. Washington, D.C.: October 18, 2005. Crop Insurance: Actions Needed to Reduce Program’s Vulnerability to Fraud, Waste, and Abuse. GAO-05-528. Washington, D.C.: September 30, 2005. Catalogue of Federal Insurance Activities. GAO-05-265R. Washington, D.C.: March 4, 2005. Catastrophe Risk: U.S. and European Approaches to Insure Natural Catastrophe and Terrorism Risks. GAO-05-199. Washington, D.C.: February 28, 2005. Catastrophe Insurance Risks: Status of Efforts to Securitize Natural Catastrophe and Terrorism Risk. GAO-03-1033. Washington, D.C.: September 24, 2003. Catastrophe Insurance Risks: The Role of Risk-Linked Securities and Factors Affecting Their Use. GAO-02-941. Washington, D.C.: September 24, 2002. Insurers’ Ability to Pay Catastrophe Claims. GAO/GGD-00-57R. Washington, D.C.: February 8, 2000. Budget Issues: Budgeting for Federal Insurance Programs. GAO/AIMD-97-16. Washington, D.C.: September 30, 1997. Natural Disaster Insurance: Federal Government’s Interests Insufficiently Protected Given Its Potential Financial Exposure. GAO/T-GGD-96-41. Washington, D.C.: December 5, 1995. Federal Disaster Insurance: Goals Are Good, but Insurance Programs Would Expose the Federal Government to Large Potential Losses. GAO/T-GGD-94-153. Washington, D.C.: May 26, 1994. Flood Insurance: Financial Resources May Not Be Sufficient to Meet Future Expected Losses. GAO/RCED-94-80. Washington, D.C.: March 21, 1994.
Property Insurance: Data Needed to Examine Availability, Affordability, and Accessibility Issues. GAO/RCED-94-39. Washington, D.C.: February 9, 1994. Crop Insurance: Federal Program Faces Insurability and Design Problems. GAO/RCED-93-98. Washington, D.C.: May 24, 1993. Crop Insurance: Program Has Not Fostered Significant Risk Sharing by Insurance Companies. GAO/RCED-92-25. Washington, D.C.: January 13, 1992. Disaster Assistance: Crop Insurance Can Provide Assistance More Effectively Than Other Programs. GAO/RCED-89-211. Washington, D.C.: September 20, 1989. Congress Should Consider Changing Federal Income Taxation of the Property/Casualty Insurance Industry. GAO/GGD-85-10. Washington, D.C.: March 25, 1985. Gulf Coast Rebuilding: Observations on Federal Financial Implications. GAO-07-1079T. Washington, D.C.: August 2, 2007. Gulf Coast Rebuilding: Preliminary Observations on Progress to Date and Challenges for the Future. GAO-07-574T. Washington, D.C.: April 12, 2007. Small Business Administration: Additional Steps Needed to Enhance Agency Preparedness for Future Disasters. GAO-07-114. Washington, D.C.: February 14, 2007. Hurricanes Katrina and Rita: Unprecedented Challenges Exposed the Individuals and Households Program to Fraud and Abuse; Actions Needed to Reduce Such Problems in Future. GAO-06-1013. Washington, D.C.: September 27, 2006. Catastrophic Disasters: Enhanced Leadership, Capabilities, and Accountability Controls Will Improve the Effectiveness of the Nation’s Preparedness, Response, and Recovery System. GAO-06-618. Washington, D.C.: September 6, 2006. Disaster Relief: Governmentwide Framework Needed to Collect and Consolidate Information to Report on Billions in Federal Funding for the 2005 Gulf Coast Hurricanes. GAO-06-834. Washington, D.C.: September 6, 2006. Small Business Administration: Actions Needed to Provide More Timely Disaster Assistance. GAO-06-860. Washington, D.C.: July 28, 2006. Federal Disaster Assistance: What Should the Policy Be? 
PAD-80-39. Washington, D.C.: June 16, 1980.

In recent years, much attention has been focused on the roles that the private sector and federal government play in providing insurance and financial aid before and after catastrophic events. In this context, GAO examined (1) the rationale for and resources of federal and state programs that provide natural catastrophe insurance; (2) the extent to which Americans living in catastrophe-prone areas of the United States are uninsured and underinsured, and the types and amounts of federal payments to such individuals since the 2005 hurricanes; and (3) public policy options for revising the federal role in natural catastrophe insurance markets. To address these questions, GAO analyzed state and federal programs, examined studies of uninsured and underinsured homeowners and federal payments to them, identified and analyzed policy options, and interviewed officials from private and public sectors in both high- and low-risk areas of the United States. GAO also developed a four-goal framework to help analyze the available options. The federal government and some states have developed natural catastrophe insurance programs that supplement or substitute for private natural catastrophe insurance. These programs were created because homeowner coverage for catastrophic events is often not available from private insurers at prices deemed affordable by insurance regulators. Large losses associated with natural catastrophes are some of the biggest exposures that insurers face. Particularly in catastrophe-prone locations, government insurance programs have tended not to charge premiums that reflect the actual risks that homeowners face, resulting in financial deficits. After a resource-depleting disaster, the programs have postfunded themselves through, among other sources, payments from insurance companies and policyholders and appropriations from state and federal taxpayers.
Large numbers of Americans are not insured for natural catastrophes. Homeowners may not purchase natural catastrophe insurance because doing so is voluntary and they may not believe that the risk justifies the expenditure. In addition, some homes may be underinsured--that is, not insured for the full replacement value. GAO estimates that the federal government made about $26 billion available to homeowners who lacked adequate insurance in response to the 2005 Hurricanes Katrina, Rita, and Wilma. Given the unsustainable fiscal path of federal and state governments, they will be challenged to maintain their current fiscal role. As Congress reevaluates the role of the federal government in insuring for natural catastrophes, Congress is faced with balancing the often-competing goals of ensuring that citizens are protected and limiting taxpayer exposure. This report examines seven public policy options for changing the federal government's role, including establishing an all-perils homeowner insurance policy, providing reinsurance for state catastrophe funds, and creating a mechanism to provide federal loans for state catastrophe funds. Each option has advantages and disadvantages, especially when weighed against competing public policy goals. For example, establishing an all-perils homeowner policy is a private sector approach that could help create broad participation. But low-income residents living in parts of the United States with high catastrophe risk could require subsidies, resulting in costs to the government. Similarly, federal reinsurance for state programs could lead to broader coverage, but could displace private reinsurance. GAO also identified several policy options for tax-based incentives for insurance companies, homeowners, investors, and state governments. But these options, which could help recipients better address catastrophe risk, could also result in ongoing costs to taxpayers. 
While some options would address the public policy goals of charging risk-based rates, encouraging broad participation, or promoting greater private sector participation, these policy goals need to be balanced with the desire to make rates affordable.
Labor required states to implement major provisions of WIA Title I by July 1, 2000, although some states began implementing provisions of WIA as early as July 1999. Services provided under WIA represent a marked change from those provided under the previous program, allowing for a greater array of services to the general public. WIA is designed to provide for greater accountability than existed previously: it established new performance measures and a new requirement to use UI data to track and report on the achievements of the three WIA-funded programs. WIA also requires that many federal programs work together to provide employment and training services through the one-stop system. Program services provided under WIA represent a marked change from those provided under JTPA. When WIA was enacted in 1998, it replaced the JTPA programs for economically disadvantaged adults and youth and for dislocated workers with three new programs—WIA Adult, Dislocated Worker, and Youth—that provide a broader range of services to the general public, no longer using income to determine eligibility for all program services. The newly authorized WIA programs no longer focus exclusively on training but provide for three tiers, or levels, of service for adults and dislocated workers: core, intensive, and training. Core services include basic services such as job searches and labor market information. These activities may be self-service or require some staff assistance. Intensive services include such activities as comprehensive assessment and case management—activities that require greater staff involvement. Training services include such activities as occupational skills or on-the-job training. These tiers of WIA-funded services are provided sequentially.
That is, in order to receive intensive services, job seekers must first receive at least one core service; to receive training services, a job seeker must first receive at least one core service and then at least one intensive service. Key to moving from core to a higher level of services is that the services are needed to help job seekers become self-sufficient. Labor’s guidance provides for monitoring and tracking to begin when job seekers receive core services that require significant staff assistance. Job seekers who receive core services that are self-service in nature are not included in the performance measures. WIA is designed to provide for greater accountability than the accountability provided for under JTPA. It does so by establishing new performance measures and a new requirement to use UI data to track and report on the achievements of the three WIA-funded programs. According to Labor, performance data collected from the states in support of the measures are intended to be comparable across states in order to maintain objectivity in determining incentives and sanctions. They are also intended to provide information to support Labor’s performance goals under the Government Performance and Results Act (GPRA) and for program evaluation. Some of the measures that relate to adults, dislocated workers, and older youth are similar to those used under JTPA, including job placement, job retention, and wage gains or replacement. Attainment of a credential—a degree or certification of skills or training completed—and customer satisfaction for both job seekers and employers are new under WIA. (See table 1 for a complete list of the WIA performance measures and appendix I for a more complete explanation of the performance measures discussed in this report.) In contrast to JTPA, for which data on outcomes were obtained through follow-ups with job seekers, WIA requires states to use UI wage records to track outcomes. 
According to Labor’s guidance, if a program participant does not appear in the UI wage records, states may use supplemental data sources, such as follow-ups with participants and employers, to track entered employment, retention, and credential attainment. However, only UI wage records may be used to calculate earnings change and replacement. Unlike JTPA, which established expected performance levels using a computer model, WIA requires states to negotiate with Labor to establish expected performance levels for each measure. States, in turn, must negotiate performance levels with each local area. The law requires that these negotiations take into account differences in economic conditions, participant characteristics, and services provided. To derive equitable performance levels, Labor and the states use historical data to develop their estimates of expected performance levels. These estimates provide the basis for negotiations. WIA holds states accountable for achieving their performance levels by tying those levels to financial sanctions and incentive funding. States that meet their performance levels under WIA are eligible to receive incentive grants that may generally range from $750,000 to $3 million. To be eligible for an incentive grant, states must also meet levels established under the Department of Education’s Vocational Education (Perkins Act) and Adult Education and Literacy programs. States that do not meet their performance levels under WIA are subject to sanctions. If a state fails to meet its performance levels for 1 year, Labor provides technical assistance, if requested. If a state fails to meet its performance levels for 2 consecutive years, it may be subject to up to a 5-percent reduction in its annual WIA formula grant. Under JTPA, the most stringent sanction was the possible reorganization of the local service delivery organization.
In addition to establishing the three new programs, WIA requires that states use the one-stop center system to provide services for these and many other employment and training programs. This system was developed by states prior to WIA through One-Stop Planning and Implementation Grants from Labor. About 17 programs funded through four federal agencies are now required to provide services through the one-stop center under WIA. Table 2 shows the programs that WIA requires to provide services through the one-stop centers (termed mandatory programs) and the related federal agency. Under WIA, employers are expected to play a key role in establishing regional workforce development policies, deciding how services should be provided in the one-stop, and overseeing one-stop operations. Employers, who are encouraged to use the one-stop system to fill their job vacancies, are also seen as key one-stop customers under WIA. States and localities are taking action to implement performance measures for the three WIA-funded programs, but they have confronted several challenges in doing so. To implement these measures, states and localities had to change the way they collected and reported performance data. Most states we surveyed had to create new automated data systems to collect and report WIA data. Many state systems, however, are still not completely in place. The lack of final guidance from Labor on how to report the data slowed the development of these systems. States and localities also faced challenges in implementing the measures due to their complexity and the resource demands they created, and some had to develop new procedures to obtain UI wage records. In addition, states faced a new negotiation process with Labor to set performance levels for each measure.
Many states believe these levels are too high because little or no baseline data were used, and the negotiations did not sufficiently account for differences in economic conditions and populations served. Under WIA, performance levels are now tied to incentives and sanctions so that states can be financially rewarded if they meet them or penalized if they do not. States reported that the need to meet these performance levels may lead local staff to focus WIA-funded services on job seekers who are most likely to succeed in their job search or who are most able to make wage gains. As part of implementing WIA performance measures, states had to develop automated data systems to track the activities of individual WIA participants and report on performance. Based on our survey, most states developed a new automated data system, or management information system (MIS), to collect and report WIA performance data at the state level. The remaining states adapted their previous data collection systems used under JTPA. However, 15 states, regardless of whether they were developing a new system or adapting their existing system, reported that, as of August 2001, they did not have their system completely in place. All states expect to have completed their systems by July 2002. In some states, local areas do not use the state MIS to collect local WIA performance information. In these states, local areas must develop their own systems, taking time and resources to do so. The lack of timely reporting guidance slowed the development of the data systems. Final guidance on how states must report their performance to Labor was issued to the states in March 2001—8 months after the states were required to implement major provisions of WIA and begin collecting data. Lack of final guidance resulted in delays and costly program changes as states and local areas developed and adjusted their final systems. 
For example, one local area we visited decided to continue using its old system and delayed the development of a new system pending final guidance because it would be too costly and time-consuming to develop a system that might need to be changed. All states, regardless of whether they had implemented a new system, had to make changes in their automated systems to accommodate the final guidance. States and localities reported that the complexity of WIA’s new performance measures made them difficult and time-consuming to implement. Many of the states that we surveyed commented that the measures were hard to follow because the calculations are complex and sometimes confusing: specifically, who to include in each measure, when to collect the data, and how to perform the calculations. Knowing who to include in the measures. It is difficult to know whether a job seeker should be counted in the measure for a program, even if he or she is served by the program. For example, a participant in the adult program who is already employed must be included in the retention and wage gain measures but cannot be counted in the entered employment measure. Yet, for the dislocated worker program, the entered employment measure can include those who may be employed when they enter the program. Knowing when to measure performance for participants. The data for different performance measures can be collected in different quarters of the program year. For example, for customer satisfaction, data can be collected at two points in time, depending on how the participant exited the program (see fig. 1). Data on entered employment for participants in the adult program are collected in the third quarter after exit; retention data are collected in the fifth quarter after exit. 
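The collection timing just described can be sketched as a simple lookup, in the spirit of the "bean counter" tool discussed below. The quarter offsets for UI and supplemental data come from the report; the function itself and the integer quarter numbering are illustrative simplifications.

```python
# A sketch of when staff must collect outcome data for the adult program,
# counted in quarters after a participant exits. The offsets come from the
# report; the quarter arithmetic is a simplification.

COLLECTION_QUARTERS = {
    # measure: (UI wage record offset, supplemental data offset)
    "entered employment": (3, 4),
    "retention": (5, 6),
}

def followup_quarter(measure, exit_quarter, use_supplemental=False):
    """Quarter (numbered consecutively) in which to collect data for a
    participant who exited the program in exit_quarter."""
    ui_offset, supp_offset = COLLECTION_QUARTERS[measure]
    return exit_quarter + (supp_offset if use_supplemental else ui_offset)
```

A participant who exits in quarter 1, for example, has entered employment data collected in quarter 4 and retention data in quarter 6, with supplemental data each permitted one quarter later.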
Data on earnings change and replacement are collected at two points in time: pre-program earnings data can be collected at registration, and post-program earnings data are collected in the fifth quarter after exit. If a program participant does not appear in the UI wage records, local staff can collect supplemental data to establish employment for the participant, but this must be recorded within 30 days after the WIA participant is found missing from the wage records. For entered employment, staff can collect supplemental data in the fourth quarter after a participant leaves (or exits) the program, but for retention, staff can collect it in the sixth quarter after exit. Supplemental data cannot be used to measure earnings change and replacement. Because the timing of data collection is complex and can be confusing, one local area in Oregon developed a tool it calls the “bean counter” to help local staff determine when to follow up with participants so their performance counts in the calculations (see fig. 2). Knowing how to calculate the measures. To calculate the measures, states must account for a variety of factors. The type and combination of these factors determine the calculations that will be used. For example, in calculating some of the measures for the adult program, states must consider (1) whether the job seeker is employed at registration, (2) whether he or she is employed at both the first and third quarters after exit, and (3) whether the data source used to confirm employment was UI records or supplemental data. This information in various combinations results in 14 different ways that adult participants can be grouped in order to calculate the measures. In addition to noting the complexity of the measures, state and local officials said that the new measures taxed their resources. States had to develop procedures to collect data for the new customer satisfaction measure in compliance with detailed guidance from Labor. 
The guidance calls for states to conduct a telephone survey from a random sample large enough to obtain 500 completed surveys from both participants and employers. Because this guidance changed over time, states had to revise their procedures accordingly. For example, a revision to the guidance issued in October 2001 required states to maintain an up-to-date list of participants’ names and addresses from which to sample—a requirement that was originally voluntary. One indication of states’ progress in implementing these measures may be reflected in their ability to submit complete quarterly reports. Quarterly reports require data on all 17 performance measures. For the quarterly report that was due in May 2001, all states submitted their reports, but, according to Labor, only 16 states were able to provide data for all 17 performance measures. For the quarterly report that was due in August 2001—more than 1 year after WIA implementation—all states submitted their reports, but only 23 could provide data on employer customer satisfaction. According to Labor, states could not fully report on customer satisfaction because they have not yet fully implemented procedures to measure it. One state had to compile the data manually because its MIS was not fully operational. WIA’s new requirement that states use UI wage records to measure outcomes has led states to adopt new procedures to access these and other sensitive records. Unlike JTPA, which relied on surveys of participants to collect information on employment and earnings, WIA requires UI wage records to be used as the primary data source of employment and wage information—and the only data source for some measures, according to Labor’s guidance. To obtain employment and earnings information, states match information collected on individual WIA participants against state UI wage records. To access UI data from the state agency that oversees the UI database, some states had to establish data-sharing agreements. 
In Mississippi, for example, the agency responsible for overseeing WIA—the Mississippi Development Authority (MDA)—had to make arrangements with the agency that oversees the UI data—the Mississippi Employment Security Commission—to have them match the wage records and provide the results to MDA. In addition, some states may be more rigorous in protecting the confidentiality of UI records through privacy laws, which may add obstacles to collecting performance data. For example, Oregon law prohibits the release of WIA participants’ records without informed consent. Consequently, program providers had to enter into an agreement that established a protocol for collecting and sharing the data—one that developed safeguards to protect confidentiality. In addition, the state had to develop a process to ensure that WIA participants consented to the use of their protected records in this way. All the states we visited believed that some of the established performance levels for their measures were set too high for them to meet—either because they were set in the absence of historical or baseline data or because negotiations did not sufficiently account for variations in economic conditions or the populations served. States reported that limitations in available baseline data made it difficult to set fair, realistic performance levels. The new measures on credentials and customer satisfaction, for instance, had no prior data available on which to set performance levels. Where baseline data were available, such as for the wage-related measures, the data were collected under JTPA, a program whose goals were different from those of WIA. In addition, some states believe that the performance levels did not account for variations in economic conditions, such as the slow growth in new or existing businesses that some areas have experienced. Performance levels also did not account for the many economically disadvantaged or hard-to-serve individuals seeking services in some local areas. 
Many states reported that the need to meet performance levels may be the driving factor in deciding who receives WIA-funded services at the local level. All the states we visited told us that local areas are not registering many WIA participants, largely attributing the low number of WIA participants to concerns by local staff about meeting performance levels. Local staff are reluctant to provide WIA-funded services to job seekers who may be less likely to get and keep a job. One state official described how local areas were carefully screening potential participants and holding meetings to decide whether to register them. As a result, individuals who are eligible for and may benefit from WIA-funded services may not be receiving services that are tracked under WIA. Performance levels for the measures that track earnings change for adults and earnings replacement for dislocated workers may be especially problematic. Several state officials reported that local staff were reluctant to register already employed adults or dislocated workers. Officials in one state reported that some local areas had not yet registered any dislocated workers. State and local officials explained that it would be hard to increase the earnings of adults who are already employed or replace the wages of dislocated workers, who are often laid off from high-paying, low-skilled jobs or from jobs that required skills that are now obsolete. In addition, for dislocated workers, employers may provide severance pay or workers might work overtime prior to a plant closure, increasing these workers’ earnings before they are dislocated. As a result, many dislocated workers who come to the one-stop center have earned high wages just prior to being dislocated, making it hard to replace—let alone increase—their earnings. If high wages are earned before dislocation and lower wages are earned after job placement through WIA, the wage change will be negative, depressing the wage replacement level. 
As a result, a local area may not meet its performance level for this measure, discouraging service to those who may need it. A hypothetical example involving two workers dislocated at the same time illustrates this point (see table 3). One worker is a sales clerk with limited skills earning $25,000, the other a long-time factory worker with obsolete skills earning $60,000. Both are laid off from work and go to their local one-stop center seeking job placement assistance. The clerk is placed in a new job as a receptionist paying $25,000. By calculating his wage replacement from his salary as a clerk, the one-stop can claim a wage replacement rate of 100 percent. The factory worker eventually gets a job as a security guard earning $30,000, netting a wage replacement rate of 50 percent. As this example shows, a one-stop center can meet its performance levels more easily by serving the clerk than by serving the factory worker even though both job seekers may need the one-stop system’s resources to find a job or enhance their skills. Some states and Labor are making efforts to address this disincentive to serve certain job seekers. Indiana instituted a policy allowing local areas to adjust their dislocated worker wage replacement rate in light of the significant dislocations they are facing. Texas uses a regression model to establish local performance levels that adjust for differences in factors, such as economic conditions and the characteristics of individuals served. Without this policy, said a Texas official, WIA programs would have registered fewer workers. Similarly, Michigan substantially reduced the penalties to local areas for failing to meet performance levels and found that the number of registered participants increased as a result of instituting less threatening sanctions. WIA requires that states be allowed to renegotiate their performance levels based on unanticipated circumstances. 
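The table 3 comparison comes down to a single ratio, reproduced in the short calculation below. The wage figures are the hypothetical ones from the example above; the function name is our own.

```python
# Wage replacement rate: post-placement earnings as a percentage of
# pre-dislocation earnings. Figures are from the hypothetical table 3 example.

def wage_replacement_rate(pre_dislocation_wage, post_placement_wage):
    """Post-placement wage as a percentage of the pre-dislocation wage."""
    return 100.0 * post_placement_wage / pre_dislocation_wage

# Sales clerk: $25,000 before layoff, $25,000 as a receptionist.
clerk_rate = wage_replacement_rate(25_000, 25_000)    # 100.0 percent
# Factory worker: $60,000 before layoff, $30,000 as a security guard.
factory_rate = wage_replacement_rate(60_000, 30_000)  # 50.0 percent
```

The arithmetic makes the disincentive plain: the factory worker's rate is depressed by his high pre-dislocation wage, not by the quality of the service he received.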
Labor is currently developing criteria that states can use to renegotiate their performance levels based on unanticipated circumstances, such as changes in economic conditions due to plant closings or shifts in unemployment for the current and future years. The guidance is expected to be released soon. Even when fully implemented, WIA performance measures may still not provide an accurate picture of performance for the three WIA-funded programs largely because data are neither comparable across states nor timely. State and local officials generally supported many of the performance measures as relevant indicators of the success of an employment and training program. However, the performance data collected and reported by states and localities are not comparable largely because of the lack of clear guidance on when to collect and report performance data and what constitutes a credential—the attainment of a certified skill or degree. In addition, while UI wage records are one of the best available sources of employment and earnings data, limitations in the data may hinder the ability of states and local areas to meet their performance levels and use the measures for short-term program management. State and local officials in the states we visited generally support many of the performance measures as relevant indicators of the success of an employment and training program. For example, several officials cited the wage-related measures, such as job placement, retention, and earnings change, as important indicators of a successful employment and training program. The measures are also generally consistent with the goal of WIA to help individuals get and keep jobs and increase their wages and skills. In addition, the states noted that the measures provide a good basis for long-term evaluation. 
However, the performance data collected and reported by states and localities are not comparable—a critical component in creating a level playing field from which states’ relative performance can be evaluated. While there are various reasons that performance data are not comparable, one of the chief reasons is the lack of clear guidance for collecting and reporting performance data on participants. Labor has provided detailed written guidance to states on who should be registered under WIA and when this registration should occur, but the guidance is open to interpretation in some areas. The lack of a uniform understanding of when registration occurs and thus who should be counted toward the measures raises questions about both the accuracy and comparability of states’ performance data. For example, the guidance tells states to register adults and dislocated workers who need significant staff assistance designed to help with job seeking or acquiring occupational skills, but the state can decide what constitutes significant staff assistance. The guidance provides examples of when to register job seekers, but it sometimes requires staff to make subtle and subjective distinctions. For example, those who receive initial assessment of skill levels and the need for supportive services are not to be registered; those requiring comprehensive assessment or staff-assisted job search and placement assistance, including career counseling, are to be registered. In another example involving the classification of workshops, job seekers who participate are to be registered in some cases, but not in others. Labor has allowed states and local areas flexibility in implementing the registration policy, and we found that local areas differed on when they registered WIA job seekers. In one local area we visited, the one-stop center registers most job seekers who come into the center, even if staff assistance is minimal. 
At this center, a general orientation is sufficient for the job seeker to be registered under WIA. In contrast, another center in the same state registers only those job seekers who require significant staff assistance and are likely to benefit from intensive services. Similar disparities occurred in other states we visited. Labor has said there is little consistency across states in registering participants and has convened a work group to develop additional guidance on registration, but as yet, the issues remain unresolved. The lack of a definition for the credential measure is also leading to performance data that are not accurate or comparable across states. Labor allows the states and local areas to determine what constitutes a credential and to develop a statewide list of approved credentials with input from employers. Because states and, in many cases, local areas must define what constitutes a credential, what is currently counted as a credential differs within and across states. Some states may strictly define credentials to include only diplomas from accredited institutions or use only formal training completion criteria as defined by education partners. Other states may expand their criteria to count a broad variety of credentials, such as job readiness, on-the-job experience, and completion of workshops. Labor officials note that states’ performance levels for the credential measure are negotiated to take state and local definitions into account, and the measure is intended to help local employers gauge the readiness and skill level of job seekers. Nevertheless, given the broad range of definitions states and localities employ, the outcomes on the credential measure may be of limited value, even within a single state. 
UI wage records are one of the best available data sources for tracking the employment and earnings of individuals—a significant improvement over the less objective self-reporting methods of JTPA—but the limitations of the database pose challenges that need to be addressed. These challenges, if unresolved, may hinder states’ ability to meet their performance levels. As we have reported in prior work, one such limitation is that UI wage records, while covering about 94 percent of workers, exclude certain employment categories, such as self-employed persons, most independent contractors, military personnel, federal government workers, and postal workers. States, therefore, must develop alternative methods to track WIA participants who are employed in these uncovered occupations. Pennsylvania, for example, developed a partnership with other states in its region to share the cost of purchasing the rights to federal civil service and military personnel data. And Florida has developed agreements with the Department of Defense, the Office of Personnel Management, and the U.S. Postal Service to access employment and wage information on an annual basis. Our survey data indicate that 33 states are using additional or supplemental data to compensate for uncovered occupations, but only 27 of those use the supplemental data to count toward their performance levels. Thus, at least 23 states have not used additional data to help them meet their performance levels. Another limitation is that state UI databases include only wage record information on job seekers who get jobs within their state; they do not track job seekers who find jobs in other states. States cannot readily access UI wage records from other states to track outcomes under WIA, making it difficult to track individuals who receive services in one state but get a job in another. 
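A minimal sketch of the match-and-fallback process described above. The record layouts and function name are hypothetical; uncovered occupations and out-of-state employment are, per the report, common reasons a participant is missing from the UI records.

```python
# Match a WIA participant against state UI wage records; fall back to
# supplemental follow-up data when no UI record exists. Per Labor's
# guidance, supplemental data can support the employment and retention
# measures but not the earnings measures.

def find_employment_record(participant_id, ui_wage_records, supplemental_data):
    """Return (source, record) for a participant, or (None, None)."""
    if participant_id in ui_wage_records:
        return "UI", ui_wage_records[participant_id]
    if participant_id in supplemental_data:
        return "supplemental", supplemental_data[participant_id]
    # Not found in either source: possibly employed out of state or in an
    # uncovered occupation (self-employment, military, federal, postal).
    return None, None
```

A participant who finds work in another state, or in an uncovered occupation with no supplemental follow-up, falls out of both sources entirely, which is the gap the interstate agreements and WRIS discussed below try to close.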
Over one-third of all of the states we surveyed reported that an estimated 16 to 30 percent of cases are not being picked up by their state’s UI wage record system. To fill in these gaps, seven states have agreements with other states—often those that share a common border—to exchange UI information. Indiana, for example, established an agreement with Illinois to trade data. If data are missing on particular participants, Illinois sends the cases to Indiana to see if the Indiana UI wage records have information on the job seeker. The value of these agreements, however, may be limited because job seekers may find work in a state that does not have an agreement with the one in which they received services. Another way to obtain UI data on workers who are employed out of state is through WRIS, a clearinghouse that makes UI wage records available to states seeking employment and wage information on their WIA participants. This information can provide outcome data on WIA participants to help states meet their performance levels. While WRIS was available for states to use by July 2001, only 7 out of the 50 states are currently able and ready to participate, with 8 others in various stages of completing the requirements for participation. Although many states have shown an interest in a system such as WRIS, many are reluctant to participate because Labor, while agreeing to cover all the costs of operating WRIS for its first year, has not yet agreed to pay for subsequent years. The estimated total cost of operating WRIS is $2 million annually, but states have not been given a definitive answer about how much it would cost them to participate after this first year if Labor does not continue funding. Because of this uncertainty regarding future costs, states are hesitant to commit to participation in WRIS. 
If not all states participate, the value of WRIS will be diminished—even for participating states—because no data will be available from nonparticipating states’ UI wage records. The lack of timely data, due to the time lag in obtaining UI wage records, makes it difficult for state and local officials to use the performance measures for short-term program management—because, for the wage-related measures, currently available data will reflect performance from the previous program year. While UI wage records are the best available data source for documenting employment, the data collection and reporting process is slow and time-consuming. Data are generally collected from employers only once every quarter, and employers have 30 days after the quarter ends to report the data. In many states, employers—especially small businesses—are allowed to submit data in paper format, which then must be converted to electronic media. After data entry, information must be checked for errors and corrected. All of these steps take time. As a result, WIA program administrators are unable to get a timely picture of program performance. For example, we asked states in our survey how quickly job placement outcome data would be available to them from UI wage records. On the basis of our survey, we found that for 30 states, the earliest time period that job placement data would be available is 6 months or more after an individual entered employment, with 15 states reporting that it may take 9 months or longer. Similarly, for the employment retention measure, over half of states report that obtaining this information could take a year or longer. (See fig. 3.) The time delay in receiving UI wage record data makes it difficult for state and local officials to use the performance measures to gauge the effectiveness of their services. 
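The lag described above is largely structural arithmetic, sketched roughly below. The quarterly reporting cycle and the 30-day employer window come from the report; the state-side processing time is an assumption we vary to show how quickly the lag reaches the six-month floor states reported.

```python
# Rough arithmetic for how long after a participant enters employment the
# job placement outcome can first appear in UI wage records.

MONTHS_PER_QUARTER = 3

def months_until_placement_data(state_processing_months):
    """Earliest months from entering employment to usable UI data."""
    # Wages are reported quarterly, so up to a full quarter can pass
    # before the employment falls into a completed reporting quarter ...
    quarter_wait = MONTHS_PER_QUARTER
    # ... employers then have 30 days after the quarter ends to report ...
    employer_reporting = 1
    # ... and states must still key in paper filings and correct errors
    # (duration assumed, passed in as state_processing_months).
    return quarter_wait + employer_reporting + state_processing_months

# Even a modest two months of state-side processing reaches the six-month
# floor that 30 states reported in our survey.
lag = months_until_placement_data(state_processing_months=2)  # 6 months
```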
States report that not being able to get performance results in the same program year is a problem: it makes it difficult to manage programs and improve one-stop services. Labor reports that the performance measures are not intended to be a management tool. State and local officials, therefore, must develop alternative methods if they want to assess the quality of their services so they can identify problems and improve programs in a timely way. Labor has encouraged these efforts, but, while some local areas are finding ways to collect data to help them manage their programs, there is no cohesive effort at the federal level to share strategies and promising approaches for the Adult and Dislocated Worker Programs. Although there are performance measures for the three WIA-funded programs and most of the programs required to provide services in the one-stop, no measures exist to assess how well the overall one-stop systems are working. The success of the one-stop system as a whole is not captured by the program measures of individual one-stop partners. Furthermore, combining the performance measures from mandatory programs does not provide a comprehensive picture of one-stop performance. Even when measures appear the same, comparing them is difficult because of differences in definitions and calculations. Beyond failing to provide a complete evaluation of one-stop performance, state officials reported that the separate reporting requirements of the partner programs have hampered coordination within their one-stop systems. While WIA did not establish any comprehensive measures to assess the overall one-stop system, it required that Labor take the lead in developing optional measures to help states assess progress toward their workforce investment goals. Labor has made limited progress on such performance measures, and only a few states have developed their own overall measures. 
The existing performance measures for participating one-stop programs fail to capture important one-stop features. First, it is difficult to get an unduplicated count of job seekers using the one-stop. While an individual may have multiple outcome measures for the services received through each of the programs at a one-stop, there is no single outcome measure for multiple services. In addition, separate reporting systems for each of the programs make it difficult to disaggregate data and track an individual’s outcome for those receiving multiple services. Second, other important aspects of one-stop performance are not included within the existing measures. Customer satisfaction measures used in support of WIA-funded programs and the Employment Service fail to measure how job seekers and employers believe they are being served by the one-stop system as a whole. Instead, these measures show satisfaction with the individual programs. Employer satisfaction is important under WIA because WIA created a more private-sector driven system. Capturing customer satisfaction of the system as a whole would reflect whether job seekers are successful at attaining the services they need to get jobs and would assess whether employers are satisfied with job applicants sent to them from different one-stop programs. Finally, state and local officials expressed concern that a large portion of one-stop participants are not included in performance measures. Many job seekers use self-service and informational activities, but they are not tracked or counted in any program measures. While staff time and resources are used to establish and maintain self-service resource rooms and web sites, job seekers who use only these services will not be included in any of the performance measures. Without any information on individuals who use self-service, it will be difficult for Labor to show how effectively one-stops are being used. 
Performance measures for different programs often track similar outcomes, as figure 4 shows. However, the measures cannot be combined to obtain an overall view of one-stop performance. Although the same terms are used in various performance measures, their definitions are not identical. For example, while WIA older and younger youth programs define youth as being between the ages of 14 and 21, the laws governing Job Corps and HUD’s Youthbuild define youth as being between the ages of 16 and 24. Similarly, the definition of veterans is different for the Employment Service and Veterans’ Employment and Training Service program. The differences in definitions mean that assessing the outcomes for youth or veterans by combining the performance measures of individual programs within the one-stop setting would be difficult. Besides variations in definitions, there are also variations in how measures are calculated in different programs. For example, while the entered employment rate for WIA’s adult program is defined as the percentage of workers who get a job by the end of the first quarter after exit, the entered employment rate for the Employment Service is defined as the percentage of workers who get a job or changed employers in the first or second quarter after registration. As a result, performance data from these separate programs cannot be combined to yield a single overall score to assess various performance outcomes of the one-stop system. The Office of Management and Budget (OMB) convened a work group representing federal WIA partners to look at common definitions and measures across programs, which may issue guidance to states and localities on WIA performance measures and federal requirements. Beyond failing to provide an overall picture of one-stop performance, state officials reported that separate performance measures impede the cooperation of one-stop partners. 
As a result, even though WIA was meant to establish a more coordinated workforce development system through the use of one-stops, over one-third of states surveyed expressed concern that individual program performance measures may impede this process. Some states even believed that separate measures caused competition among programs. They said that if staff did not understand that a participant could be counted in more than one program, staff might not direct participants to other one-stop programs. For example, one state reported in its written comments to our survey that competition for participants and duplication of services due to lack of coordination with other programs would continue as long as each program is required to meet its own performance and participation levels. Fourteen states volunteered in their written comments to our survey that the federal government should work to coordinate performance measures across programs or develop systemwide measures. In addition, while states agree that systemwide measures are needed, they caution against making any additional measures mandatory, since states are still adapting to the existing measures. Although WIA did not establish one-stop measures, it does require that Labor develop additional optional measures to assist states in assessing progress toward their workforce investment goals, which Labor has interpreted to include one-stop measures. Labor began developing workforce development performance measures to capture overall one-stop use after one-stop systems were piloted. Since the passage of WIA, Labor has continued its efforts to develop systemwide measures, but it has made limited progress. Labor convened a working group in September 2001 to develop additional indicators of one-stop performance. 
Participants in this working group included national workforce-related organizations such as the National Association of State Workforce Agencies, the National Governors' Association, the National Association of Counties, and the U.S. Conference of Mayors, as well as representatives of states and regional boards. This group is working to develop a menu of indicators that will help provide a comprehensive picture of WIA system activity. Such measures may include capturing information on self-service customers and the cost of services at the one-stop. These measures, while optional, would help provide information on overall one-stop use across the country if all states report on at least some of the measures. Labor plans to have guidelines for these optional indicators in place for use in program year 2002, which begins on July 1, 2002. In order for states to be able to implement them for the coming program year, Labor will need to provide final guidance well before July 2002. In addition to these national efforts, some states, on their own initiative, have attempted to develop additional measures for one-stop systems, but these efforts are not coordinated and do not allow for nationwide assessment of the one-stop system. According to our survey, eight states have created or are developing additional systemwide measures, but of those, only three are reporting them to Labor. Pennsylvania, for example, developed five measures specific to its one-stop system's performance. These indicators, intended to measure the overall effectiveness of the one-stop system, include the median cycle time to fill a job and the percentage of employers and individuals using services through the one-stop. Florida, on the other hand, has developed "tiers" of measures that focus on the outcomes of its workforce development programs. In the first tier, state-generated systemwide indicators measure many employment and training programs together. 
The second tier clusters similar types of programs and captures measures relevant to particular groups (e.g., continued education status of youth in youth programs). The third tier captures all the federally mandated measurements, as well as measures for the other tiers, such as caseloads for specific programs. In this way, Florida has attempted to measure the system overall as well as outcomes for individual programs. Measures developed by other states include the number of people using the resource rooms at one-stops and the increase over time in the number of unemployed people getting a job. Despite these states’ efforts, the absence of nationally established systemwide measures means that Labor cannot ensure nationwide comparability. WIA represents a fundamental shift in the way federally funded employment and training services are provided to job seekers and in the way WIA programs measure and monitor success. Despite obstacles, in just over a year states have made good progress in implementing the new requirements under WIA—developing new processes and designing new systems. Labor, for its part, has been working to find ways to allow states and localities greater flexibility to design their programs to meet local needs and has been actively seeking opportunities for states to have input into the process, particularly in the area of performance measurement. But given the challenges states have faced in implementing the new performance measurement system, more time is needed before the measures can meaningfully gauge the success of the programs. This new performance measurement system under WIA is a high-stakes game—a state’s future funding and, therefore, its ability to serve its citizens may depend upon how well it performs compared to how well it is expected to perform. It should be no surprise that states and localities are designing their systems and processes in ways that will enhance their ability to meet their performance levels. 
Because states see the current performance levels as too high for the current economy, states and localities may choose not to serve those job seekers who may be helped by their services but who may not help in achieving their negotiated performance levels. Unless the performance levels can be adjusted to truly reflect differences in economic conditions and the population served, local areas will continue to have a disincentive to serve some job seekers who could be helped. WIA's requirement to use UI data to track outcomes is a step in the right direction—it provides federal, state, and local government entities with an objective means to evaluate program success. But it brings challenges that need to be addressed, and states will need help to address them. Establishing the means to routinely share data across state lines through WRIS and developing ways to share promising approaches in the use of supplemental data sources and in managing the assessment of short-term program needs would go far in moderating these challenges. Without this help and the cooperative efforts of states and localities toward this end, developing a useful performance measurement system will take longer and cost more. In general, WIA's performance measurement system captures some useful information, but it may not capture all the right information. The measure to track credentials has limited value because it lacks a standard definition for what is being measured. For other measures, the lack of clear definitions for whom to track limits their usefulness in drawing conclusions about program success at both the state and national levels. Without clear definitions and processes, the measures will not provide the Congress with a true picture of how well the programs are performing. Furthermore, WIA performance measures gauge only WIA-funded services; yet there is widespread agreement that measures are needed to gauge the effectiveness of the entire one-stop system. 
The system's narrow focus on program outcomes for a limited number of participants misses a key requirement of WIA to support the movement toward a coordinated system. In fact, the measures may foster the opposite—a siloed approach that encourages competition among programs and limits their cooperation. Without global one-stop measures, the Congress will not be able to assess how well states and localities are doing in meeting WIA's requirement to coordinate services. The lack of such measures may, instead, send a signal to states that service coordination is a minor goal. To give states and local areas more time to implement WIA performance measures and establish the baseline data needed to determine performance levels, we recommend that the Department of Labor delay the application of financial sanctions for at least 1 year or until it is judged that states have their data systems sufficiently in place to successfully track WIA outcomes. To eliminate possible disincentives to serve some job seekers and ensure that states and local areas will not be unduly penalized for economic downturns, we recommend that the Department of Labor expedite the release of guidance on revising negotiated performance levels and allow states to immediately begin the process of renegotiation. To ensure uniformity in data collection and reporting so that performance results are more accurate and comparable across states, we recommend that the Department of Labor (1) provide clearer guidance, using objective criteria, on who should or should not be registered as a WIA participant for tracking purposes and, once the guidance is released, work proactively with states to implement it; and (2) issue guidance delineating a clear definition of what constitutes a credential and, once the guidance is released, ensure that states use it to report on this indicator. 
To help states address the challenges of using UI data to measure outcomes, we recommend that the Department of Labor continue to fully fund the Wage Record Interchange System in order to facilitate the sharing of UI data across state lines; develop ways for states to share promising approaches in the use of supplemental data sources in closing the data gaps for covered and uncovered employment in UI; and develop ways for states to share promising approaches to addressing the UI timeliness issue, providing methods to help states monitor and improve their programs in a timely manner. To help states measure one-stop performance, we recommend that the Department of Labor ensure that the development of optional one-stop system measures is completed in enough time for states to implement them at the beginning of program year 2002. We provided a draft of this report to Labor for its review and comment. Labor's comments are in appendix II. We incorporated comments and clarifications where appropriate. Labor generally agreed with our findings and recommendations, noting that they are consistent with information it has gathered from state and local partners. In its comments, Labor expressed concern that negotiated performance levels may be determining who receives WIA-funded services, indicating that it will work with states and local areas to address this issue. Labor also commented on our finding regarding the lack of clear guidance on certain policies, stressing the importance of state and local flexibility in determining specific policies and practices to fit local needs. While state and local flexibility is important, we continue to be concerned that the lack of a uniform understanding of when registration occurs and what constitutes a credential raises questions about both the accuracy and the comparability of states' performance data. We are pleased to note that Labor is in the process of reviewing this issue. 
Finally, Labor cites its efforts to collaborate with states and local areas in developing a performance accountability system and increasing partnerships. We commend Labor for obtaining states' input and participation in developing such a system. We are sending copies of this report to the Secretary of Labor, relevant congressional committees, and others who are interested. Copies will also be made available to others upon request. The report is also available on GAO's home page at http://www.gao.gov. Please contact me on (202) 512-7215 if you or your staff have any questions about this report. Other major contributors to this report are listed in appendix III.

The following table includes descriptions of the performance measures reviewed in this report. It does not include the calculations required to attain the final value of the performance measures.

Adults:
- Entered employment rate: Of those who did not have a job when they registered for WIA, the percentage of adults who got a job by the end of the 1st quarter after exit. This measure excludes participants who were employed at the time of registration.
- Employment retention rate: Of those who had a job in the 1st quarter after exit, the percentage of adults who have a job in the 3rd quarter after exit.
- Earnings change: Of those who had a job in the 1st quarter after exit, the increase in post-program earnings as compared with pre-program earnings.
- Credential rate: Of those adults who received WIA training services, the percentage who were employed in the 1st quarter after exit and received a credential by the end of the 3rd quarter after exit.

Dislocated workers:
- Entered employment rate: The percentage of dislocated workers who got a job by the end of the 1st quarter after exit. This measure includes dislocated workers who were employed at the time of registration.
- Employment retention rate: Of those who had a job in the 1st quarter after exit, the percentage of dislocated workers who have a job in the 3rd quarter after exit.
- Earnings replacement rate: Of those who had a job in the 1st quarter after exit, the percentage of pre-program earnings being earned post-program. Since it may be difficult to find dislocated workers jobs with equivalent or better wages, this measure captures the percentage of earnings of the new job in relation to the old.
- Credential rate: Of those dislocated workers who received WIA training services, the percentage who were employed in the 1st quarter after exit and received a credential by the end of the 3rd quarter after exit.

Older youth:
- Entered employment rate: Of those who were not employed at registration and who were not enrolled in post-secondary education or advanced training in the 1st quarter after exit, the percentage of older youth who got a job by the end of the 1st quarter after exit. This measure also excludes youth who move on to post-secondary education or advanced training rather than employment.
- Employment retention rate: Of those who were employed in the 1st quarter after exit and who were not enrolled in post-secondary education or advanced training in the 3rd quarter after exit, the percentage of older youth who are employed in the 3rd quarter after exit.
- Earnings change: Of those who had a job in the 1st quarter after exit and who were not enrolled in post-secondary education or advanced training, the increase in post-program earnings as compared with pre-program earnings.
- Credential rate: The percentage of older youth who are in employment, post-secondary education, or advanced training in the 1st quarter after exit and received a credential by the end of the 3rd quarter after exit.

Customer satisfaction:
- Employer satisfaction: The average of three statewide survey questions, each rated 1-10 (1 being "very dissatisfied" and 10 being "very satisfied"): was the employer satisfied with services; did the service meet the expectations of the customer; and how well did the service compare to the ideal set of services.
- Participant satisfaction: The average of three statewide survey questions, each rated 1-10: was the participant satisfied with services; did the service meet the expectations of the customer; and how well did the service compare to the ideal set of services. A statewide telephone survey of a sample of 500 is conducted for all the WIA-funded programs. 
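As an arithmetic illustration of the dislocated worker earnings measure described above, the following sketch computes post-program earnings as a percentage of pre-program earnings. This is hypothetical: the per-worker averaging and the sample figures are assumptions for illustration, and states may instead compare aggregate post- to pre-program earnings.

```python
def earnings_replacement_rate(workers):
    """workers: (pre_program_earnings, post_program_earnings) pairs for
    dislocated workers who had a job in the 1st quarter after exit.
    Returns the average percentage of pre-program earnings being earned
    post-program (an assumed aggregation, for illustration only)."""
    ratios = [post / pre for pre, post in workers if pre > 0]
    return 100.0 * sum(ratios) / len(ratios)

# e.g., one worker regains 90% of prior earnings, another regains 100%
print(round(earnings_replacement_rate([(40000, 36000), (50000, 50000)]), 2))  # 95.0
```

A rate below 100 indicates that, on average, new jobs pay less than the jobs these workers lost, which is the situation the measure is designed to capture.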
Abbey Frank, Mikki Holmes, and Amanda Ahlstrand made significant contributions to this report. In addition, James Wright assisted in the study design and the national survey; Jessica Botsford and Richard Burkard provided legal support; and Patrick DiBattista and Barbara Alsip assisted in the message and report development.

U.S. General Accounting Office. Workforce Investment Act: Better Guidance Needed to Address Concerns Over New Requirements. GAO-02-72. Washington, D.C.: 2001.
U.S. General Accounting Office. Veterans' Employment and Training Service: Proposed Performance Measurement System Improved, But Further Changes Needed. GAO-01-580. Washington, D.C.: 2001.
U.S. General Accounting Office. Multiple Employment and Training Programs: Overlapping Programs Indicate Need for Closer Examination of Structure. GAO-01-71. Washington, D.C.: 2000.
U.S. General Accounting Office. Workforce Investment Act: Implementation Status and the Integration of TANF Services. GAO/T-HEHS-00-145. Washington, D.C.: 2000.

Congress passed the Workforce Investment Act (WIA) in 1998 to bring most federally funded employment and training services into a single one-stop center system. GAO assessed three programs that provide services through this system. States and localities have begun to implement the new performance measurement system for the three WIA-funded programs but report several challenges. States had to change the way they collected and reported performance data. They also faced challenges in implementing these measures due to their complexity and the resource demands created by new measures. Some developed new procedures to obtain access to sensitive records. The performance levels are of particular concern to state and local officials because failure to meet them can result in financial sanctions. As a result, states may be choosing to serve only those job seekers who are most likely to be successful. 
Even when fully implemented, performance measures may not provide a true picture of WIA-funded program performance because data are neither comparable across states nor timely. The measures include many of the indicators relevant to an employment and training program, such as getting and keeping jobs and increasing wages and skills. Although measures exist to gauge the performance of the three WIA-funded programs, there are no measures to gauge the performance of the one-stop system as a whole. At least 17 programs provide services through the one-stop system, and most have their own performance measures. Although these performance measures may be used for assessing outcomes for individual programs, they cannot be used to measure the success of the overall system.
Federal regulation is one of the basic tools of government. Agencies issue thousands of rules and regulations each year to implement statutes enacted by Congress. The public policy goals and benefits of regulations include, among other things, ensuring that workplaces, air travel, foods, and drugs are safe; that the nation’s air, water, and land are not polluted; and that the appropriate amount of tax is collected. The costs of these regulations are estimated to be in the hundreds of billions of dollars, and the benefits estimates are much higher. Given the size and impact of federal regulation, Congresses and Presidents have taken a number of actions to refine and reform the regulatory process within the past 25 years. In September 1980, RFA was enacted in response to concerns about the effect that federal regulations can have on “small entities,” defined by the Act as including small businesses, small governmental jurisdictions, and certain small not-for-profit organizations. As we have previously noted, small businesses are a significant part of the nation’s economy, and small governments make up the vast majority of local governments in the United States. However, there have been concerns that these small entities may be disproportionately affected by federal agencies’ regulatory requirements. RFA established the principle that agencies should endeavor, consistent with the objectives of applicable statutes, to fit regulatory and informational requirements to the scale of these small entities. RFA requires regulatory agencies—including the independent regulatory agencies—to assess the potential impact of their rules on small entities. 
Under RFA, an agency must prepare an initial regulatory flexibility analysis at the time a proposed rule is issued unless the head of the agency determines that the proposed rule would not have a “significant economic impact upon a substantial number of small entities.” Further, agencies must consider alternatives to their proposed rules that will accomplish the agencies’ objectives while minimizing the impacts on small entities. The Act also requires agencies to ensure that small entities have an opportunity to participate in the rulemaking process and requires the Chief Counsel for Advocacy of the Small Business Administration (Office of Advocacy) to monitor agencies’ compliance. Among other things, RFA also requires regulatory agencies to review, within 10 years of promulgation, existing rules that have or will have a significant impact on small entities to determine whether they should be continued without change or amended or rescinded to minimize their impact on small entities. Congress amended RFA with the Small Business Regulatory Enforcement Fairness Act of 1996 (SBREFA). SBREFA made certain agency actions under RFA judicially reviewable. Other provisions in SBREFA added new requirements. For example, SBREFA requires agencies to develop one or more compliance guides for each final rule or group of related final rules for which the agency is required to prepare a regulatory flexibility analysis, and it requires agencies to provide small entities with some form of relief from civil monetary penalties. SBREFA also requires the Environmental Protection Agency (EPA) and the Occupational Safety and Health Administration to convene advocacy review panels before publishing an initial regulatory flexibility analysis. More recently, in August 2002, President George W. 
Bush issued Executive Order 13272, which requires federal agencies to establish written procedures and policies on how they would measure the impact of their regulatory proposals on small entities and to vet those policies with the Office of Advocacy. The order also requires agencies to notify the Office of Advocacy before publishing draft rules expected to have a significant small business impact, to consider its written comments on proposed rules, and to publish a response with the final rule. The order requires the Office of Advocacy to provide notification of the requirements of the Act and training to all agencies on how to comply with RFA. The Office of Advocacy published guidance on the Act in 2003 and reported training more than 20 agencies on RFA compliance in fiscal year 2005. In response to congressional requests, we have reviewed agencies’ implementation of RFA and related requirements on many occasions over the years, with topics ranging from specific statutory provisions to the overall implementation of RFA. Generally, we found that the Act’s overall results and effectiveness have been mixed. This is not unique to RFA; we found similar results when reviewing other regulatory reform initiatives, such as the Unfunded Mandates Reform Act of 1995. Our past reports illustrated both the promise and the problems associated with RFA. RFA and related requirements have clearly affected how federal agencies regulate, and we identified important benefits of these initiatives, such as increasing attention on the potential impacts of rules and raising expectations regarding the analytical support for proposed rules. However, a recurring theme in our findings was that uncertainties about RFA’s requirements and varying interpretations of those requirements by federal agencies limited the Act’s application and effectiveness. 
Some of the topics we reviewed, and our main findings regarding impediments to RFA’s implementation, are illustrated in the following examples: We examined 12 years of annual reports from the Office of Advocacy and concluded that the reports indicated variable compliance with RFA across agencies, within agencies, and over time—a conclusion that the Office of Advocacy also reached in subsequent reports on implementation of RFA (on the 20th and 25th anniversaries of RFA’s enactment). We noted that some agencies had been repeatedly characterized as satisfying RFA requirements, but other agencies were consistently viewed as recalcitrant. Agencies’ performance also varied over time or varied by offices within the agencies. We said that one reason for agencies’ lack of compliance with RFA requirements was that the Act did not expressly authorize the Small Business Administration (SBA) to interpret key provisions and did not require SBA to develop criteria for agencies to follow in reviewing their rules. We examined RFA implementation with regard to small governments and concluded that agencies were not conducting as many regulatory flexibility analyses for small governments as they might, largely because of weaknesses in the Act. Specifically, we found that each agency we reviewed had a different interpretation of key RFA provisions. We also pointed out that RFA allowed agencies to interpret whether their proposed rules affected small governments and did not provide sufficiently specific criteria or definitions to guide agencies in deciding whether and how to assess the impact of proposed rules on small governments. We reviewed implementation of small business advocacy review panel requirements under SBREFA and found that the panels that had been convened were generally well received. 
However, we also said that implementation was hindered—specifically, that there was uncertainty over whether panels should have been convened for some proposed rules—by the lack of agreed-upon governmentwide criteria as to whether a rule has a significant impact. We examined other related requirements regarding agencies’ policies for the reduction and/or waiver of civil penalties on small entities and the publication of small entity compliance guides. Again, we found that implementation varied across and within agencies, with some of the ineffectiveness and inconsistency traceable to definitional problems in RFA. All of the agencies’ penalty relief policies that we reviewed were within the discretion that Congress provided, but the policies varied considerably. Some policies covered only a portion of agencies’ civil penalty enforcement actions, and some provided small entities with no greater penalty relief than large entities. The agencies varied in how key terms were defined. Similarly, we concluded that the requirement for small entity compliance guides did not have much of an impact, and its implementation also varied across, and sometimes within, agencies. RFA is unique among statutory requirements with general applicability in having a provision, under section 610, for the periodic review of existing rules. However, it is not clear that this look-back provision in RFA has been consistently and effectively implemented. In a series of reports on agencies’ compliance with section 610, we found that the required reviews were not being conducted. Meetings with agencies to identify why compliance was so limited revealed significant differences of opinion regarding key terms in RFA and confusion about what was required to determine compliance with RFA. 
At the request of the House Committee on Energy and Commerce, we have begun new work examining the subject of regulatory agencies’ retrospective reviews of their existing regulations, including those undertaken in response to Section 610, and will report on the results of this engagement in the future. We have not yet examined the effect of Executive Order 13272 and the Office of Advocacy’s subsequent guidance and training for agencies on implementing RFA. Therefore, we have not done any evaluations that would indicate whether or not those developments are helping to address some of our concerns about the effectiveness of RFA. While RFA has helped to influence how agencies regulate small entities, we believe that the full promise of the Act has not been realized. The results from our past work suggest that the Subcommittee might wish to review the procedures, definitions, exemptions, and other provisions of RFA, and related statutory requirements, to determine whether changes are needed to better achieve the purposes Congress intended. The central theme of our prior findings and recommendations on RFA has been the need to revisit and clarify elements of the Act, particularly its key terms. Although more recent developments, such as the Office of Advocacy’s detailed guidance to agencies on RFA compliance, may help address some of these long-standing issues, current legislative proposals, such as H.R. 682, make it clear that concerns remain about RFA’s effectiveness—for example, that agencies are not assessing the impacts of their rules or identifying less costly regulatory approaches as expected under RFA—and the impact of federal regulations on small entities. Unclear terms and definitions can affect the applicability and effectiveness of regulatory reform requirements. 
We have frequently cited the need to clarify the key terms in RFA, particularly "significant economic impact on a substantial number of small entities." RFA's requirements do not apply if an agency head certifies that a rule will not have a "significant economic impact on a substantial number of small entities." However, RFA neither defines this key phrase nor places clear responsibility on any party to define it consistently across the government. It is therefore not surprising, as I mentioned earlier, that we found compliance with RFA varied from one agency to another and that agencies had different interpretations of RFA's requirements. We have recommended several times that Congress provide greater clarity concerning the key terms and provisions of RFA and related requirements, but to date Congress has not acted on many of these recommendations. The questions that remain unresolved on this topic are numerous and varied, including the following:

- Does Congress believe that the economic impact of a rule should be measured in terms of compliance costs as a percentage of businesses' annual revenues, the percentage of work hours available to the firms, or other metrics? If so, what percentage or other measure would be an appropriate definition of "significant"?
- Should agencies take into account the cumulative impact of their rules on small entities, even within a particular program area?
- Should agencies count the impact of the underlying statutes when determining whether their rules have a significant impact?
- What should be considered a "rule" for purposes of the requirement in RFA that agencies review rules with a significant impact within 10 years of their promulgation?
- Should agencies review rules that had a significant impact at the time they were originally published, or only those that currently have that effect?
- Should agencies conduct regulatory flexibility analyses for rules that have a positive economic impact on small entities, or only for rules with a negative impact? 
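To make one of the metrics raised above concrete, the following sketch screens rules using compliance cost as a percentage of annual revenue. It is purely illustrative: RFA defines neither "significant" nor "substantial," so the 3 percent cost share and 20 percent affected share used here are invented thresholds, not thresholds drawn from the Act, any agency's guidance, or GAO's recommendations.

```python
def would_have_significant_impact(entities, cost_share=0.03, affected_share=0.20):
    """entities: (annual_revenue, estimated_compliance_cost) pairs for the
    small entities a rule would affect. Reads "significant" as compliance
    cost at or above cost_share of revenue, and "substantial number" as at
    least affected_share of entities -- both invented for illustration."""
    flags = [cost / revenue >= cost_share for revenue, cost in entities if revenue > 0]
    return bool(flags) and sum(flags) / len(flags) >= affected_share

# Two of three hypothetical firms bear costs above 3% of revenue
entities = [(100_000, 5_000), (200_000, 1_000), (50_000, 2_000)]
print(would_have_significant_impact(entities))  # True
```

The point of the sketch is that any such screen turns entirely on the chosen thresholds, which is precisely the definitional gap the unresolved questions above describe.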
It is worth noting that the Office of Advocacy's 2003 RFA compliance guide, while reiterating that RFA does not define certain key terms, nevertheless provides some suggestions on the subject. Citing parts of RFA's legislative history, the guidance indicates that exact standards for such definitions may not be possible or desirable and that the definitions should vary depending on the context of each rule and preliminary assessments of the rule's impact. For example, the guidance points out that "significance" can be seen as relative to the size of a business and its competitors, among other things. However, the guidance does identify factors that agencies might want to consider when making RFA determinations. In some ways, this mirrors other aspects of RFA, such as section 610, where Congress did not explicitly define a threshold for an agency to determine whether an existing regulation should be maintained, amended, or eliminated but rather identified the factors that an agency must consider in its reviews. We do not yet know whether or to what extent the guidance and associated training have helped agencies to clarify some of the long-standing confusion about RFA requirements and terms. Additional monitoring of RFA compliance may help to answer that question. Congress might also want to consider whether the factors that the Office of Advocacy suggested to help agencies define key terms and requirements are consistent with congressional intent or would benefit from having a statutory basis. I also want to point out the potential domino effect of agencies' determinations of whether or not RFA applies to their rules. This is related to the lack of clarity on key terms mentioned above, the potential for agencies to waive or delay analysis under RFA, and the limitation of RFA's applicability to only those rules for which there was a notice of proposed rulemaking. 
The impact of an agency head's determination that RFA is not applicable is not only that the initial and final regulatory flexibility analyses envisioned by the Act would not be done, but also that other related requirements would not apply. These requirements include, for example, the need for agencies to prepare small entity compliance guides, convene SBREFA advocacy panels, and conduct periodic reviews of certain existing regulations. While we recognize, as provided by the Administrative Procedure Act, that notices of proposed rulemaking are not always practical, necessary, or in the public interest, this still raises the question of whether such exemptions from notice and comment rulemaking should preclude future opportunities for public participation and other related procedural and analytical requirements. Our prior work has shown that substantial numbers of rules, including major rules (for example, those with an impact of $100 million or more), are promulgated without going through a notice of proposed rulemaking. We also believe it is important for Congress to reexamine, not just RFA, but how all of the various regulatory reform initiatives fit together and influence agencies' regulatory actions. As I previously testified before this Subcommittee, we have found the effectiveness of most regulatory reform initiatives to be limited, and these initiatives merit congressional attention. In addition, we have stated that this is a particularly timely point to reexamine the federal regulatory framework, because significant trends and challenges establish the case for change and the need to reexamine the base of the federal government and all of its existing programs, policies, functions, and activities. Our September 2000 report on EPA's implementation of RFA illustrated the importance of considering the bigger picture and the interrelationships between regulatory reform initiatives. 
On the one hand, we reported about concerns regarding the methodologies EPA used in its analyses and its conclusions about the impact on small businesses of a proposed rule to lower certain reporting thresholds for lead and lead compounds. The bigger picture, though, was our finding that after SBREFA took effect EPA’s four major program offices certified that almost all (96 percent) of their proposed rules would not have a significant impact on a substantial number of small entities. EPA officials told us this was because of a change in EPA’s RFA guidance prompted by the SBREFA requirement to convene an advocacy review panel for any proposed rule that was not certified. Prior to SBREFA, EPA’s policy was to prepare a regulatory flexibility analysis for any rule that the agency expected to have any impact on small entities. According to EPA officials, the SBREFA panel requirement made continuation of the agency’s more inclusive RFA policy too costly and impractical. In other words, a statute Congress enacted to strengthen RFA caused the agency to use the discretion permitted in RFA to conduct fewer regulatory flexibility analyses. In closing, I would reiterate that we believe Congress should revisit aspects of RFA and that our prior reports have indicated ample opportunities to refine the Act. Despite some progress in implementing RFA and other regulatory reform initiatives since 1980, it is clear from the introduction of H.R. 682 and related bills that Members of Congress remain concerned about the impact of regulations on small entities and the extent to which the rulemaking process encourages agencies to consider ways to reduce the burdens of new and existing rules, while still achieving the objectives of the underlying statutes. Mr. Chairman, this concludes my prepared statement. Once again, I appreciate the opportunity to testify on these important issues. 
I would be pleased to address any questions you or other Members of the Subcommittee might have at this time. If additional information is needed regarding this testimony, please contact J. Christopher Mihm, Managing Director, Strategic Issues, on (202) 512-6806 or at mihmj@gao.gov. Tim Bober, Jason Dorn, Andrea Levine, Latesha Love, Joseph Santiago, and Michael Volpe contributed to this statement.

Federal Rulemaking: Past Reviews and Emerging Trends Suggest Issues That Merit Congressional Attention. GAO-06-228T. Washington, D.C.: November 1, 2005.
Regulatory Reform: Prior Reviews of Federal Regulatory Process Initiatives Reveal Opportunities for Improvements. GAO-05-939T. Washington, D.C.: July 27, 2005.
Regulatory Flexibility Act: Clarification of Key Terms Still Needed. GAO-02-491T. Washington, D.C.: March 6, 2002.
Regulatory Reform: Compliance Guide Requirement Has Had Little Effect on Agency Practices. GAO-02-172. Washington, D.C.: December 28, 2001.
Federal Rulemaking: Procedural and Analytical Requirements at OSHA and Other Agencies. GAO-01-852T. Washington, D.C.: June 14, 2001.
Regulatory Flexibility Act: Key Terms Still Need to Be Clarified. GAO-01-669T. Washington, D.C.: April 24, 2001.
Regulatory Reform: Implementation of Selected Agencies’ Civil Penalty Relief Policies for Small Entities. GAO-01-280. Washington, D.C.: February 20, 2001.
Regulatory Flexibility Act: Implementation in EPA Program Offices and Proposed Lead Rule. GAO/GGD-00-193. Washington, D.C.: September 20, 2000.
Regulatory Reform: Procedural and Analytical Requirements in Federal Rulemaking. GAO/T-GGD/OGC-00-157. Washington, D.C.: June 8, 2000.
Regulatory Flexibility Act: Agencies’ Interpretations of Review Requirements Vary. GAO/GGD-99-55. Washington, D.C.: April 2, 1999.
Federal Rulemaking: Agencies Often Published Final Actions Without Proposed Rules. GAO/GGD-98-126. Washington, D.C.: August 31, 1998. 
Regulatory Reform: Implementation of the Small Business Advocacy Review Panel Requirements. GAO/GGD-98-36. Washington, D.C.: March 18, 1998.
Regulatory Reform: Agencies’ Section 610 Review Notices Often Did Not Meet Statutory Requirements. GAO/T-GGD-98-64. Washington, D.C.: February 12, 1998.
Regulatory Flexibility Act: Agencies’ Use of the October 1997 Unified Agenda Often Did Not Satisfy Notification Requirements. GAO/GGD-98-61R. Washington, D.C.: February 12, 1998.
Regulatory Flexibility Act: Agencies’ Use of the November 1996 Unified Agenda Did Not Satisfy Notification Requirements. GAO/GGD/OGC-97-77R. Washington, D.C.: April 22, 1997.
Regulatory Flexibility Act: Status of Agencies’ Compliance. GAO/GGD-94-105. Washington, D.C.: April 27, 1994.
Regulatory Flexibility Act: Inherent Weaknesses May Limit Its Usefulness for Small Governments. GAO/HRD-91-16. Washington, D.C.: January 11, 1991.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Federal regulation is one of the basic tools of government used to implement public policy. In 1980, the Regulatory Flexibility Act (RFA) was enacted in response to concerns about the effect that regulations can have on small entities, including small businesses, small governmental jurisdictions, and certain small not-for-profit organizations. Congress amended RFA in 1996, and the President issued Executive Order 13272 in 2002, to strengthen requirements for agencies to consider the impact of their proposed rules on small entities. However, concerns about the regulatory burden on small entities persist, prompting legislative proposals such as H.R. 
682, the Regulatory Flexibility Improvements Act, which would amend RFA. At the request of Congress, GAO has prepared many reports and testimonies reviewing the implementation of RFA and related policies. On the basis of that body of work, this testimony (1) provides an overview of the basic purpose and requirements of RFA, (2) highlights the main impediments to the Act's implementation that GAO's reports identified, and (3) suggests elements of RFA that Congress might consider amending to improve the effectiveness of the Act. GAO's prior reports and testimonies contain recommendations to improve the implementation of RFA and related regulatory process requirements. RFA established a principle that agencies should endeavor to fit their regulatory requirements to the scale of small entities. Among other things, RFA requires regulatory agencies to assess the impact of proposed rules on small entities, consider regulatory alternatives that will accomplish the agencies' objectives while minimizing the impacts on small entities, and ensure that small entities have an opportunity to participate in the rulemaking process. Further, RFA requires agencies to review existing rules within 10 years of promulgation that have or will have a significant impact on small entities to determine whether they should be continued without change or amended or rescinded to minimize their impact on small entities. RFA also requires the Chief Counsel for Advocacy of the Small Business Administration (Office of Advocacy) to monitor agencies' compliance. In response to Executive Order 13272, the Office of Advocacy published guidance in 2003 on how to comply with RFA. In response to congressional requests, GAO reviewed agencies' implementation of RFA and related requirements on many occasions, with topics ranging from specific statutory provisions to the overall implementation of RFA. 
Generally, GAO found that the Act's results and effectiveness have been mixed; its reports illustrated both the promise and the problems associated with RFA. On one hand, RFA and related requirements clearly affected how federal agencies regulate and produced benefits, such as raising expectations regarding the analytical support for proposed rules. However, GAO also found that compliance with RFA varied across agencies, within agencies, and over time. A recurring finding was that uncertainties about RFA's requirements and key terms, and varying interpretations by federal agencies, limited the Act's application and effectiveness. GAO's past work suggests that Congress might wish to review the procedures, definitions, exemptions, and other provisions of RFA to determine whether changes are needed to better achieve the purposes Congress intended. In particular, GAO's reports indicate that the full promise of RFA may never be realized until Congress revisits and clarifies elements of the Act, especially its key terms, or provides an agency or office with the clear authority and responsibility to do so. Attention should also be paid to the domino effect that an agency's initial determination of whether RFA is applicable to a rulemaking has on other statutory requirements, such as preparing compliance guides for small entities and periodically reviewing existing regulations. GAO also believes that Congress should reexamine not just RFA but how all of the various regulatory reform initiatives fit together and influence agencies' regulatory actions. Recent developments, such as the Office of Advocacy's RFA guidance, may help address some of these long-standing issues and merit continued monitoring by Congress.
Although VA has been authorized to collect third-party health insurance payments since 1986, it was not allowed to use these funds to supplement its medical care appropriations until enactment of the Balanced Budget Act of 1997. Part of VA’s 1997 strategic plan was to increase health insurance payments and other collections to help fund an increased health care workload. The potential for increased workload occurred in part because the Veterans’ Health Care Eligibility Reform Act of 1996 authorized VA to provide certain medical care services not previously available to veterans without service-connected disabilities or low incomes. VA expected that collections from third-party payments, copayments, and deductibles would cover the majority of costs for higher-income veterans without service-connected disabilities. These veterans increased from about 4 percent of all veterans treated in fiscal year 1996 to about 20 percent in fiscal year 2001. To collect from health insurers, as shown in figure 1, VA uses five related processes to manage the information needed to bill and collect. The patient intake process involves gathering insurance information and verifying that information with the insurer. The medical documentation process involves properly documenting the health care provided to patients by physicians and other health care providers. The coding process involves assigning correct codes for the diagnoses and medical procedures based on the documentation. Next, the billing process creates and sends bills to insurers based on the insurance and coding information. Finally, the accounts receivable process includes processing payments from insurers and following up with insurers on outstanding or denied bills. In 1999, VA adopted a new fee schedule, called “reasonable charges”: itemized fees, based on diagnoses and procedures, that allow VA to bill in a way that more accurately captures the care provided. 
Previously, VA had nine charges for inpatient care and one charge for outpatient care. These charges were not specific to the care provided. For example, VA had charged the same per diem rate for any patient in a surgical bed section regardless of the care provided. In addition, before adopting the new fee schedule, VA had billed all outpatient visits, including surgery, based on VA’s average outpatient cost of $229, a single rate that had limited VA’s ability to collect higher amounts for more expensive care. In contrast, when the reasonable charges fee schedule was adopted in September 1999, an outpatient hernia surgery charge increased to about $6,500; and an office visit charge for an established patient decreased to a range of about $22 to $149, depending on the care given. By linking charges to the care provided, VA created new bill-processing demands—particularly in the three areas of documenting care, coding that care, and processing bills per episode of care. First, VA must be prepared to provide an insurer supporting medical documentation for the itemized charges. Second, VA must accurately assign medical diagnoses and procedure codes to set appropriate charges, a task which requires coders to search through medical documentation and various databases to identify all billable care. Third, in contrast to a single bill for an episode of care under the previous fee schedule, under reasonable charges VA must prepare a separate bill for each provider involved in the care and an additional bill if a hospital facility charge applies. For fiscal year 2002, VA collected third-party payments of $687 million, a 32 percent increase over its fiscal year 2001 collections. The increased collections in fiscal year 2002 resulted from VA’s submitting and collecting for more bills than previously. 
According to three network revenue managers we interviewed, billings increased mainly because of a reduction of billing backlogs and improvements in the processes necessary to collect under the new reasonable charges fee schedule. Nevertheless, VA’s ability to collect was limited by problems such as missed billing opportunities. VA does not know how many dollars remain uncollected because of such limitations. For fiscal year 2002, VA collected $687 million, up 32 percent compared to the $521 million collected during fiscal year 2001. The increased collections reflected VA’s processing a higher volume of bills than it did in the prior fiscal year. VA processed and received payments for over 50 percent more bills in fiscal year 2002 than in fiscal year 2001. VA’s collections grew at a lower percentage rate than the number of paid bills because the average payment per paid bill dropped 18 percent compared to the prior fiscal year. Average payments dropped primarily because a rising proportion of VA’s paid bills were for outpatient care rather than inpatient care. Since the charges for outpatient care were much lower on average, the payment amounts were typically lower as well. VA had difficulties establishing the collections processes to bill under a new fee schedule, processes which were necessary to achieve the increased billing and collections in fiscal year 2002. Although VA anticipated that the shift to reasonable charges would yield higher collections, collections dropped in fiscal year 2000 after implementing the new fee schedule in September 1999. VA attributed that drop to its being unprepared to bill under reasonable charges, particularly because of its lack of proficiency in developing medical documentation and coding to appropriately support a bill. As a result, VA reported that many VA medical centers developed billing backlogs after initially suspending billing for some care. 
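The arithmetic behind these figures can be checked directly: total collections equal the number of paid bills times the average payment per paid bill. A minimal sketch in Python, using only the rounded figures reported above (because the 18 percent drop in average payment is itself a rounded figure, the implied bill-volume growth is approximate):

```python
# Figures reported for VA third-party collections (rounded, in dollars).
fy2001_collections = 521_000_000
fy2002_collections = 687_000_000

# Year-over-year growth in collections: roughly 32 percent.
growth = fy2002_collections / fy2001_collections - 1
print(f"Collections growth: {growth:.1%}")

# Collections growth decomposes into paid-bill volume growth and the
# change in average payment per paid bill (reported as an 18 percent drop).
avg_payment_change = -0.18
implied_volume_growth = (1 + growth) / (1 + avg_payment_change) - 1
print(f"Implied paid-bill volume growth: {implied_volume_growth:.0%}")
```

Run as-is, this reproduces the roughly 32 percent growth in collections and implies a paid-bill volume increase of about 60 percent, consistent with the report's statement that VA received payment for over 50 percent more bills.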
As shown in figure 2, VA’s third-party collections increased in fiscal year 2001—reversing fiscal year 2000’s drop in collections—and increased again in fiscal year 2002. After initially being unprepared in fiscal year 2000 to bill reasonable charges, VA began improving its implementation of the processes necessary to bill and increase its collections. According to VA, by the summer of 2000, facilities had sufficiently implemented processes to move forward with billing under reasonable charges. By the end of fiscal year 2001, VA had submitted 37 percent more bills to insurers than in fiscal year 2000. VA submitted even more in fiscal year 2002, over 8 million bills that constituted a 54 percent increase over the number in fiscal year 2001. VA officials cited various sources for an increased number of bills in fiscal year 2002. Managers we spoke with in three networks—Network 2 (Albany), Network 9 (Nashville), and Network 22 (Long Beach)—mainly attributed the increased billing to reductions in billing backlogs. They also cited an increased number of patients with billable insurance as a factor for the increased billing. In addition, a May 2001 change in the reasonable-charges fee schedule for medical evaluations allowed billing for a facility charge in addition to billing for the professional service charges, a change that contributed to the higher volume of bills in fiscal year 2002. Increased collections for fiscal year 2002 reflected VA’s improved ability to manage the volume and billing processes required to produce multiple bills under reasonable charges, according to three network revenue managers. Networks 2 (Albany) and 9 (Nashville) reduced backlogs, in part by hiring more staff, contracting for staff, or using overtime to process bills and accounts receivable. Network 2 (Albany), for instance, managed an increased billing volume through mandatory overtime for billers. 
Managers we interviewed in all three networks noted better medical documentation provided by physicians to support billing. In Network 22 (Long Beach) and Network 9 (Nashville), revenue managers reported coders were getting better at identifying all professional services that can be billed under reasonable charges. In addition, the revenue manager in Network 2 (Albany) said that billers’ productivity had risen from 700 to 2,500 bills per month over a 3-year period, as a result of gradually increasing productivity standards and a streamlining of their jobs to focus solely on billing. Studies have suggested that operational problems—missed billing opportunities, billing backlogs, and inadequate pursuit of accounts receivable—limited VA’s collections in the years following the implementation of reasonable charges. After examining activities in fiscal years 2000 and 2001, a VA Inspector General report estimated that VA could have collected over $500 million more than it did. About 73 percent of this uncollected amount was attributed to a backlog of unbilled medical care; most of the rest was attributed to insufficient pursuit of delinquent bills. Another study, examining only professional-service charges in a single network, estimated that $4.1 million out of $4.7 million of potential collections was unbilled for fiscal year 2001. Of that unbilled amount, 63 percent was estimated to be unbillable primarily because of insufficient documentation. In addition, the study found coders often missed services that should have been coded for billing. According to the Director of the Revenue Office, VA could increase collections by working on operational problems. These problems included unpaid accounts receivable and missed billing opportunities due to insufficient identification of insured patients, inadequate documentation to support billing, and coding problems that result in unidentified care. 
During April through June 2002, three network revenue managers told us about backlogs and processing issues that persisted into fiscal year 2002. For example, although Network 9 (Nashville) had above average increases in collections for both inpatient and outpatient care, it still had coding backlogs in four of six medical centers. According to Network 9’s (Nashville) revenue manager, eliminating the backlogs for outpatient care would increase collections by an estimated $4 million for fiscal year 2002, or 9 percent. Additional increases might come from coding all inpatient professional services, but the revenue manager did not have an estimate because the extent to which coders are capturing all billable services was unknown. Moreover, although all three networks reported that physicians’ documentation was improving for billing, they reported a continuing need to improve physicians’ documentation. In addition, Network 22 (Long Beach) reported that accounts receivable staff had difficulties keeping up with the increased volume of bills because it had not hired additional staff or contracted help for accounts receivable. As a result of these operational limitations, VA lacks a reliable estimate of uncollected dollars, and therefore does not have the basis to assess its systemwide operational effectiveness. Some uncollected dollars resulting from currently missed billing opportunities—such as billable care missed in coding—are not readily quantified. Other uncollected dollars—such as those from backlogged bills and uncollected accounts receivable—are either only partially quantifiable or their potential contribution to total collections is uncertain. For example, even though the uncollected dollars in older accounts receivable can be totaled, the yield in payments through more aggressive pursuit of accounts receivable is uncertain. 
This is because, according to VA officials, some portion of the billed dollars is not collectable due to VA inappropriately billing for services not covered by the insurance policy, billing against a terminated policy, or not closing out the accounts receivable after an insurer paid the bill. VA continues to implement its 2001 improvement plan and is planning more improvements. Although the improvement plan could potentially improve operations and increase collections, it is not scheduled for full implementation until December 2003. In May 2002, VA created a new office in VHA, the Chief Business Office, in part to address collections issues. According to VA officials, this office is developing a new approach to improvements, which will include initiatives beyond those in the improvement plan. VA’s improvement plan was designed to increase collections by improving and standardizing its collections processes. The plan’s 24 actions are to address known operational problems affecting revenue performance. These problems include unidentified insurance for some patients, insufficient documentation for billing, shortages of coding staff, gaps in the automated capture of billing data, insufficient pursuit of accounts receivable, and uneven performance across collections sites. The plan seeks increased collections through standardization of policy and processes in the context of decentralized management, in which VA’s 21 network directors and their respective medical center directors have responsibility for the collections process. Since management is decentralized, collections procedures can vary across sites. For example, sites’ procedures can specify a different number of days waited until first contacting insurers about unpaid bills and can vary on whether to contact by letter, telephone, or both. 
The plan intends to create greater process standardization, in part, by requiring certain collections processes, such as the use of electronic medical records to provide coders better access to documentation and legible records. When fully implemented, the plan’s actions can improve collections to the extent that they can reduce operational problems such as missed billing opportunities. For example, two of the plan’s actions—requiring patient contacts prior to scheduled appointments to gather insurance information and electronically linking VA to major insurers to identify patients’ insurance—are intended to increase the number of patients identified with insurance. A recent study estimated that 23.8 percent of VA patients in fiscal year 2001 had billable care, but VA actually billed for the care of only 18.3 percent of patients. This finding suggests that VA could have billed for 30 percent more patients than it actually billed. VA has implemented some of the improvement plan’s 24 actions, which were scheduled for completion at various times through 2003, but is behind the plan’s original schedule. The plan had scheduled 15 of the 24 actions for completion through May 25, 2002, but, as shown in figure 3, VA had designated only 8 as completed, as of the last formal status report on the plan in May 2002. Some of the plan’s actions that VA has designated as completed needed additional work. For example, although VA designated electronic billing as completed in the May 2002 report, in August 2002 a VA official indicated that 20 hospitals were still working on a step required to transmit bills to all payers. In other cases, VA has designated an action completed by mandating it in a memorandum or directive. However, mandating an action in the past has not necessarily ensured its full implementation. 
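The 30 percent figure follows directly from the two percentages in the cited study. A quick illustrative check (using only the study's rounded percentages, so the result is approximate):

```python
# Shares of FY2001 VA patients, per the study cited above (rounded).
billable_share = 0.238  # patients with billable care
billed_share = 0.183    # patients VA actually billed for

# Patients VA could have billed, relative to those it did bill:
# 0.238 / 0.183 - 1, or about 30 percent more patients.
missed_ratio = billable_share / billed_share - 1
print(f"VA could have billed about {missed_ratio:.0%} more patients")
```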
For example, although an earlier 1998 directive required patient preregistration, the 2001 improvement plan reported that preregistration was not implemented consistently across VA and thus mandated its use. Officials in VHA’s new Chief Business Office told us that this office is developing a new approach for improving third-party collections. According to the Chief Business Officer, the Under Secretary of Health proposed, and the Secretary approved, the establishment of the Chief Business Office to underscore the importance of revenue, patient eligibility, and enrollment functions; and to give strategic focus to improving these functions. He said the new approach can help increase revenue collections by further revising processes and providing a new business focus on collections. Officials in this office told us that the new approach will combine these and other actions with the actions in the improvement plan. For example, the Chief Business Office’s improvement strategy incorporates electronic transmission of bills and of third-party payments, which are part of the 2001 improvement plan. The new approach also encompasses initiatives beyond the improvement plan, such as the one in the Under Secretary of Health’s May 2002 memorandum that directed all facilities to refer accounts receivable older than 60 days to a collection agency, unless a facility can document a better in-house process. According to the Deputy Chief Business Officer, this initiative has shown some sign of success, with outstanding accounts receivable dropping from $1,378 million to $1,317 million from the end of May to the end of July 2002, a reduction of about $61 million or 4 percent. Another initiative in the new approach is the Patient Financial Services System (PFSS). PFSS is an automated financial system focused on patient accounts, which is intended to overcome operational problems in VA’s current automated billing system. 
For example, VA’s automated system for clinical information was not designed to provide all of the episode-of-care information, such as the health care provider and diagnoses, that are required for billing. The development of PFSS is tied to a demonstration project of a financial system, which is now being designed for Network 10 (Cincinnati). According to the Deputy Chief Business Officer, VA anticipates awarding a PFSS contract by April 1, 2003. The Chief Business Office’s plan is to install this financial system in other facilities and networks if it is successfully implemented in the Network 10 (Cincinnati) demonstration. The Chief Business Office also intends to improve collections by developing better performance measures, which will be similar to those used in the private sector. For example, the office intends to use the measure of gross days revenue outstanding, which indicates the pace of collections relative to the amount of accounts receivable. During fiscal year 2003, the office plans to hold network and facility directors accountable for collections through standards that are tied to these performance measures. In addition, the Chief Business Officer said that tracking performance with these measures could help identify further opportunities for collections improvements. The Chief Business Office is developing the new initiatives, which it had not formalized into a planning document as of August 2002. Certain key decisions were under consideration. For example, the Chief Business Officer was considering whether to centralize some processes at the network or national level and was developing the performance standards that would be required for holding network and facility directors accountable. Moreover, according to the Chief Business Officer, implementing PFSS will require the office to resolve some issues, including making its existing systems provide sufficient data to support the new financial system. 
As VA faces increased demand for medical care, third-party collections for nonservice-connected conditions remain an important source of alternative revenue to supplement VA’s resources. Our work and VA’s continuing initiatives to improve collections suggest that VA could collect additional supplemental revenues because it has not collected all third-party payments that it could in fiscal year 2002. However, VA does not have an estimate of the amount of uncollected dollars, which it needs to assess the effectiveness of its current processes. VA has been improving its billing and collecting under the new fee schedule established in 1999, but VA has not completed its efforts to address problems in collections operations. In this regard, fully implementing VA’s 2001 improvement plan could help VA maximize future collections by addressing problems such as missed billing opportunities. However, the plan’s reliance on directives, in some cases, to achieve increased collections is not enough to ensure full implementation and optimal performance. The Chief Business Office’s new approach could also enhance collections. VA’s new Chief Business Office’s challenge is to ensure such performance by identifying root causes of problems in collections operations, providing a focused approach to addressing the root causes, establishing performance measures, and holding responsible parties accountable for achieving the performance standards. However, it is too early to evaluate the extent to which VA will be able to address operational problems and further increase collections by fully implementing its 2001 plan and new approach. The Department of Veterans Affairs provided written comments on a draft of this report, which are found in appendix II. VA generally agreed with our findings that it continues to make improvements in increased collections and VHA’s Business Office is developing new initiatives to further enhance collections. 
In addition, VA clarified that it may, under limited circumstances, collect from Medicare although generally it may not do so. We changed our report accordingly. VA also suggested that our title was misleading because it stated that VA continues to address problems in its collections operations rather than stating that VA is building infrastructure to implement effective collections operations. We believe that our title is accurate because VA continues to address problems that we and others have identified in VA’s collections operations. VA has acknowledged such problems in the past, including unidentified insurance for some patients, insufficient documentation for billing, and shortages of coding staff. VA continues to implement the 2001 improvement plan that it developed to address these and other problems. VA’s new initiatives also address problems, such as gaps in automated capture of billing data, that have been previously identified. As arranged with your office, unless you release its contents earlier, we plan no further distribution of this report until 30 days after its issuance date. At that time, we will send copies of this report to the Secretary of Veterans Affairs, interested congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-7101. James Musselwhite and Terry Hanford also contributed to this report. To assess the Department of Veterans Affairs’ (VA’s) progress with collections in fiscal year 2002, we obtained and examined data on VA’s third-party bills and collections. We also interviewed officials in VA headquarters and in three VA health care networks to understand the reasons for the increased collections compared to fiscal year 2001 as well as any operational problems. 
To provide information on VA’s 2001 improvement plan and its emerging new approach to improvements, we reviewed relevant VA documents and interviewed VA officials. We conducted interviews from April through June 2002 with managers in three networks concerning collections. In addition, we gathered data on third-party bills and collections for fiscal years 2001 and 2002. Our review of the improvement plan’s implementation started with its completion status in March 2002 and ended with its status through May 2002, which was the date of VA’s last formal report on the plan’s status. An official in the Chief Business Office told us in September 2002 that a new status report for the 2001 plan was planned but not yet available. During August through October 2002, we gathered information from officials in the Chief Business Office about additional improvement initiatives. To better understand the increased collections in fiscal year 2002 and any limitations to those collections, we judgmentally selected three networks for more detailed study. These networks provided different examples of above- and below-average growth in collections, considered separately for inpatient care and outpatient care. (See table 1.) Based on available data at the time of selection, Network 9 (Nashville) had above-average collections increases for both inpatient care and outpatient care bills. Network 2 (Albany) exceeded average collections increases for only outpatient care bills, whereas Network 22 (Long Beach) had above-average collections increases for only inpatient care bills. Although we did not verify VA data on collections and bills, we used the data reported by VA’s Revenue Office in its analyses of third-party collections. The data source is VA’s Veterans Health Information Systems and Technology Architecture National Database. 
This database includes data for collections from various sources—including third-party payments, patient copayments, and proceeds from sharing agreements in which VA sells services to the Department of Defense and other providers. 

The Department of Veterans Affairs (VA) collects health insurance payments, known as third-party collections, for veterans' health care conditions it treats that are not a result of injuries or illnesses incurred or aggravated during military service. In September 1999, VA adopted a new fee schedule, called "reasonable charges," that it anticipated would increase revenues from third-party collections. In 2001, GAO testified that problems in VA's collections operations diminished VA's collections. For this report, GAO was asked to examine VA's third-party collections and problems in collections operations for fiscal year 2002 as well as its initiatives to improve collections. VA's fiscal year 2002 third-party collections rose by 32 percent, continuing an upward trend that began in fiscal year 2001. The increase in collections reflected VA's improved ability to manage the larger billing volume and more itemized bills required under its new fee schedule. Billings increased mainly due to a reduction of billing backlogs and improved collections processes--such as better medical documentation prepared by physicians, more complete identification of billable care by coders, and more bills prepared per biller--according to VA managers in three regional health care networks. However, VA continues to address operational problems, such as missed billing opportunities, that limit the amount VA collects. To address operational problems and further increase collections, VA has several initiatives under way and is developing additional ones. 
VA has been implementing initiatives in its 2001 improvement plan, which was designed to address operational problems such as unidentified insurance for some patients, insufficient documentation of services for billing, shortages of coding staff, and insufficient pursuit of accounts receivable. VA's last formal status report, in May 2002, designated as completed only 8 of the plan's 15 initiatives scheduled for completion by that time. VA continues implementation of this plan and is also developing new initiatives, such as an automated financial system to better serve billing needs. It is too early to evaluate the extent to which VA's full implementation of its 2001 plan and new initiatives will address operational problems and further increase third-party collections. In commenting on a draft of this report, VA generally agreed with our findings. 
BRAC 2005 was the fifth round of decisions designed to streamline the nation’s defense infrastructure. Unlike past BRAC rounds, which have generally focused on reducing excess physical infrastructure, this round also presents military growth challenges for DOD, states, and local governments. Its implementation will increase the numbers of on-base personnel, military families, and defense-related contractors at and near 18 military bases. Furthermore, because the BRAC realignments must, by law, be completed by September 15, 2011, these community changes will be rapid, as personnel will arrive quickly once the bases are readied. Figure 1 shows the 18 bases where BRAC growth will affect neighboring communities. Other military growth communities exist, but their growth is not a result of BRAC. While BRAC 2005 is taking place, other major initiatives will increase growth at and near some BRAC-affected bases. These include two major military reorganizations. First, the Global Defense Posture Realignment initiative will move about 70,000 military and civilian personnel from overseas to U.S. bases by 2011 to better support current strategies and address emerging threats. Second, the Army’s force modularity effort will restructure the Army from a division-based force to a more readily deployable modular, brigade-based force. Some of these brigade units will relocate to other existing bases. A third initiative, Grow the Force, is not a reorganization but will increase the permanent strength of the military to enhance overall U.S. forces. This initiative will add about 74,000 soldiers and about 27,000 marines. Finally, troop drawdowns from Iraq could increase personnel numbers at some BRAC-affected bases. These other military initiatives will also be implemented over a longer time frame than BRAC decisions, which are scheduled to be completed in 2011. 
Though not a major force initiative, DOD’s enhanced use lease (EUL) activities will also affect growth and development in military communities. EULs allow the military to lease its land to private developers to build offices and other facilities that generate operating income for the military. In some cases, the growth from EUL activities may exceed the BRAC- related growth. For example, the EUL at Fort Meade, which is planned to include up to 2 million square feet of office space, could house up to 10,000 new workers by 2013. This EUL activity will generate more new jobs in the Fort Meade area than the 6,600 additional military and civilian DOD personnel attributable to BRAC. Because all these initiatives are taking place at the same time, the forces driving growth at military bases and the surrounding communities are more complex than they would be if they were the result of BRAC decisions alone. As table 1 indicates, six of the eight bases we visited expect to be affected by various defense initiatives in addition to BRAC. During fiscal years 2006 through 2012, the populations of the communities in the vicinity of the 18 BRAC bases identified in figure 1 are expected to increase by an estimated 181,800 military and civilian personnel, plus an estimated 173,200 dependents, for a total increase of about 355,000 persons, as shown in table 2. At two bases, Fort Bliss and Fort Belvoir, DOD has estimated that the on-base populations alone will more than double. In addition, defense-related contractors who follow and settle near the relocated commands will compound the growth and traffic near some bases, and the impact of these contractor relocations is not reflected in the military growth figures. For example, at Fort Meade, Maryland, DOD has estimated that an additional 10,000 contractor personnel may relocate near to or on the base. 
OEA is DOD’s primary source for assisting communities adversely affected by defense program changes, including base closures or realignments. OEA provides growth management planning grants, guidance, and expertise to help growth communities facing significantly adverse consequences as a result of BRAC decisions. OEA has identified those communities that are expected to be impacted by BRAC-related growth and that have expressed a need for planning assistance. As part of this assistance, OEA has provided support to communities to hire planners or consultants to perform studies identifying infrastructure needs created by military growth. Additionally, DOD’s Defense Access Roads (DAR) Program may allow Military Construction funds to help address highway needs created by military activities. The focus of DAR is not typical traffic growth, which should be addressed through normal federal, state, and local transportation programs, but rather unusual changes driven by military necessity. National security is one of the explicit goals of the Federal-Aid Highway Program; however, DOT does not have special programs to deal with military growth. Nevertheless, many federal transportation grant programs provide state and local governments with funding that they can use to help address BRAC-related transportation challenges. The Federal-Aid Highway program consists of seven core formula grant programs and several smaller formula and discretionary grant programs. Broad flexibility provisions allow states to transfer funds between core programs and also to eligible transit projects. Federal capital transit programs include formula grants to transit agencies and states. Additionally, transit capital investment grants provide discretionary funds for the construction and extension of fixed-guideway systems such as rail or bus rapid transit lines. 
Federal transportation programs also require states to set their own priorities for addressing transportation needs. The impact of traffic growth can be analyzed in terms of the effect that additional vehicles have on traffic flow. Generally, traffic flow on roadways is measured by “level of service,” a qualitative grading system. The Transportation Research Board defines service levels for roadways using “A through F” grades. Service level “A” describes roadways with no delays and unimpeded traffic flow at posted speed limits. Service level “F” is a failing service level and describes roadways with traffic conditions that most drivers consider unacceptable. Drivers on these roadways experience long delays and poor to nonexistent traffic flow. Even small increases in traffic can have a large impact when roads are already congested. Affected communities expect BRAC and other military growth initiatives to have a significant impact on local transportation. In response to an OEA survey, nearly all BRAC growth communities identified transportation as a top growth challenge. Transportation studies done in communities of varying size show how BRAC-related growth is expected to result in a deterioration of traffic conditions. Affected communities identified about $2 billion in expected costs for transportation projects that they consider needed to address military growth in the near term, before the September 2011 deadline. The costs of longer-term projects to address the impact of military growth on transportation in these communities beyond the BRAC deadline are uncertain. Many communities affected by BRAC growth recognize that changes resulting from that growth will place additional demands on their transportation systems. In 2007, OEA asked growth communities, including 18 current BRAC growth communities, to determine which of the problems they would face as a result of military growth would create the greatest challenges. 
Of 18 current BRAC-growth communities, 17 identified transportation as one of their top three priorities. These 17 communities ranged in size from very large metropolitan areas to relatively small communities, and the extent of the impact depended in part on the size of the affected community. Some BRAC growth bases are located in metropolitan areas with populations of well over 1 million. In these areas, the military growth may be small relative to the community’s total population, but the community nevertheless anticipates localized effects on already congested urban roadways. At the National Naval Medical Center, for example—a BRAC growth base located in Bethesda, Maryland, a densely populated Washington suburb—a planned consolidation with Walter Reed Army Medical Center, located in Washington, D.C., will create additional traffic not only from 2,500 additional hospital employees, but also from patients and visitors, resulting in an estimated 1,900 additional trips to the hospital campus per day. While small compared to the regional population, these additional employees, patients, and visitors will travel to the base either on the Washington Metrorail system or by bus or auto on an already congested roadway system. The medical center is located near two major arterial roads, two state highways, and an Interstate highway (I-495, the Capital Beltway). It is also located across from the National Institutes of Health, where over 18,000 personnel are employed. According to Maryland transportation planners, the additional traffic resulting from the BRAC action will lead to further deterioration of traffic conditions in the area. Specifically, without intersection improvements, the number of intersections with failing conditions is projected to increase from three to five. In addition, traffic conditions may deteriorate at 10 other intersections, but not to the point of failure. 
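The A-through-F level-of-service grading used in these traffic analyses can be illustrated with a small sketch. The volume-to-capacity thresholds below are hypothetical round numbers chosen only for illustration; the Transportation Research Board's actual definitions vary by facility type and rely on measures such as delay and traffic density.

```python
# Illustrative sketch of the A-F "level of service" grading described in
# this report. The volume-to-capacity (v/c) thresholds are hypothetical
# round numbers for illustration, not the Transportation Research
# Board's actual definitions.

def level_of_service(volume: float, capacity: float) -> str:
    """Return an A-F grade for a roadway given hourly volume and capacity."""
    vc = volume / capacity
    # (upper v/c bound, grade); "F" means demand exceeds capacity.
    thresholds = [(0.35, "A"), (0.55, "B"), (0.75, "C"),
                  (0.90, "D"), (1.00, "E")]
    for bound, grade in thresholds:
        if vc <= bound:
            return grade
    return "F"

# An intersection already near capacity: even a modest BRAC-related
# increase can push it from a passing grade to failing.
capacity = 4000                                   # vehicles/hour (hypothetical)
before = level_of_service(3500, capacity)         # v/c = 0.875 -> "D"
after = level_of_service(3500 + 700, capacity)    # v/c = 1.05  -> "F"
print(before, after)
```

The last two lines show why the report stresses that small traffic increases matter most on roads that are already congested: a 20 percent increase moves this hypothetical intersection from service level "D" directly to "F."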
Traffic analyses done for DOD as part of an environmental impact statement (EIS) reviewed 27 major intersections in the vicinity and estimated that with no improvements, the increases in traffic would result in failing or deteriorating service levels at 15 of those intersections during peak periods, compared with current conditions. Such declining service levels mean significant delays will occur, likely increasing base employees’ and others’ commute times. Fort Belvoir is located in Fairfax County, Virginia, where employment and development have grown rapidly and transportation improvements have not kept pace with growth. The planned net addition of 24,100 personnel at the base will increase congestion on the already congested Interstate highway (I-95). Local planners anticipate additional BRAC-related congestion on a number of other nearby Interstate, federal, and local highways (I-395, I-495, U.S. Route 1, and the Fairfax County Parkway). The physical layout of Fort Belvoir also complicates commuter access, in that the base is situated on two major land parcels—the main post and the Engineer Proving Ground—separated by a busy highway (see fig. 2). In addition, gate and road closures after the September 11, 2001, terrorist attacks have already concentrated traffic near the base. The BRAC Fort Belvoir EIS estimated that, with the planned increase in personnel, the number of failing intersections near the base would increase from 2 to 6 during the morning peak period, and the level of service would deteriorate by at least one level at 13 intersections. Traffic and development density problems at Fort Belvoir identified during the environmental review process were so severe that DOD decided to acquire and develop an additional site, at an estimated cost of $1.2 billion, to accommodate about 6,400 employees of DOD’s Washington Headquarters Services and additional organizations. 
DOD officials told us, for example, that they would have had to construct a parking structure separate from the potential office site on the other side of U.S. Route 1, as well as an additional pedestrian bridge structure across the highway, estimated to cost $90 million. Army officials also determined that the existing Engineer Proving Ground location at Fort Belvoir was not large enough to accommodate office space and parking for so many additional personnel. However, even with the acquisition of the new site, congestion will grow on roadways near the current base, and local officials estimate that initial transportation improvements to address the impact of growth, including an additional access ramp to Interstate 95, could cost as much as $458 million. Over the longer term, state and local officials expect the costs of transportation improvements to address congestion to be much higher. Fort Meade, Maryland, located in the corridor between Washington, D.C., and Baltimore, is also located in a region of significant growth. Traffic delays are already prevalent at many intersections near the base, where drivers have few roadway alternatives, and county officials expect the growth at Fort Meade to exacerbate these conditions. Given the planning cycle for major highway construction and the state’s large backlog of transportation projects, the state will likely be precluded from addressing these needs before BRAC 2005 actions are completed. The EIS concluded that significant adverse effects on area roadways would be expected during and after 2011. For example, it concluded that the growth at Fort Meade would cause failing traffic conditions on 12 sections of road near the base, potentially resulting in significant delays. The effects of BRAC decisions, however, cannot be isolated from the effects of other transportation challenges that the region around Fort Meade will face, especially the challenges resulting from the construction of an EUL facility at the base. 
This facility is designed to include about 2 million square feet of office space and could house up to 10,000 new workers by 2013. EUL activities could generate more new jobs in the Fort Meade area than the military growth initiatives that are scheduled to bring about 7,000 additional military and civilian DOD personnel to the area. Although Maryland transportation planners have not separately estimated the effects of BRAC and the EUL on transportation, they said that the EUL is planned to be constructed at about the same time as the BRAC decision is to be implemented, and they expected the EUL to contribute significantly to the new traffic. Finally, Aberdeen Proving Ground, Maryland, consists of about 72,000 acres—including 33,000 acres of water—primarily within Harford County in northeastern Maryland, north of Baltimore. The base is located on the northwestern shore of Chesapeake Bay, and most of the base is located on two peninsulas—one to the north and one to the south. The number of military and civilian personnel working at the base is scheduled to increase by about 3,400 through 2012. According to Army officials, the Army also has entered into an EUL agreement with a developer to build up to 3 million square feet of office space within the base for up to 3,000 additional workers. Transportation planners expect this growth to aggravate traffic conditions on area roadways, which include a major Interstate highway, federal and state highways, and county roads. For example, the EIS completed for this base examined 17 off-post intersections and found that, without improvements to roadways and greater use of bus and rail systems by base personnel, levels of service would deteriorate at seven intersections near the base and would fail at three intersections. At the time of the EIS, none of these intersections had failing service levels. Military growth may also affect transportation in metropolitan areas with populations of less than 1 million. 
While the additional traffic may cause congestion, these communities generally do not face the same physical constraints as the largest metropolitan areas. Military growth bases may be located in or adjacent to these areas, but they also extend far outside the built-up urban sections. Colorado Springs, Colorado, bordering Fort Carson, and El Paso, Texas, bordering Fort Bliss, were growing rapidly before the BRAC 2005 decisions. Fort Carson is located to the south of Colorado Springs, and Interstate 25, two state highways, and a major county road are the major routes to the base. In Colorado Springs, a study by the Pikes Peak Area Council of Governments found that traffic around Fort Carson will increase by at least 20 percent over 2005 levels by 2015, largely because of an influx of about 24,800 troops and dependents. Fort Carson officials estimate that over 24,000 vehicles will pass through one major base gate every day by 2012, an increase of about 150 percent, or 14,600 additional vehicles per day. Vehicles must approach the gate from a highway interchange where traffic is already congested. Local officials are concerned that the increased traffic near the gate and at the interchange will lead to more accidents. In El Paso, Texas, where Fort Bliss is located, officials identified a need for new roads to address mobility problems in the rapidly growing region, including increased congestion on I-10, the only Interstate highway serving the city. BRAC and other military growth initiatives will bring almost 70,000 additional military personnel and dependents to the base, significantly increasing El Paso’s population. Local officials expect that many of the new personnel at Fort Bliss who will live off-base will choose to live in east and northeast El Paso. 
To accommodate the expected increases in traffic on roadways connecting east and northeast El Paso and Fort Bliss, the state of Texas worked with a private developer to construct a 7.4-mile roadway—Spur 601—connecting east and northeast El Paso to the base. State and local officials expect the new roadway to provide base personnel with easy access to base gates and reduce congestion for all commuters in the vicinity. Military growth may also affect transportation in less heavily populated communities. Here, road networks are less extensive than road networks in metropolitan areas, forcing the additional traffic onto roadways such as two-lane rural roads not always designed for higher traffic levels. In addition, smaller urban areas affected by BRAC growth are also less likely to have transit options—rail transit is generally not available and bus transit can be limited. For example, in Radcliff, Kentucky, the community adjacent to Fort Knox, one highway serves the community’s business district and also provides access to all three gates at the base. As many as 48,000 vehicles travel over portions of this road between Elizabethtown, Kentucky, and Fort Knox each day, causing traffic congestion. In addition, some military and civilian personnel at Fort Knox commute to the base using two-lane rural roads. Even though Fort Knox expects to see a net reduction of about 2,900 personnel, changing demographics at the base will greatly increase congestion on the main highway. For example, as part of BRAC 2005, Fort Knox will lose military trainees who live and largely remain on-base, but gain civilian employees who will live off-base, along with their dependents. A 2007 study of traffic conditions near Fort Knox, performed for a local metropolitan planning organization, concluded that without significant improvements, the existing roadway system would be incapable of providing the capacity required to accommodate traffic increases caused by the change in personnel at the base. 
The study also concluded that the BRAC personnel changes would cause travel conditions on the roadway to deteriorate greatly. Furthermore, while Radcliff, Kentucky, has a transit provider—a social agency offering dial-a-ride and vanpool services, including vanpools to the base—this provider does not offer regularly scheduled bus service. According to transit agency officials, the provider hopes to move toward regular service that could transport commuters to the base. Conditions at Radcliff, Kentucky, illustrate how growth can have a more severe impact on traffic than the change in the net number of base personnel would indicate. Similarly, at Eglin Air Force Base, a limited roadway network serving the 724 square-mile facility channels traffic along relatively few major roads and causes congestion. The base spans three counties in northwest Florida, and some communities along the coast are constricted by the base (see fig. 3). According to local officials, improving transportation is the main growth-related challenge facing communities near Eglin Air Force Base. Local and regional transportation studies have focused primarily on the impact of growth on the major roadways that accommodate most of the traffic in the area and serve as hurricane evacuation routes for area residents. Three main roads traverse the base from north to south. One major road, bracketed by the base and the Gulf of Mexico, runs east to west along the base’s southern boundary. With the planned increase of 3,600 personnel and without transportation improvements, traffic conditions will decline during peak traffic hours, with failing levels of service projected at 17 locations, compared with 9 now. 
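The Fort Carson gate figures cited above (over 24,000 vehicles per day by 2012, described as an increase of about 150 percent, or 14,600 additional vehicles per day) can be checked for internal consistency with simple arithmetic; the sketch below uses only the figures reported here.

```python
# Consistency check of the Fort Carson gate-traffic figures cited in
# this report: ~14,600 additional vehicles per day yielding over 24,000
# per day by 2012, described as an increase of about 150 percent.

projected_total = 24_000   # "over 24,000 vehicles" per day by 2012
additional = 14_600        # additional vehicles per day

implied_current = projected_total - additional         # about 9,400 per day
implied_increase = additional / implied_current * 100  # about 155 percent

print(implied_current, round(implied_increase))
```

Because the report says "over 24,000," the implied current volume is at most roughly 9,400 vehicles per day, and the resulting increase of roughly 155 percent is consistent with the report's "about 150 percent."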
Using community estimates, OEA projected that the cost of addressing the most immediate effects of military growth on transportation in the affected communities would be about $2.0 billion. This estimate includes transportation projects that had to meet four criteria: the project had to (1) be clearly and substantially linked to military growth, (2) have detailed cost estimates and funding sources that were specific and could be validated, (3) have a demonstrated gap in funding, and (4) be essential to prepare for military growth by September 2011. Many projects were largely designed to improve intersections and to widen and extend roadways near growth bases. Over half of these costs are for transportation improvements concentrated near three bases in the metropolitan Washington, D.C., area—Bethesda National Naval Medical Center, Fort Belvoir, and Fort Meade. Communities near these three bases have identified 11 critical transportation projects estimated to cost over $1.1 billion. The impact of military growth on transportation could be greater than the affected communities have estimated thus far, and the costs of projects to address those impacts are still uncertain for several reasons. First, some potential projects are not included in the $2.0 billion estimate, and, if built, will result in additional costs beyond the $2.0 billion estimate. Texas Department of Transportation officials told us they had identified additional projects designed, at least in part, to address military growth, which they estimate will cost about $327 million. However, according to El Paso officials, the community is able to fund the projects and, although the number of military personnel arriving in El Paso is very substantial, it is not a large percentage of the existing community’s population. In some cases uncertainty remains regarding the transportation impacts. 
For example, officials at growth-affected communities near Camp Lejeune, North Carolina, were still identifying what levels of growth would occur and the impact of military growth on transportation. Additionally, some communities were unsure where arriving personnel and contractors would choose to live. For example, officials from Fort Belvoir were unsure how many personnel would relocate near the base, and officials at Fort Knox did not know if some new personnel would choose to commute from the Louisville area. Finally, many communities anticipate future growth anyway, and it is not always clear whether its impact on transportation is clearly and substantially linked to military growth. Studies and other evidence clearly linking projects to military growth are not always available. For example, OEA officials told us they have no evidence available to link three costly potential longer-term projects to military growth. These three projects, which are not included in the $2.0 billion estimate and which OEA officials said are among the four costliest unfunded longer-term projects that affected communities identified, are estimated to cost a total of about $1.6 billion and include expanding public transit in the Washington, D.C., area. OEA officials expect to complete an updated assessment of military growth projects, costs, and funding needs in late 2009. The federal response to the expected impact of military growth on transportation includes helping with planning, estimating project costs, and providing some funding for projects. Both DOD and DOT have programs that can help states and localities; however, projects to address the impact of military growth must compete with other projects for funding. State and local officials are prioritizing highway projects that can be completed with existing funding and identifying alternative transportation approaches, such as transit and biking, to help address the growth expected in their communities. 
OEA is DOD’s primary source of assistance for communities adversely affected by defense program changes. OEA provides technical and financial assistance to help communities address adverse consequences of BRAC decisions. However, as we have previously reported, OEA is not at an appropriate organizational level within DOD to coordinate the assistance from multiple federal and other government agencies that affected communities need. Accordingly, we recommended that DOD provide high-level agency leadership to ensure interagency and intergovernmental coordination. DOD agreed with this recommendation. OEA has funded local coordinator positions to assist in coordinating local activities responding to BRAC, including transportation-related activities. For example, Harford County, Maryland, established a BRAC Planning Commission for Aberdeen Proving Ground. This Commission, with OEA funding, helped establish the Chesapeake Science and Security Corridor Consortium, which includes eight jurisdictions in three states—Delaware, Maryland, and Pennsylvania. With Harford County as the lead agency, the Chesapeake Science and Security Corridor Regional BRAC Office administers grants and coordinates regional BRAC responses. OEA also has funded studies, such as traffic studies, that help states and local communities define the impact of military growth on transportation. For example, OEA has provided transportation planning grants to Maryland and Virginia. According to local officials, OEA also has funded transportation studies for communities near several of the bases we visited, including those near Eglin Air Force Base and Fort Knox. These studies can provide communities with more detailed, precise information about the transportation impact of military growth than the initial environmental studies performed by DOD. 
Under the DAR program, administered by the Military Surface Deployment and Distribution Command (SDDC), DOD may pay for public highway improvements needed to address the traffic impact of sudden or unusual defense-related actions. DAR enables DOD to help pay indirectly for improvements to highways DOD designates as important to the national defense. Under DAR, DOD can use funds provided in military construction appropriations to pay for all or part of the cost of constructing and maintaining roads designated as “defense access roads.” However, proposals for funding these roads must compete with proposals for funding all other military construction projects, and projects must meet specific criteria. Local government and military base officials we interviewed said they considered DAR funding difficult to obtain because of the program’s narrow eligibility criteria. For example, on a road that is already heavily used or congested, military growth may increase traffic significantly without doubling it, as the criteria require. In addition, the DAR criteria do not specifically refer to transit-related improvements. The DAR program has not funded large numbers of defense access road projects. From 2000 to 2009, the program received applications for certification of 27 projects. Of those, 17 have been certified and funded, 6 have been certified and are pursuing funding, 3 are currently being evaluated for certification, and 1 did not meet the funding criteria. Since 2005, the program has provided about $22 million annually for transportation improvements, including projects that are not BRAC-related. In 2008, we reported that for 11 bases whose populations were scheduled to increase by at least 25 percent, DOD had certified and requested funding for one DAR project—$36.0 million for access ramps and a parkway at Fort Belvoir, Virginia. 
Since that time, DOD has approved and provided funds for additional projects at two BRAC growth bases: $8.3 million for access roads at Fort Carson, Colorado, and $21.8 million for a road-widening project at Fort Bragg, North Carolina. In October 2008, DOD submitted a report to the Senate Committee on Armed Services addressing the DAR criteria. The report concluded that the current DAR criteria provide flexibility for addressing communities’ concerns about the impact of traffic. However, the report also recognized the difficulty in linking safety issues to the criteria and acknowledged that the impact of DOD growth on safety is a particular concern. Consequently, DOD was considering expanding or modifying the criteria to make projects eligible for DAR certification when population growth at a base increases traffic congestion to the point that it presents a public safety risk. DOD directed SDDC to provide by December 2009 an independent study on the merits of specific criteria to address safety issues related to growth. The study will be coordinated with DOT. DOT does not have special programs to address BRAC growth. However, a number of existing federal transportation grant programs provide funding that state and local governments can use to help address BRAC-related transportation challenges. Federal laws and requirements specify an overall approach for transportation planning agencies to use in planning and selecting projects for federal funding. Under this process, localities—acting through metropolitan planning organizations—and states develop long-range plans and short-range programs to identify transportation needs and projects. BRAC-related projects must be incorporated into metropolitan area long-range transportation plans and transportation improvement programs—for improvements located in metropolitan areas—as well as state transportation improvement programs, before federal funding may be used. 
Decisions about which projects are to be funded take place at the state and local level. As a result, BRAC-related projects must compete with other state, regional, and local transportation priorities. Because of the short BRAC growth time frames, communities near the affected bases have estimated that they have less funding than they need for critical, short-term, growth-related transportation projects. According to our analysis of the data 17 growth communities provided to OEA, these communities had identified, as of August 2008, sources for about $0.5 billion of the $2.0 billion they indicated they would need for 46 short-term transportation projects. Transportation projects constituted about 93 percent of the short-term infrastructure funding needs identified by communities. Since February 2009, the American Recovery and Reinvestment Act of 2009 (the Recovery Act) has provided additional funding for transportation projects. Recovery Act funds may be used for BRAC-related projects, but only if the projects are already well advanced in the normal development cycle, because these funds must be obligated very quickly or states risk losing them. The act requires that DOT obligate for each state, by June 30, 2009, 50 percent of the highway funds made available to each state, and 100 percent of these funds by March 1, 2010. If these requirements are not met for a state, the unobligated funds are to be redistributed to other states. Thus, even though BRAC transportation projects ideally should be completed more quickly than typical highway projects, the time frames for using Recovery Act funds may be too short for some BRAC projects. However, states are using Recovery Act funds for BRAC-related transportation projects at two of the eight bases we visited—Eglin Air Force Base and Fort Belvoir. 
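The size of the short-term funding gap cited above follows directly from the two figures these communities reported; the sketch below simply does that subtraction, using only the dollar amounts stated in the report.

```python
# Short-term transportation funding identified by 17 BRAC growth
# communities versus their estimated need, as of August 2008
# (dollar figures from the report).
identified = 0.5e9  # funding sources identified, in dollars
needed = 2.0e9      # estimated need for 46 short-term projects, in dollars

shortfall = needed - identified
print(f"Shortfall: ${shortfall / 1e9:.1f} billion "
      f"({shortfall / needed:.0%} of estimated need)")
# → Shortfall: $1.5 billion (75% of estimated need)
```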
Florida is using $46 million in Recovery Act funds for an intersection grade separation project near Eglin Air Force Base, and Virginia is using about $60 million in Recovery Act funds for its Fairfax County Parkway project. Texas and Maryland officials did not report applying Recovery Act funds for any of the 46 transportation projects OEA officials identified as related to military growth. However, these officials reasoned that applying Recovery Act funds for highway projects or to transit agencies generally could help improve mobility in the region. DOT is continuing to obligate Recovery Act funds, and the total amount of these funds that ultimately will be used to respond to BRAC transportation needs is not known at this time. According to community and state transportation planners, communities that will be affected by BRAC growth will often not be able to complete major transportation projects designed to address that growth before it occurs. The BRAC growth time frame is shorter than the average time frame for developing significant new infrastructure projects. As noted, legislation mandates that BRAC actions be completed by September 2011, 6 years from the date the President submitted his approval of the recommendations to Congress. According to the Maryland Department of Transportation, major roadway improvement and construction projects typically take 10 to 15 years to plan, fund, design, and construct. As shown in table 3, Federal Highway Administration data suggest similar time frames for completing major highway construction projects. Some state and local governments have encountered difficulties in responding to transportation needs before the BRAC moves take place. Kentucky state and local governments will not complete a key “connector” road designed to alleviate traffic near Fort Knox until 2013—2 years after the deadline for completing the BRAC realignment. 
Texas state and local government officials do not expect to finish widening a major road to better accommodate increased traffic on the perimeter of Fort Bliss or constructing a new freeway allowing traffic to more directly access the base until at least 4 years after growth at the base occurs. In commenting on a draft of this report, the Federal Transit Administration (FTA) observed that transit operational improvements such as increasing the frequency of service can be implemented in less time than is required for construction of new transportation facilities. In addition, Urbanized Area Formula grants administered by the FTA can be used for near-term service extensions as a stopgap measure to meet a surge in demand, but not as an alternative to a long-term capital project. Given the estimated shortfall in affected communities’ funding for critical near-term projects and the difficulties posed by the Recovery Act’s short obligation time frames, local officials are adopting various strategies to complete some projects before the BRAC 2005 implementation deadline. In particular, officials are reprioritizing planned projects, assigning higher priorities to projects that will help mitigate the impact of BRAC growth on transportation, and immediately implementing projects that they can complete before or during BRAC growth. Three Maryland bases—Aberdeen Proving Ground, Fort Meade, and the Bethesda National Naval Medical Center—are expected to grow by over 12,000 personnel as a result of BRAC. These three bases are located within large metropolitan areas. Officials expect the growth to have a severe impact on intersections and roadways near all three bases. State government in Maryland has taken the lead role in responding to BRAC growth within the state. For example, the governor created a BRAC subcabinet, which coordinates the responses of several state agencies, including the Maryland Department of Transportation (MDOT). 
In addition, MDOT has responded to time and funding constraints for addressing the impact of growth at the three bases by implementing a strategy to identify lower-cost improvements for immediate implementation while continuing to plan higher-dollar, higher-capacity projects that take longer to plan, engineer, and construct. MDOT officials consider improvements to key intersections near the three bases as critical short-term BRAC projects but are concerned that the improvements may not be completed before growth occurs. State and local transportation officials determined the potential impact of military growth on traffic at the three bases within the next 5 to 7 years and identified 58 intersections where they expect traffic conditions to fail during that time because of this growth. In addition, the officials identified intersection improvements, such as additional turn lanes and other minor projects, to maintain acceptable traffic conditions near the bases in the short term. MDOT prioritized these improvements based on level of service, cost of improvements, environmental and socio-economic impact, and proximity to the bases, giving highest priority to improvements at 16 intersections. State and local government officials said they plan to fund and complete these improvements but are uncertain whether they will have sufficient funds to do so. For example, the state has programmed $31.6 million for improvements to six intersections near Fort Meade, but another $65 million to $100 million may be needed to complete the projects; $31.9 million for improvements to six intersections near Aberdeen Proving Ground, but $90 million to $155 million more may be needed to complete the projects; and $31.3 million for improvements to four intersections near Bethesda National Naval Medical Center, but $160 million to $215 million more may be needed to complete the projects. These shortfalls reflect a broader difficulty in funding Maryland’s transportation capital program. 
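The per-base figures above imply sizable aggregate shortfalls for Maryland's priority intersection improvements; the sketch below totals the programmed funds and the additional-funding ranges, using only the amounts stated in the report.

```python
# Programmed funds and additional funding needed (low, high estimates)
# for priority intersection improvements near three Maryland bases,
# in millions of dollars (figures from the report).
bases = {
    "Fort Meade": (31.6, 65, 100),
    "Aberdeen Proving Ground": (31.9, 90, 155),
    "Bethesda National Naval Medical Center": (31.3, 160, 215),
}

programmed = sum(p for p, _, _ in bases.values())
extra_low = sum(lo for _, lo, _ in bases.values())
extra_high = sum(hi for _, _, hi in bases.values())
print(f"Programmed: ${programmed:.1f} million; "
      f"additional needed: ${extra_low} million to ${extra_high} million")
# → Programmed: $94.8 million; additional needed: $315 million to $470 million
```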
The state has deferred over $2.2 billion in transportation projects as transportation revenues have declined. Partially offsetting this shortfall is $610 million in Recovery Act funds for highways and transit. However, according to an MDOT official, Recovery Act funds are not a good fit for the BRAC-related intersection improvements because the projects are not ready for funds to be obligated, and the Recovery Act has tight obligation deadlines for highway and transit funds. MDOT also initiated evaluations of how direct commuter and local bus and shuttle services could be expanded to help accommodate growth at the three bases. Furthermore, according to an MDOT official, MDOT is exploring the possibility of obtaining a discretionary grant under the Recovery Act for a maintenance and storage facility to help support and grow local bus service to the Fort Meade area. MDOT officials are also exploring other short-term projects to address the growth, including bicycle and pedestrian path improvements, better access to transit systems, and efforts to promote car- and vanpools, teleworking, and transit systems. MDOT’s long-term projects to address growth at the bases include rail improvements. Maryland officials had identified these projects before the 2005 BRAC decisions to address regional growth, but the projects are also needed to improve access to the bases, since growth will create additional demand for rail and transit services. State officials plan to invest $201.3 million from 2008 through 2013 to increase capacity and improve service on the Maryland Area Regional Commuter (MARC) system statewide. Finally, a key project for addressing the transportation impact of growth at Bethesda National Naval Medical Center is improved access to the Medical Center Metrorail station. 
Roads in this community are already at or near capacity, and with no room for significant roadway expansion, local and state officials expect a significant portion of the commuters to use the Metrorail system. The Washington Metropolitan Area Transit Authority has studied five alternatives, including improving the existing street crossing, two pedestrian tunnel designs, a pedestrian bridge design, and a new elevator entrance. Cost estimates for these options varied from $700,000 for improving the existing crossing to $59.4 million for the elevator entrance option. A preferred alternative has not been selected. Maryland state officials told us that they are working with transit authority officials to plan the project. In May 2008, Bethesda National Naval Medical Center officials requested that DOD provide $21 million for the project through the DAR program. As discussed, Fort Belvoir will gain about 24,100 military and civilian personnel. Fairfax County, where Fort Belvoir is located, is within the Washington, D.C., metropolitan area—one of the most congested transportation regions in the nation. Because of traffic and other development issues at Fort Belvoir, the Army acquired additional property for the base in Alexandria, Virginia, and 6,400 of the new personnel will relocate there. State and local officials also identified and addressed their highest-priority transportation projects immediately while recognizing that longer-term projects may not be completed before BRAC growth occurs at Fort Belvoir. In total, the officials estimated $390 million in costs for five short-term projects that they consider critical for responding to BRAC growth at Fort Belvoir. In addition, they identified about $1.6 billion in costs for short-term and longer-term projects not included in the $2 billion estimate of nationwide project costs. Virginia has thus far allocated about $96 million in Recovery Act funds to BRAC-related projects. 
Of this sum, the Virginia Department of Transportation (VDOT) has allocated about $60 million to extend the Fairfax County Parkway near Fort Belvoir. This Recovery Act funding, together with funding from other sources, has enabled VDOT to allocate the estimated $175 million needed to complete this road. However, VDOT has not been able to obtain any of the estimated $165 million needed to complete the two other short-term projects near the base—constructing a traffic interchange and widening Interstate 95. In Virginia, as in Maryland, transportation revenues have fallen. Specifically, the projected funding for projects listed in Virginia’s 6-year transportation improvement plan has declined by almost 40 percent since 2007. According to VDOT officials, this decrease in projected funding is mainly due to a 2007 Virginia Supreme Court decision disallowing the Northern Virginia Transportation Authority’s imposition of taxes and user fees to obtain revenue for transportation projects. In addition to highways, several transit systems serve Fairfax County, including the Washington Metropolitan Area Transit Authority bus and Metrorail services, Fairfax County bus services, and the Virginia Railway Express. However, transit access to the base itself is limited, and there is no rail connection. Likewise, the new base location in Alexandria does not have a direct rail connection. Some local officials see an extension of Metrorail to the Fort Belvoir area as a way to address the transportation impact of growth near the base. About 10,400 Army personnel, plus an additional 14,400 dependents, were expected to relocate to Fort Carson. However, a June 2009 DOD decision not to locate a combat brigade there will lower this estimate. Fort Carson is located in El Paso County, Colorado, adjacent to the city of Colorado Springs. Colorado state and local officials expect the growth to have a significant impact on traffic conditions throughout El Paso County and in adjacent counties. 
After learning about planned BRAC-related military, civilian, and contractor personnel increases at Fort Carson, local transportation officials reprioritized their planned transportation projects during 2006 and 2007. This reprioritization allowed them to include projects designed to address the impact of military growth among their planned short-term projects. Although state and local officials have completed two key projects, they lack sufficient funding to complete other growth-related projects before the growth occurs. State and local officials used a combination of state and local funds to complete needed improvements to Interstate and state highways and to a major roadway near the base. However, local transportation officials estimate that additional projects designed to address the impact of military growth could cost as much as $1 billion. The officials told us that although they have made BRAC growth-related projects a priority, additional projects will not be completed before September 2011 because of funding constraints. Local transportation agencies obtain their funding mainly from sales and fuel tax receipts, and local officials noted that these tax receipts are declining. The officials also told us that the fiscal year 2010 state transportation budget could be reduced by over $400 million from the fiscal year 2009 funding level, further reducing the funding available for projects designed to address the growth at Fort Carson. The officials told us that, should the fiscal year 2010 funding be reduced, the state’s transportation funding would be at its lowest level in 10 years. Officials for Mountain Metro Transit, the transit services provider for Colorado Springs, told us that their agency does not provide service inside the gates at Fort Carson. 
They stated that most buildings at the base are not within a reasonable walking distance from the entrance and exit gates and that providing transit service would necessitate creating an on-base shuttle system from the gates to several buildings on base. City and transit officials told us that funding for transit services could be cut by 10 percent, further limiting the agency’s ability to address the transportation effects of growth. In addition, Fort Carson officials told us that demand for transit services is low among base personnel. As a result of the BRAC 2005 legislation and other initiatives, about 28,000 personnel were to relocate to Fort Bliss in El Paso County, Texas, by 2011. However, a June 2009 DOD decision not to relocate a combat brigade there will lower this number. State and local officials expect the growth to adversely affect conditions on local roadways and transit systems. However, the officials added that they do not consider the impact of military growth to be significant because the additional personnel represent a small percentage of the city’s total population of about 750,000. Local officials have identified 31 road projects and four transit projects that will help address the impact of military growth at Fort Bliss. According to their estimate, the total cost of these projects will be between $623 million and $830 million. The officials told us that they are capable of funding most of these projects within 5 years. They added that most of the projects that will address the impact of military growth will also address nonmilitary growth and were planned before the decisions to increase personnel at Fort Bliss. However, they told us that they will not be able to complete a major road-widening project until at least 4 years after the growth occurs. Officials in Texas used an innovative financing approach to generate funding sufficient to complete a critical BRAC growth-related project within a short time frame. 
This approach, which El Paso city officials developed with Texas Department of Transportation officials, will provide funding to construct Spur 601, a $367 million highway project that will ease access to Fort Bliss and relieve congestion in east and northeast El Paso. Under this “pass-through” financing arrangement, made through the Camino Real Regional Mobility Authority, a private developer will finance, design, acquire the right-of-way for, and construct the highway over several years. The regional authority will use state highway funds to repay the developer, based on miles traveled by vehicles on the highway. El Paso city officials plan to develop new bus services near Fort Bliss and citywide as part of their plans to address the transportation effects of military and nonmilitary growth. However, Fort Bliss officials told us that demand for transit services is low among base personnel because the base encompasses a large geographic area, the base gates are not within walking distance of most buildings, and the base does not have a shuttle service to transport transit customers from the gates to their on-base destinations. Fort Bliss officials added that they attempted to establish an on-base bus service but discontinued it because of low demand for the service. Fort Knox officials expect the base to gain about 1,600 military and civilian personnel and dependents by September 2011; however, the military-related population living off-base will grow by about 5,000. A local metropolitan planning organization study of traffic conditions near Fort Knox concludes that without significant improvements, the existing roadway system will be incapable of providing the capacity required to accommodate traffic increases caused by the change in personnel at the base. 
Likewise, Kentucky state and local officials said they completed a roadway improvement project that they considered essential to addressing the transportation impact of expected BRAC organizational changes at Fort Knox, but they do not have sufficient funding to complete other projects designed to address that impact before the changes occur. State and local officials report that the transportation projects needed to address the impact of growth at Fort Knox will cost about $244 million. Shortly after state and local officials learned about the planned changes at Fort Knox, state officials prioritized the widening of a roadway that provides access to the base. According to a state official, the state completed the $13 million improvement project in March 2008. Since then, state officials have been able to set aside an additional $50 million in bond funds for the remaining projects. Local officials told us that state law leaves them with few other revenue-raising options for transportation improvements. For example, the Kentucky constitution prohibits the state General Assembly from granting city and county governments the authority to levy sales taxes, thus limiting their options to fund growth-related transportation improvements. Accordingly, local officials said the state government must fund most transportation improvements. The officials told us that the state must use most available funds for roadway maintenance and does not have sufficient funds remaining to address growth-related projects at Fort Knox before 2011. Local officials are working to increase park-and-ride services to reduce anticipated roadway congestion but do not have the financial capacity to purchase additional buses and expand service. Local officials consider expanding key roadway capacity a higher priority than expanding transit services. 
Local transit services are limited, and the transit provider does not have the capacity to significantly expand services and help address the transportation impact of adding about 5,000 people to the off-base population. The Transit Authority of Central Kentucky provides bus and vanpool services for the communities near Fort Knox. According to transit authority officials, their bus and vanpool system provides services for about 135 passengers each day. Despite their limited ability to address the effects of the expected growth at Fort Knox, authority officials plan to operate larger buses and provide increased service as demand for transit services increases. State officials do not expect to complete key projects until 2013 or 2014—2 to 3 years after the growth occurs. The projects include a bypass roadway to improve traffic conditions on a major roadway leading to the base and a new roadway serving residential areas where local officials expect most of the new personnel to reside. Eglin Air Force Base, located in Okaloosa, Walton, and Santa Rosa counties, will gain about 3,600 military and civilian personnel and 5,900 dependents by September 2011. State, local, and Air Force officials expect congestion on major roadways to worsen with this growth. As noted, a limited roadway network serving the 724-square-mile facility channels traffic along relatively few major roads and causes congestion. Like officials in Maryland and Virginia, Florida state and local officials are prioritizing transportation projects and initially funding projects that they can complete before planned BRAC growth at Eglin Air Force Base occurs. Local and state officials have not estimated the total costs needed to address the impact of growth, but they have identified short- and long-term projects they consider critical to addressing the impact. State and county officials are initially funding some projects that address immediate needs of the communities that will be affected by the growth. 
These projects are considered critical to accommodating increased traffic levels and maintaining access to the base without unreasonable delays, and include widening major roads near the base from four to six lanes. Another critical but currently unfunded project is construction of an overpass to allow personnel to access a nearby airfield without stopping traffic on a state highway. Florida state and local officials told us that they do not have the funding necessary to complete planned long-term projects. They added that long-term projects include improving and constructing roadways in and near several communities that will be affected by the growth and expanding transit services. Expanding transit services could be important to accommodate growth-related traffic increases because environmental concerns preclude widening several key roadway segments near the installation. We provided copies of this report to the Departments of Defense and Transportation for their review and comment. Both provided technical comments, which we incorporated into the report as appropriate. We are sending copies of this report to other interested congressional committees; the Secretaries of Defense, Transportation, the Army, the Air Force, and the Navy; and the Commandant of the Marine Corps. Copies are available to others at no cost on GAO’s Web site at www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or herrp@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. 
To determine the expected impact of military growth on transportation in communities affected by the 2005 Base Realignment and Closure (BRAC) decisions, we reviewed the 18 military bases identified by the Office of Economic Adjustment (OEA) that will be substantially and seriously affected by growth resulting from the BRAC 2005 realignments. We analyzed relevant OEA reports, including reports that identified projects designed to address the impact of growth. We reviewed environmental impact statements and assessments for the 16 of these bases that had completed environmental documents at the time of our review. To obtain more detailed information on how community transportation likely would be affected, we selected 8 of the 18 bases, and their nearby communities, to visit. We selected these locations based on several factors. We classified the bases into three groups: very large metropolitan areas of over 1 million people, smaller metropolitan areas of 200,000 to 1 million people, and smaller urban areas of under 200,000 people. We then selected communities within each grouping, considering whether the environmental study was complete and whether community officials identified transportation as a concern. The bases selected are listed in table 1 of this report. We interviewed Army, Navy, and Air Force officials responsible for implementing the BRAC decisions about the expected growth at these installations and the impact of the growth on transportation in the communities. For the eight communities, we analyzed state and community participation in the environmental review processes and relevant studies to determine the transportation effects of growth, including state transportation improvement plans, local transportation plans, and detailed traffic studies, where available. We did not independently assess the transportation models used in these traffic studies or independently calculate employment or population growth in the communities. 
In addition, we interviewed state and local transportation and other local officials responsible for addressing the impact of military growth about how that growth would affect transportation in these communities. We also observed conditions on roadways local officials expect to be affected by BRAC growth in the selected communities. To determine the estimated costs to address the transportation impact of military growth and the status of their efforts to fund growth-related projects, we analyzed information OEA collected from affected local governments showing their cost estimates and funding available for growth-related projects. We interviewed OEA project managers responsible for coordinating data gathering from affected local governments and local government officials about the effort and the process and standards for including projects as part of OEA’s assessment. We also analyzed the data to determine the total costs of both the critical short-term projects and the longer-term projects. We also compared projects included in the data with projects identified in the environmental studies DOD conducted for the growth locations to establish a link between the proposed projects and military growth actions. To determine the federal, state, and local response to the expected impact of BRAC growth on transportation, we reviewed DOD’s Defense Access Roads (DAR) program guidance and interviewed base and DOD Military Surface Deployment and Distribution Command officials to determine which BRAC growth-related projects base commanders had submitted for program funding and the amount of program funding committed. We also interviewed OEA officials on the role OEA provides in supporting BRAC- affected communities. 
In addition, to obtain information on how military resources would help address the impact of growth on transportation, we interviewed Army, Navy, and Air Force officials responsible for implementing individual bases’ efforts to help state and local governments address that impact. We interviewed Federal Highway Administration and Federal Transit Administration officials about their agencies’ roles in helping affected communities address the impact of military growth on transportation and about the funding available to affected communities to address that impact. We reviewed local and state short- and long-term transportation improvement plans for the selected communities to identify transportation projects planned to address BRAC growth, communities’ prioritization of these projects, and communities’ strategies for funding and completing the projects. We also interviewed state and local officials at the eight selected communities about their strategies for addressing that impact, including how they would prioritize BRAC-related projects with other transportation projects, obtain needed funding, and coordinate with DOD and other federal officials, and their views on the environmental impact process. We conducted this performance audit from April 2008 through September 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Robert Ciszewski, Catherine Colwell, Steve Cohen, Elizabeth Eisenstadt, Brian Lepore, Les Locke, Mike Mgebroff, and Stephanie Purcell made key contributions to this report. 
Defense Infrastructure: High-Level Leadership Needed to Help Guam Address Challenges Caused by DOD-Related Growth. GAO-09-500R. Washington, D.C.: April 9, 2009.
Military Base Realignments and Closures: DOD Faces Challenges in Implementing Recommendations on Time and Is Not Consistently Updating Savings Estimates. GAO-09-217. Washington, D.C.: January 30, 2009.
Military Base Realignments and Closures: Army Is Developing Plans to Transfer Functions from Fort Monmouth, New Jersey, to Aberdeen Proving Ground, Maryland, but Challenges Remain. GAO-08-1010R. Washington, D.C.: August 13, 2008.
Defense Infrastructure: High-Level Leadership Needed to Help Communities Address Challenges Caused by DOD-Related Growth. GAO-08-665. Washington, D.C.: June 17, 2008.
Defense Infrastructure: DOD Funding for Infrastructure and Road Improvements Surrounding Growth Installations. GAO-08-602R. Washington, D.C.: April 1, 2008.
Surface Transportation: Restructured Federal Approach Needed for More Focused, Performance-Based, and Sustainable Programs. GAO-08-400. Washington, D.C.: March 6, 2008.
Defense Infrastructure: Army and Marine Corps Grow the Force Construction Projects Generally Support the Initiative. GAO-08-375. Washington, D.C.: March 6, 2008.
Military Base Realignments and Closures: Higher Costs and Lower Savings Projected for Implementing Two Key Supply-Related BRAC Recommendations. GAO-08-315. Washington, D.C.: March 5, 2008.
Defense Infrastructure: Realignment of Air Force Special Operations Command Units to Cannon Air Force Base, New Mexico. GAO-08-244R. Washington, D.C.: January 18, 2008.
Military Base Realignments and Closures: Estimated Costs Have Increased and Estimated Savings Have Decreased. GAO-08-341T. Washington, D.C.: December 12, 2007.
Military Base Realignments and Closures: Cost Estimates Have Increased and Are Likely to Continue to Evolve. GAO-08-159. Washington, D.C.: December 11, 2007.
Military Base Realignments and Closures: Impact of Terminating, Relocating, or Outsourcing the Services of the Armed Forces Institute of Pathology. GAO-08-20. Washington, D.C.: November 9, 2007.
Military Base Realignments and Closures: Transfer of Supply, Storage, and Distribution Functions from Military Services to Defense Logistics Agency. GAO-08-121R. Washington, D.C.: October 26, 2007.
Defense Infrastructure: Challenges Increase Risks for Providing Timely Infrastructure Support for Army Installations Expecting Substantial Personnel Growth. GAO-07-1007. Washington, D.C.: September 13, 2007.
Military Base Realignments and Closures: Plan Needed to Monitor Challenges for Completing More Than 100 Armed Forces Reserve Centers. GAO-07-1040. Washington, D.C.: September 13, 2007.
Military Base Realignments and Closures: Observations Related to the 2005 Round. GAO-07-1203R. Washington, D.C.: September 6, 2007.
Military Base Closures: Projected Savings from Fleet Readiness Centers Are Likely Overstated and Actions Needed to Track Actual Savings and Overcome Certain Challenges. GAO-07-304. Washington, D.C.: June 29, 2007.
Military Base Closures: Management Strategy Needed to Mitigate Challenges and Improve Communication to Help Ensure Timely Implementation of Air National Guard Recommendations. GAO-07-641. Washington, D.C.: May 16, 2007.
Military Base Closures: Opportunities Exist to Improve Environmental Cleanup Cost Reporting and to Expedite Transfer of Unneeded Property. GAO-07-166. Washington, D.C.: January 30, 2007.
Military Bases: Observations on DOD's 2005 Base Realignment and Closure Selection Process and Recommendations. GAO-05-905. Washington, D.C.: July 18, 2005.
Military Bases: Analysis of DOD's 2005 Selection Process and Recommendations for Base Closures and Realignments. GAO-05-785. Washington, D.C.: July 1, 2005.
As part of the 2005 Base Realignment and Closure (BRAC) round, the Department of Defense (DOD) plans to relocate over 123,000 military and DOD civilian personnel, thereby increasing the staffing at 18 bases nationwide. In addition, DOD and local officials expect thousands of dependents and DOD contractor employees to relocate to communities near the BRAC 2005 growth bases. These actions will greatly increase traffic in the surrounding communities. BRAC recommendations must be implemented by September 2011. The House and Senate Committees on Appropriations directed GAO to assess and report on the impact of BRAC-related growth on transportation systems and on the responses of federal, state, and local governments. Accordingly, GAO determined the (1) expected impact on transportation in communities affected by BRAC decisions, and (2) federal, state, and local response to the expected impacts. To perform its work, GAO obtained information from the 18 communities with expected substantial BRAC growth; visited 8 of these communities; interviewed federal civilian and military officials and state and local officials; and reviewed DOD data, transportation plans, and environmental studies. GAO provided copies of this report to the Departments of Defense and Transportation for their review. The Departments provided technical comments, which GAO incorporated as appropriate. Growth resulting from BRAC decisions will have a significant impact on transportation systems in some communities, but estimates of the total cost to address those impacts are uncertain. In addition to BRAC, other defense initiatives will result in growth in communities and also add to transportation needs. BRAC growth will result in increased traffic in communities ranging from very large metropolitan areas to small communities, creating or worsening congested roads at specific locations.
Traffic impacts can also affect larger relocation decisions, and were important in DOD's decision to acquire an additional site for Fort Belvoir, Virginia, an acquisition that DOD estimates will cost $1.2 billion. According to a DOD Office of Economic Adjustment (OEA) survey, 17 of 18 BRAC growth communities identified transportation as one of their top challenges. Near-term transportation projects to address these challenges could cost about $2.0 billion, of which about $1.1 billion is related to projects in the metropolitan Washington, D.C., area. BRAC-related transportation infrastructure costs are subject to a number of uncertainties. For example, not all potential projects are included in the estimate, military staffing levels at some growth installations are in flux, the location decisions of military and civilian personnel have not yet been made, and pre-existing, non-military community growth makes it difficult to link transportation projects directly to military growth. The federal government has provided limited direct assistance to help communities address BRAC transportation impacts, and state and local governments have adopted strategies to expedite projects within the time frame allowed by BRAC. For example, DOD's Defense Access Roads Program has certified transportation projects for funding at three affected communities. Also, OEA has provided planning grants and funded traffic studies and local planning positions. While federal highway and transit programs can be used for many BRAC-related transportation needs, dedicated funds are not available. Instead, BRAC-related transportation projects must compete with other proposed transportation projects. Communities had identified funding for about $500 million of the estimated $2.0 billion needed to address their near-term project needs.
Some state and local governments have adopted strategies to expedite highway projects, such as prioritizing short-term high-impact projects, because the time frames for completing BRAC personnel moves are much shorter than the time frames for such projects. While legislation mandates that BRAC growth be completed by 2011, major highway and transit projects usually take 9 to 19 years. To complete some critical projects before BRAC growth occurs, state and local officials are reprioritizing planned projects and implementing those that can be completed quickly. For example, Maryland prioritized certain lower-cost intersection projects that will improve traffic flow. In Texas, officials used an innovative financing approach to generate funding quickly for a major highway project at Fort Bliss.
The Aviation and Transportation Security Act (ATSA), enacted in November 2001, created TSA and gave it responsibility for securing all modes of transportation. As part of this responsibility, TSA oversees security operations at the nation's more than 400 commercial airports, including establishing requirements for passenger and checked baggage screening and ensuring the security of air cargo transported to, from, and within the United States. TSA has operational responsibility for conducting passenger and checked baggage screening at most airports, and has regulatory, or oversight, responsibility for air carriers that conduct air cargo screening. While TSA took over responsibility for passenger checkpoint and baggage screening, air carriers have continued to conduct passenger watch-list matching in accordance with TSA requirements, which includes the process of matching passenger information against the No Fly and Selectee lists before flights depart. TSA is currently developing a program, known as Secure Flight, to take over this responsibility from air carriers for passengers on domestic flights, and plans to assume from U.S. Customs and Border Protection (CBP) this pre-departure name-matching function for passengers on international flights traveling to or from the United States. Prior to ATSA, passenger and checked baggage screening had been performed by private screening companies under contract to airlines. ATSA established TSA and required it to create a federal workforce to assume the job of conducting passenger and checked baggage screening at commercial airports. The federal screener workforce was put into place, as required, by November 2002. Passenger screening systems are composed of three elements: the people (TSOs) responsible for conducting the screening of airline passengers and their carry-on items, the technology used during the screening process, and the procedures TSOs are to follow to conduct screening.
Collectively, these elements help to determine the effectiveness and efficiency of passenger screening operations. TSA's responsibilities for securing air cargo include, among other things, establishing security rules and regulations governing domestic and foreign passenger air carriers that transport cargo, domestic and foreign all-cargo carriers that transport cargo, and domestic freight forwarders. TSA is also responsible for overseeing the implementation of air cargo security requirements by air carriers and freight forwarders through compliance inspections, and, in coordination with DHS's Science and Technology (S&T) Directorate, for conducting research and development of air cargo security technologies. Air carriers (passenger and all-cargo) are responsible for implementing TSA security requirements, predominantly through TSA-approved security programs that describe the security policies, procedures, and systems the air carrier will implement and maintain to comply with TSA security requirements. Air carriers must also abide by security requirements issued by TSA through security directives or emergency amendments to air carrier security programs. Air carriers use several methods and technologies to screen domestic and inbound air cargo. These include manual physical searches and comparisons between airway bills and cargo contents to ensure that the contents of the cargo shipment match the cargo identified in documents filed by the shipper, as well as using approved technology, such as X-ray systems, explosives trace detection systems, decompression chambers, explosive detection systems, and certified explosive detection canine teams. Under TSA's security requirements for domestic, outbound, and inbound air cargo, passenger air carriers are currently required to randomly screen a specific percentage of nonexempt air cargo pieces listed on each airway bill.
TSA’s air cargo security requirements currently allow passenger air carriers to exempt certain types of cargo from physical screening. For such cargo, TSA has authorized the use of TSA-approved alternative methods for screening, which can consist of verifying shipper information and conducting a visual inspection of the cargo shipment. TSA requires all-cargo carriers to screen 100 percent of air cargo that exceeds a specific weight threshold. As of October 2006, domestic freight forwarders are also required, under certain conditions, to screen a certain percentage of air cargo prior to its consolidation. TSA, however, does not regulate foreign freight forwarders, or individuals or businesses that have their cargo shipped by air to the United States. Under the Implementing Recommendations of the 9/11 Commission Act of 2007, DHS is required to implement a system to screen 50 percent of air cargo transported on passenger aircraft by February 2009, and 100 percent of such cargo by August 2010. The prescreening of airline passengers who may pose a security risk before they board an aircraft is one of many layers of security intended to strengthen commercial aviation. To further enhance commercial aviation security and in accordance with the Intelligence Reform and Terrorism Prevention Act of 2004, TSA is developing the Secure Flight program to assume from air carriers the function of matching passenger information against government-supplied terrorist watch-lists for domestic flights. TSA expects to assume from air carriers the watch-list matching for domestic flights beginning in January 2009 and to assume this watch-list matching function from CBP for flights departing from and to the United States by fiscal year 2010. TSA has taken steps to strengthen the three key elements of the screening system—people (TSOs and private screeners), screening procedures, and technology—but has faced management, planning and funding challenges. 
For example, TSA has implemented several efforts intended to strengthen the allocation of its TSO workforce. We reported in February 2004 that staffing shortages and TSA’s hiring process had hindered the ability of some Federal Security Directors (FSD)—the ranking TSA authorities responsible for leading and coordinating security activities at airports—to provide sufficient resources to staff screening checkpoints and oversee screening operations at their checkpoints without using additional measures such as overtime. Since that time, TSA has developed a Staffing Allocation Model to determine TSO staffing levels at airports. FSDs we interviewed during 2006 as part of our review of TSA’s staffing model generally reported that the model is a more accurate predictor of staffing needs than TSA’s prior staffing model. However, FSDs expressed concerns about assumptions used in the fiscal year 2006 model related to the use of part-time TSOs, TSO training requirements, and TSOs’ operational support duties. To help ensure that TSOs are effectively utilized, we recommended that TSA establish a policy for when TSOs can be used to provide operational support. Consistent with our recommendation, in March 2007, TSA issued a management directive that provides guidance on assigning TSOs, through detail or permanent promotion, to duties of another position for a specified period of time. We also recommended that TSA establish a formal, documented plan for reviewing all of the model assumptions on a periodic basis to ensure that the assumptions result in TSO staffing allocations that accurately reflect operating conditions that may change over time. TSA agreed with our recommendation and, in December 2007, developed a Staffing Allocation Model Rates and Assumptions Validation Plan. The plan identifies the process TSA plans to use to review and validate the model’s assumptions on a periodic basis. 
Although we did not independently review TSA's staffing allocation for fiscal year 2008, TSA's fiscal year 2009 budget justification identified that the agency has achieved operational and efficiency gains that enabled it to implement or expand several workforce initiatives involving TSOs. For example, TSA implemented the travel document checker program at 259 of the approximately 450 airports nationwide during fiscal year 2007. This program is intended to ensure that only passengers with authentic travel documents access the sterile areas of airports and board aircraft. TSA also deployed 643 behavior detection officers to 42 airports during fiscal year 2007. These officers screen passengers by observation techniques to identify potentially high-risk passengers based on involuntary physical and physiological reactions. In addition to TSA's efforts to strengthen the allocation of its TSO workforce, TSA has taken steps to strengthen passenger checkpoint screening procedures to enhance the detection of prohibited items. However, we have identified areas where TSA could improve its evaluation and documentation of proposed procedures. In April 2007, we reported that TSA officials considered modifications to its standard operating procedures (SOP) based on risk information (threat and vulnerability information), daily experiences of staff working at airports, and complaints and concerns raised by the traveling public. We further reported that for more significant SOP modifications, TSA first tested the proposed modifications at selected airports to help determine whether the changes would achieve their intended purpose, as well as to assess their impact on screening operations. However, we reported that TSA's data collection and analyses could be improved to help TSA determine whether proposed procedures that are operationally tested would achieve their intended purpose.
We also found that TSA’s documentation on proposed modifications to screening procedures was not complete. We recommended that TSA develop sound evaluation methods, when possible, to assess whether proposed screening changes would achieve their intended purpose and generate and maintain documentation on proposed screening changes that are deemed significant. DHS generally agreed with our recommendations and TSA has taken some steps to implement them. For example, for several proposed SOP changes considered during the fall of 2007, TSA provided documentation that identified the sources of the proposed changes and the reasons why the agency decided to accept or reject the proposed changes. With respect to technologies, we reported in February 2007 that S&T and TSA were exploring new passenger checkpoint screening technologies to enhance the detection of explosives and other threats. Of the various emerging checkpoint screening projects funded by TSA and S&T, the explosive trace portal, the bottled liquids scanning device, and Advanced Technology Systems have been deployed to airport checkpoints. A number of additional projects have initiated procurements or are being researched and developed. For example, TSA has procured 34 scanners for screening passenger casts and prosthetic devices to be deployed in July 2008. In addition, TSA has procured 20 checkpoint explosive detection systems and plans to deploy these in August 2008. Further, TSA plans to finish its testing of whole body imagers during fiscal year 2009 and begin deploying 150 of these units by fiscal year 2010. Despite TSA’s efforts to develop passenger checkpoint screening technologies, we reported that limited progress has been made in fielding explosives detection technology at airport checkpoints in part due to challenges S&T and TSA faced in coordinating research and development efforts. 
For example, we reported that TSA had anticipated that the explosives trace portals would be in operation throughout the country during fiscal year 2007. However, due to performance and maintenance issues, TSA halted the acquisition and deployment of the portals in June 2006. As a result, TSA has fielded less than 25 percent of the 434 portals it projected it would deploy by fiscal year 2007. In addition to the portals, TSA has fallen behind in its projected acquisition of other emerging screening technologies. For example, we reported that the acquisition of 91 whole body imagers was previously delayed in part because TSA needed to develop a means to protect the privacy of passengers screened by this technology. While TSA and DHS have taken steps to coordinate the research, development and deployment of checkpoint technologies, we reported in February 2007 that challenges remained. For example, TSA and S&T officials stated that they encountered difficulties in coordinating research and development efforts due to reorganizations within TSA and S&T. Since our February 2007 testimony, according to TSA and S&T, coordination between them has improved. We also reported that TSA did not have a strategic plan to guide its efforts to acquire and deploy screening technologies, and that a lack of a strategic plan or approach could limit TSA’s ability to deploy emerging technologies at those airport locations deemed at highest risk. TSA officials stated that they plan to submit the strategic plan for checkpoint technologies mandated by Division E of the Consolidated Appropriations Act, 2008, during the summer of 2008. We will continue to evaluate S&T’s and TSA’s efforts to research, develop and deploy checkpoint screening technologies as part of our ongoing review. TSA has taken steps to enhance domestic and inbound air cargo security, but more work remains to strengthen this area of aviation security. 
For example, TSA has issued an Air Cargo Strategic Plan that focused on securing the domestic air cargo supply chain. However, in April 2007, we reported that this plan did not include goals and objectives for addressing the security of inbound air cargo, or cargo transported into the United States from a foreign location, which presents different security challenges than cargo transported domestically. We also reported that TSA had not conducted vulnerability assessments to identify the range of security weaknesses related to air cargo operations that could be exploited by terrorists. We further reported that TSA had established requirements for air carriers to randomly screen air cargo, but had exempted some domestic and inbound cargo from screening. With respect to inbound air cargo, we reported that TSA lacked an inspection plan with performance goals and measures for its inspection efforts, and recommended that TSA develop such a plan. TSA is also taking steps to compile and analyze information on air cargo security practices used abroad to identify those that may strengthen DHS's overall air cargo security program, as we recommended. According to TSA officials, the agency's proposed Certified Cargo Screening Program (CCSP) is based on their review of foreign countries' models for screening air cargo. TSA officials believe this program will assist the agency in meeting the requirement to screen 100 percent of cargo transported on passenger aircraft by August 2010, as mandated by the Implementing Recommendations of the 9/11 Commission Act of 2007. Through TSA's proposed CCSP, the agency plans to allow the screening of air cargo to take place at various points throughout the air cargo supply chain. Under the CCSP, Certified Cargo Screening Facilities (CCSF), such as shippers, manufacturing facilities, and freight forwarders that meet security requirements established by TSA, will volunteer to screen cargo prior to its loading onto an aircraft.
Due to the voluntary nature of this program, participation of the air cargo industry is critical to the successful implementation of the CCSP. According to TSA officials, air carriers will ultimately be responsible for screening 100 percent of cargo transported on passenger aircraft should air cargo industry entities not volunteer to become a CCSF. In July 2008, however, we reported that TSA may face challenges as it proceeds with its plans to implement a system to screen 100 percent of cargo transported on passenger aircraft by August 2010. Specifically, we reported that DHS has not yet completed its assessments of the technologies TSA plans to approve for use as part of the CCSP for screening and securing cargo. We also reported that although TSA has taken steps to eliminate the majority of exempted domestic and outbound cargo that it has not required to be screened, the agency currently plans to continue to exempt some types of domestic and outbound cargo from screening after August 2010. Moreover, we found that TSA has begun analyzing the results of air cargo compliance inspections and has hired additional compliance inspectors dedicated to air cargo. However, according to agency officials, TSA will need additional air cargo inspectors to oversee the efforts of the potentially thousands of entities that may participate in the CCSP once it is fully implemented. Finally, we reported that more work remains for TSA to strengthen the security of inbound cargo. Specifically, the agency has not yet finalized its strategy for securing inbound cargo or determined how, if at all, inbound cargo will be screened as part of its proposed CCSP. Over the past several years, TSA has faced a number of challenges in developing and implementing an advanced prescreening system, known as Secure Flight, which will allow TSA to assume responsibility from air carriers for comparing domestic passenger information against the No Fly and Selectee lists. 
We reported in February 2008 that TSA had made substantial progress in instilling more discipline and rigor in developing and implementing Secure Flight, but that challenges remain that may hinder the program's progress moving forward. For example, TSA had taken numerous steps to address previous GAO recommendations related to strengthening Secure Flight's development and implementation, as well as additional steps designed to strengthen the program. Among other things, TSA developed a detailed, conceptual description of how the system is to operate, commonly referred to as a concept of operations; established a cost and schedule baseline; developed security requirements; developed test plans; conducted outreach with key stakeholders; published a notice of proposed rulemaking on how Secure Flight is to operate; worked with CBP to integrate the domestic watch-list matching function with the international watch-list matching function currently operated by CBP; and issued a guide to key stakeholders (e.g., air carriers and CBP) that defines, among other things, system data requirements. Collectively, these efforts have enabled TSA to more effectively manage the program's development and implementation. However, challenges remain. In February 2008, we reported that TSA had not (1) developed program cost and schedule estimates consistent with best practices; (2) fully implemented its risk management plan; (3) planned for system end-to-end testing in test plans; and (4) ensured that information security requirements are fully implemented. To address these challenges, we made several recommendations to DHS and TSA to incorporate best practices in Secure Flight's cost and schedule estimates and to fully implement the program's risk-management, testing, and information-security requirements. DHS and TSA officials generally agreed with these recommendations.
We will continue to evaluate TSA's efforts to develop and implement Secure Flight as part of our ongoing review. Our work has identified homeland security challenges that cut across DHS's and TSA's mission and core management functions. These issues have impeded the department's and TSA's progress since their inception and will continue to confront the department as it moves forward. For example, DHS and TSA have not always implemented effective strategic planning efforts and have not yet fully developed performance measures or put into place structures to help ensure that they are managing for results. For instance, with regard to TSA's efforts to secure air cargo, we reported in October 2005 and April 2007 that TSA completed an Air Cargo Strategic Plan that outlined a threat-based risk-management approach to securing the nation's domestic air cargo system. However, TSA had not developed a similar strategy for addressing the security of inbound air cargo, including how best to partner with CBP and international air cargo stakeholders. In addition, although DHS and TSA have made risk-based decision making a cornerstone of departmental and agency policy, TSA could strengthen its application of risk management in implementing its mission functions. For example, TSA incorporated risk-based decision making when making modifications to airport checkpoint screening procedures, to include modifying procedures based on intelligence information and vulnerabilities identified through covert testing at airport checkpoints. However, in April 2007, we reported that TSA's analyses that supported screening procedural changes could be strengthened. For example, TSA officials based their decision to revise the prohibited items list to allow passengers to carry small scissors and tools onto aircraft on their review of threat information—which indicated that these items do not pose a high risk to the aviation system—so that TSOs could concentrate on higher threat items.
However, TSA officials did not conduct the analysis necessary to help them determine whether this screening change would affect TSOs' ability to focus on higher-risk threats. We also reported that, although improvements are being made, homeland security roles and responsibilities within and between the levels of government, and with the private sector, are evolving and need to be clarified. For example, we reported that opportunities exist for TSA to work with foreign governments and industry to identify best practices for securing air cargo, and recommended that TSA systematically compile and analyze information on practices used abroad to identify those that may strengthen the department's overall security efforts. TSA has subsequently reviewed the models used in two foreign countries that rely on government-certified screeners to screen air cargo to facilitate the design of the agency's proposed CCSP. Regarding efforts to respond to in-flight security threats, which, depending on the nature of the threat, could involve more than 15 federal agencies and agency components, in July 2007, we recommended that DHS and other departments document and share their respective coordination and communication strategies and response procedures, to which DHS agreed. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members of the committee may have at this time. For further information on this testimony, please contact Cathleen A. Berrick at (202) 512-3404 or berrickc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to the contact named above, Chris Currie; Joe Dewechter; Vanessa DeVeau; Thomas Lombardi; Steve Morris, Assistant Director; Meg Ullengren; and Margaret Vo made contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States.
It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Since its inception in November 2001, the Transportation Security Administration (TSA) has focused much of its efforts on aviation security, and has developed and implemented a variety of programs and procedures to secure the commercial aviation system. TSA funding for aviation security has totaled about $26 billion since fiscal year 2004. This testimony focuses on TSA's efforts to secure the commercial aviation system through passenger screening, strengthening air cargo security, and watch-list matching programs, as well as challenges that remain. It also addresses crosscutting issues that have impeded TSA's efforts in strengthening security. This testimony is based on GAO reports and testimonies issued from February 2004 through July 2008, including selected updates obtained from TSA officials in June and July 2008. DHS and TSA have undertaken numerous initiatives to strengthen the security of the nation's commercial aviation system, including actions to address many recommendations made by GAO. TSA has focused its efforts on, among other things, more efficiently allocating, deploying, and managing the Transportation Security Officer (TSO) workforce, formerly known as screeners; strengthening screening procedures; developing and deploying more effective and efficient screening technologies; strengthening domestic air cargo security; and developing a government-operated watch-list matching program, known as Secure Flight. For example, in response to GAO's recommendation, TSA developed a plan to periodically review assumptions in its Staffing Allocation Model used to determine TSO staffing levels at airports, and took steps to strengthen its evaluation of proposed procedural changes.
TSA also explored new passenger checkpoint screening technologies to better detect explosives and other threats, and has taken steps to strengthen air cargo security, including increasing compliance inspections of air carriers. Finally, TSA has instilled more discipline and rigor into Secure Flight's systems development, including preparing key documentation and strengthening privacy protections. While these efforts should be commended, GAO has identified several areas that should be addressed to further strengthen security. For example, TSA made limited progress in developing and deploying checkpoint technologies due to planning and management challenges. In addition, TSA faces resource and other challenges in developing a system to screen 100 percent of cargo transported on passenger aircraft in accordance with the Implementing Recommendations of the 9/11 Commission Act of 2007. GAO further identified that TSA faced program management challenges in the development and implementation of Secure Flight, including developing cost and schedule estimates consistent with best practices; fully implementing the program's risk management plan; developing a comprehensive testing strategy; and ensuring that information security requirements are fully implemented. A variety of crosscutting issues have affected DHS's and TSA's efforts in implementing its mission and management functions. For example, TSA can more fully adopt and apply a risk-management approach in implementing its security mission and core management functions, and strengthen coordination activities with key stakeholders. For example, while TSA incorporated risk-based decision making when modifying checkpoint screening procedures, GAO reported that TSA's analyses that supported screening procedural changes could be further strengthened. DHS and TSA have strengthened their efforts in these areas, but more work remains. |
Since 2000, legacy airlines have faced unprecedented internal and external challenges. Internally, the impact of the Internet on how tickets are sold and how consumers search for fares, together with the growth of low cost airlines as a market force accessible to almost every consumer, has hurt legacy airline revenues by placing downward pressure on airfares. More recently, airlines' costs have been driven up by rising fuel prices (see figure 1). This is especially true of airlines that did not have fuel hedging in place. Externally, a series of largely unforeseen events—among them the September 11th terrorist attacks in 2001 and associated security concerns; war in Iraq; the SARS crisis; economic recession beginning in 2001; and a steep decline in business travel—seriously disrupted the demand for air travel during 2001 and 2002. Low fares have constrained revenues for both legacy and low cost airlines. Yields, the amount of revenue airlines collect for every mile a passenger travels, fell for both low cost and legacy airlines from 2000 through 2004 (see figure 2). However, the decline has been greater for legacy airlines than for low cost airlines. During the first quarter of 2005, average yields among both legacy and low cost airlines rose somewhat, although those for legacy airlines still trailed what they were able to earn during the same period in 2004. Legacy airlines, as a group, have been unsuccessful in reducing their costs to become more competitive with low cost airlines. Unit cost competitiveness is key to profitability for airlines because of declining yields. While legacy airlines have been able to reduce their overall costs since 2001, these reductions were largely achieved through capacity cuts and without an improvement in their unit costs. Meanwhile, low cost airlines have been able to maintain low unit costs, primarily by continuing to grow. As a result, low cost airlines have been able to sustain a unit cost advantage over their legacy rivals (see figure 3). 
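The yield and unit cost measures used in this comparison are simple per-mile ratios. The sketch below illustrates the arithmetic; all revenue, expense, and mileage figures are hypothetical, chosen only so the resulting gap mirrors the roughly 2.7 cent per available seat mile advantage discussed in this testimony, and are not GAO data.

```python
def yield_cents(passenger_revenue, revenue_passenger_miles):
    """Yield: cents of revenue collected per mile a passenger travels."""
    return 100 * passenger_revenue / revenue_passenger_miles

def unit_cost_cents(operating_expense, available_seat_miles):
    """Unit cost (CASM): cents of operating expense per available seat mile."""
    return 100 * operating_expense / available_seat_miles

# Hypothetical carriers sized to mirror the 2.7-cent gap reported for 2004.
legacy_casm = unit_cost_cents(11_000_000_000, 100_000_000_000)    # 11.0 cents
low_cost_casm = unit_cost_cents(8_300_000_000, 100_000_000_000)   # 8.3 cents
print(f"unit cost gap: {legacy_casm - low_cost_casm:.1f} cents per ASM")
```

Because both measures are normalized per mile, carriers of very different sizes can be compared directly on pricing and cost.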
In 2004, low cost airlines maintained a 2.7 cent per available seat mile advantage over legacy airlines. This advantage is attributable to lower overall costs and greater labor and asset productivity. During the first quarter of 2005, both legacy and low cost airlines continued to struggle to reduce costs, in part because of the increase in fuel costs. Weak revenues and the inability to realize greater unit cost savings have combined to produce unprecedented losses for legacy airlines. At the same time, low cost airlines have been able to continue producing modest profits as a result of lower unit costs (see figure 4). Legacy airlines have lost a cumulative $28 billion since 2001 and are predicted to lose another $5 billion in 2005, according to industry analysts. First quarter 2005 operating losses (based on data reported to DOT) approached $1.45 billion for legacy airlines. Low cost airlines also reported net operating losses of almost $0.2 billion, driven primarily by ATA's losses. Since 2000, as the financial condition of legacy airlines deteriorated, they built cash balances not through operations but by borrowing. Legacy airlines have lost cash from operations and compensated for operating losses by taking on additional debt, relying on creditors for more of their capital needs than in the past. In the process, several legacy airlines have used all, or nearly all, of their assets as collateral, potentially limiting their future access to capital markets. In sum, airlines are capital- and labor-intensive firms subject to highly cyclical demand and intense competition. Aircraft are very expensive and require large amounts of debt financing to acquire, resulting in high fixed costs for the industry. Labor is largely unionized and highly specialized, making it expensive and hard to reduce during downturns. 
Competition in the industry is frequently intense owing to periods of excess capacity, relatively open entry, and the willingness of lenders to provide financing. Finally, demand for air travel is highly cyclical, closely tied to the business cycle. Over the past decade, these structural problems have been exacerbated by the growth in low cost airlines and increasing consumer sensitivity to differences in airfares based on their use of the Internet to purchase tickets. More recently airlines have had to deal with persistently high fuel prices—operating profitability, excluding fuel costs, is as high as it has ever been for the industry. Airlines seek bankruptcy protection for such reasons as severe liquidity pressures, an inability to obtain relief from employees and creditors, and an inability to obtain new financing, according to airline officials and bankruptcy experts. As a result of the structural problems and external shocks previously discussed, there have been 160 total airline bankruptcy filings since deregulation in 1978, including 20 since 2000, according to the Air Transport Association. Some airlines have failed more than once but most filings were by smaller carriers. However, the size of airlines that have been declaring bankruptcy has been increasing. Of the 20 bankruptcy filings since 2000, half of these have been for airlines with more than $100 million in assets, about the same number of filings as in the previous 22 years. Compared to the average failure rate for all types of businesses, airlines have failed more often than other businesses. As figure 5 shows, in some years, airline failures were several times more common than for businesses overall. With very few exceptions, airlines that enter bankruptcy do not emerge from it. Of the 146 airline Chapter 11 reorganization filings since 1979, in only 16 cases are the airlines still in business. 
Many of the advantages of bankruptcy stem from the legal protection afforded the debtor airline from its creditors, but this protection comes at a high cost in loss of control over airline operations and damaged relations with employees, investors, and suppliers, according to airline officials and bankruptcy experts. Contrary to some assertions that bankruptcy protection has led to overcapacity and underpricing that have harmed healthy airlines, we found no evidence that this has occurred either in individual markets or in the industry overall. Such claims have been made for more than a decade. In 1993, for example, a national commission to study airline industry problems cited bankruptcy protection as a cause for the industry's overcapacity and weakened revenues. More recently, airline executives have cited bankruptcy protection as a reason for industry overcapacity and low fares. However, we found no evidence that this had occurred and some evidence to the contrary. First, as illustrated by figure 6, airline liquidations do not appear to affect the continued growth in total industry capacity. If bankruptcy protection leads to overcapacity, as some contend, then liquidation should take capacity out of the market. However, the historical growth of airline industry capacity (as measured by available seat miles, or ASMs) has continued unaffected by major liquidations. Only recessions, which curtail demand for air travel, and the September 11th attacks appear to have caused the airline industry to trim capacity. This trend indicates that other airlines quickly replenish capacity to meet demand. In part, this can be attributed to the fungibility of aircraft and the availability of capital to finance airlines. Similarly, our research does not indicate that the departure or liquidation of a carrier from an individual market necessarily leads to a permanent decline in traffic for that market. 
We contracted with Intervistas/GA2, an aviation consultant, to examine the cases of six hub cities that experienced the departure or significant withdrawal of service of an airline over the last decade (see table 1). In four of the cases, both local origin-and-destination (i.e., passenger traffic to or from, but not connecting through, the local hub) and total passenger traffic (i.e., local and connecting) increased or changed little because the other airlines expanded their traffic in response. In all but one case, fares either decreased or rose less than 6 percent. We also reviewed numerous other bankruptcy and airline industry studies and spoke to industry analysts to determine what evidence existed with regard to the impact of bankruptcy on the industry. We found two major academic studies that provided empirical data on this issue. Both studies found that airlines under bankruptcy protection did not lower their fares or hurt competitor airlines, as some have contended. A 1995 study found that an airline typically reduced its fares somewhat before entering bankruptcy. However, the study found that other airlines did not lower their fares in response and, more importantly, did not lose passenger traffic to their bankrupt rival and therefore were not harmed by the bankrupt airline. Another study came to a similar conclusion in 2000, this time examining the operating performance of 51 bankrupt firms, including 5 airlines, and their competitors. Rather than examine fares as did the 1995 study, this study examined the operating performance of bankrupt firms and their rivals. This study found that bankrupt firms’ performance deteriorated prior to filing for bankruptcy and that their rivals’ profits also declined during this period. However, once a firm entered bankruptcy, its rivals’ profits recovered. Under current law, legacy airlines’ pension funding requirements are estimated to be a minimum of $10.4 billion from 2005 through 2008. 
These estimates assume the expiration of the Pension Funding Equity Act (PFEA) at the end of this year. The PFEA permitted airlines to defer the majority of their deficit reduction contributions in 2004 and 2005; if this legislation is allowed to expire, payments due from legacy airlines will significantly increase in 2006. According to PBGC data, legacy airlines are estimated to owe a minimum of $1.5 billion this year, rising to nearly $2.9 billion in 2006, $3.5 billion in 2007, and $2.6 billion in 2008. In contrast, low cost airlines have eschewed defined benefit pension plans and instead use defined contribution (401(k)-type) plans. However, pension funding obligations are only part of the sizeable amount of debt that carriers face over the near term. The size of legacy airlines' future fixed obligations, including pensions, relative to their financial position suggests they will have trouble meeting their various financial obligations. Fixed airline obligations (including pensions, long-term debt, and capital and operating leases) in each year from 2005 through 2008 are substantial. Legacy airlines carried cash balances of just under $10 billion going into 2005 (see figure 7) and have used cash to fund their operational losses. These airlines' fixed obligations are estimated to be over $15 billion in both 2005 and 2006, over $17 billion in 2007, and about $13 billion in 2008. While cash from operations can help fund some of these obligations, continued losses and the size of these obligations put these airlines in a sizable liquidity bind. Fixed obligations in 2008 and beyond will likely increase as payments due in 2006 and 2007 may be pushed out and new obligations are assumed. The magnitude of legacy airlines' future pension funding requirements is attributable to the size of the pension shortfall that has developed since 2000. 
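The per-year PBGC estimates above can be totaled directly. A minimal check: because the 2006 and 2008 amounts are given only approximately ("nearly" and "about"), the rounded sum of roughly $10.5 billion is consistent with the $10.4 billion minimum for 2005 through 2008 cited earlier.

```python
# Per-year minimum pension contributions owed by legacy airlines, in
# $ billions, per the PBGC data cited above (2006 and 2008 approximate).
pension_due = {2005: 1.5, 2006: 2.9, 2007: 3.5, 2008: 2.6}

total = sum(pension_due.values())
print(f"2005-2008 total: about ${total:.1f} billion")  # ~$10.5 billion
```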
As recently as 1999, airline pensions were overfunded by $700 million based on Securities and Exchange Commission (SEC) filings; by the end of 2004, legacy airlines reported a deficit of $21 billion (see figure 8), despite the termination of the US Airways pilots plan in 2003. Since these filings, the total underfunding has declined to approximately $13.7 billion, due in part to the termination of the United Airlines plans and the remaining US Airways plans. The extent of underfunding varies significantly by airline. At the end of 2004, prior to terminating its pension plans, United reported underfunding of $6.4 billion, which represented over 40 percent of United's total operating revenues in 2004. In contrast, Alaska reported pension underfunding of $303 million at the end of 2004, or 13.5 percent of its operating revenues. Since United terminated its pensions, Delta and Northwest now appear to have the most significant pension funding deficits—over $5 billion and nearly $4 billion, respectively—which represent about 35 percent of 2004 operating revenues at each airline. The growth of pension underfunding is attributable to three factors. Asset losses and low interest rates. Airline pension asset values dropped nearly 20 percent from 2001 through 2004 along with the decline in the stock market, while future obligations have steadily increased due to declines in the interest rates used to calculate the liabilities of plans. Management and labor union decisions. Pension plans have been funded far less than they could have been on a tax deductible basis. PBGC examined 101 cases of airline pension contributions from 1997 through 2002, and found that while the maximum deductible contribution was made in 10 cases, no cash contributions were made in 49 cases where they could have been. When airlines did make tax deductible contributions, the amounts were often far less than the maximum permitted. 
For example, the airlines examined could have contributed a total of $4.2 billion on a tax deductible basis in 2000 alone, but only contributed about $136 million despite recording profits of $4.1 billion (see figure 9). In addition, management and labor have sometimes agreed to salary and benefit increases beyond what could reasonably be afforded. For example, in the spring of 2002, United’s management and mechanics reached a new labor agreement that increased the mechanics’ pension benefit by 45 percent, but the airline declared bankruptcy the following December. Pension funding rules are flawed. Existing laws and regulations governing pension funding and premiums have also contributed to the underfunding of defined benefit pension plans. As a result, financially weak plan sponsors, acting within the law, have not only been able to avoid contributions to their plans, but also increase plan liabilities that are at least partially insured by PBGC. Under current law, reported measures of plan funding have likely overstated the funding levels of pension plans, thereby reducing minimum contribution thresholds for plan sponsors. And when plan sponsors were required to make additional contributions, they often substituted “account credits” for cash contributions, even as the market value of plan assets may have been in decline. Furthermore, the funding rule mechanisms that were designed to improve the condition of poorly funded plans were ineffective. Other lawful plan provisions and amendments, such as lump sum distributions and unfunded benefit increases may also have contributed to deterioration in the funding of certain plans. Finally, the premium structure in PBGC’s single-employer pension insurance program does not encourage better plan funding. The cost to PBGC and participants of defined benefit pension terminations has grown in recent years as the level of pension underfunding has deepened. 
When Eastern Airlines defaulted on its pension obligations of nearly $1.7 billion in 1991, for example, claims against the insurance program totaled $530 million in underfunded pensions and participants lost $112 million. By comparison, the US Airways and United pension terminations cost PBGC $9.6 billion in combined claims against the insurance program and reduced participants' benefits by $5.2 billion (see table 2). In recent pension terminations, because of statutory limits, active and higher-salaried employees generally lost more of their promised benefits compared with retirees and lower-salaried employees. For example, PBGC generally does not guarantee benefits above a certain amount, currently $45,614 annually per participant at age 65. For participants who retire before age 65, the guaranteed benefits are even less; participants who retire at age 60 are currently limited to $29,649. Commercial pilots often end up with substantial benefit cuts when their plans are terminated because they generally have high benefit amounts and are also required by FAA to retire at age 60. Far fewer nonpilot retirees are affected by the maximum payout limits. For example, at US Airways, fewer than 5 percent of retired mechanics and attendants faced benefit cuts as a result of the pension termination. Tables 3 and 4 summarize the expected cuts in benefits for different groups of United's active and retired employees. It is important to emphasize that relieving legacy airlines of their defined benefit funding costs will help alleviate immediate liquidity pressures, but it does not fix their underlying cost structure problems, which are much greater. Pension costs, while substantial, are only a small portion of legacy airlines' overall costs. As noted previously in figure 3, the cost of legacy airlines' defined benefit plans accounted for a 0.4 cent, or 15 percent, difference between legacy and low cost airline unit costs. 
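The statutory guarantee limits described above determine how much of a promised benefit survives a plan termination. A minimal sketch, assuming only the two age-specific caps quoted in this testimony; the actual PBGC schedule reduces the cap gradually for each year of retirement before age 65.

```python
# PBGC maximum annual guarantee at plan termination, by retirement age,
# using the two figures cited above (a real schedule covers many ages).
PBGC_CAP = {65: 45_614, 60: 29_649}

def guaranteed_benefit(promised_annual, retirement_age):
    """PBGC pays the promised benefit, but no more than the cap for that age."""
    return min(promised_annual, PBGC_CAP[retirement_age])

# A pilot retiring at the FAA-mandated age 60 with a hypothetical $100,000
# promised pension is cut to the cap; a mechanic retiring at 65 on $30,000
# is unaffected.
print(guaranteed_benefit(100_000, 60))  # 29649
print(guaranteed_benefit(30_000, 65))   # 30000
```

This is why pilots, with high benefits and mandatory early retirement, bore disproportionately large cuts.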
The remaining 85 percent of the unit cost differential between legacy and low cost carriers is attributable to factors other than defined benefit pension plans. Moreover, even if legacy airlines terminated their defined benefit plans, doing so would not fully eliminate the pension-related portion of the unit cost differential because, according to labor officials we interviewed, other plans would replace them. While the airline industry was deregulated 27 years ago, the full effect on the airline industry's structure is only now becoming evident. Dramatic changes in the level and nature of demand for air travel, combined with an equally dramatic evolution in how airlines meet that demand, have forced a drastic restructuring of the competitive structure of the industry. Excess capacity in the airline industry since 2000 has greatly diminished airlines' pricing power. Profitability, therefore, depends on which airlines can most effectively compete on cost. This development has allowed inroads for low cost airlines and forced wrenching change upon legacy airlines that had long competed based on a high-cost business model. The historically high number of airline bankruptcies and liquidations is a reflection of the industry's inherent instability. However, bankruptcy should not be confused with the cause of the industry's instability. There is no clear evidence that bankruptcy has contributed to the industry's economic ills, including overcapacity and underpricing, and there is some evidence to the contrary. Equally telling is how few airlines that have filed for bankruptcy protection are still doing business. Clearly, bankruptcy has not afforded these companies a special advantage. Bankruptcy has become a means by which some legacy airlines are seeking to shed their costs and become more competitive. 
However, the termination of pension obligations by United Airlines and US Airways has had substantial and widespread effects on the PBGC and thousands of airline employees, retirees, and other beneficiaries. Liquidity problems, including $10.4 billion in near-term pension contributions, may force additional legacy airlines to follow suit. Some airlines are seeking legislation to allow more time to fund their pensions. If their plans are frozen so that future liabilities do not continue to grow, allowing an extended payback period may reduce the likelihood that these airlines will file for bankruptcy and terminate their pensions in the coming year. However, unless these airlines can reform their overall cost structures and become more competitive with their low cost rivals, this will be only a temporary reprieve. This concludes my statement. I would be pleased to respond to any questions that you or other Members of the Subcommittee may have at this time. For further information on this testimony, please contact JayEtta Hecker at (202) 512-2834 or by e-mail at heckerj@gao.gov. Individuals making key contributions to this testimony include Paul Aussendorf, Anne Dilger, Steve Martin, Richard Swayze, and Pamela Vines. | Since 2001, the U.S. airline industry has confronted unprecedented financial losses. Two of the nation's largest airlines--United Airlines and US Airways--went into bankruptcy, terminating their pension plans and passing the unfunded liability to the Pension Benefit Guaranty Corporation (PBGC). PBGC's unfunded liability was $9.6 billion; plan participants lost $5.2 billion in benefits. 
Considerable debate has ensued over airlines' use of bankruptcy protection as a means to continue operations, often for years. Many in the industry and elsewhere have maintained that airlines' use of this approach is harmful to the industry, in that it allows inefficient carriers to reduce ticket prices below those of their competitors. This debate has come into even sharper focus with pension defaults. Critics argue that by not having to meet their pension obligations, airlines in bankruptcy have an advantage that may encourage other companies to take the same approach. GAO is completing a report for the Committee due later this year. Today's testimony presents preliminary observations in three areas: (1) the continued financial difficulties faced by legacy airlines, (2) the effect of bankruptcy on the industry and competitors, and (3) the effect of airline pension underfunding on employees, airlines, and the PBGC. U.S. legacy airlines have not been able to reduce their costs sufficiently to profitably compete with low cost airlines that continue to capture market share. Internal and external challenges have fundamentally changed the nature of the industry and forced legacy airlines to restructure themselves financially. The changing demand for air travel and the growth of low cost airlines have kept fares low, forcing these airlines to reduce their costs. They have struggled to do so, however, especially as the cost of jet fuel has jumped. So far, they have been unable to reduce costs to a level competitive with their low cost rivals. As a result, legacy airlines have continued to lose money--$28 billion since 2001. Although some industry observers have asserted that airlines undergoing bankruptcy reorganization contribute to the industry's financial problems, GAO found no clear evidence that historically airlines in bankruptcy have financially harmed competing airlines. 
Bankruptcy is endemic to the industry; 160 airlines filed for bankruptcy since deregulation in 1978, including 20 since 2000. Most airlines that entered bankruptcy have not survived. Moreover, despite assertions to the contrary, available evidence does not suggest that airlines in bankruptcy contribute to industry overcapacity or that bankrupt airlines harm competitors by reducing fares below what other airlines are charging. While bankruptcy may not be detrimental to rival airlines, it is detrimental for pension plan participants and the PBGC. The remaining legacy airlines with defined benefit pension plans face over $60 billion in fixed obligations over the next 4 years, including $10.4 billion in pension obligations--more than some of these airlines may be able to afford given continued losses. While cash from operations can help fund some of these obligations, continued losses and the size of these obligations put these airlines in a sizable liquidity bind. Moreover, legacy airlines still face considerable restructuring before they become competitive with low cost airlines. |
BLM leases federal lands to private companies for the production of onshore oil, gas, and coal resources, generally through a competitive bidding process. BLM offers for lease parcels of land nominated by industry and the public, as well as some parcels that BLM itself identifies. If BLM receives any bids, called bonus bids, on an offered lease that are at or above a minimum acceptable bid amount, the lease is awarded to the highest bidder, and, for oil and gas, a lump-sum payment in the amount of the bid is due to ONRR when BLM issues the lease. For coal, the winning bidder pays the bonus bid in five equal payments, with one of the payments being paid at the time of the lease sale. For oil and gas leases, BLM requires a uniform national minimum acceptable bid of $2 per acre. For coal leases, BLM requires a minimum bid of $100 per acre, and the bid must meet or exceed BLM’s estimate of the fair market value of the lease. In addition to the competitive bidding process, companies may obtain leases through two additional processes. For oil and gas, tracts of land that do not receive a bid in the initial offer are made available noncompetitively the next day and remain available for noncompetitive leasing for a period of 2 years after the initial competitive auction, with priority given to offers based on the date and time of filing. For coal, companies may request that a certain amount of contiguous land be added to an existing lease in what is called a lease modification process. Lands acquired through lease modification are added to the existing lease without a competitive bidding process, but the federal government must receive the fair market value of the lease of the added lands either by cash payment or adjustment of the royalty applicable to the lands added to the lease. Leases specify a rental rate—a fixed annual charge until production begins on the leased land, or, when no production occurs, until the end of the period specified in the lease. 
For oil and gas leases, generally the rental rate is $1.50 per acre for the first 5 years and $2 per acre each year thereafter. For coal, the rental rate is at least $3 per acre. Oil and gas parcels are generally leased for a primary term of 10 years, but lease terms may be extended if, for example, oil or gas is produced in paying quantities. Coal parcels are leased for an initial 20-year period and may be extended if certain conditions are met. Once production of the resource starts, the federal government is to receive royalty payments based on a percentage of the value of production—known as the royalty rate. For onshore oil and gas leases, the Mineral Leasing Act of 1920 sets the royalty rate for competitive leases at not less than 12.5 percent of the amount or value of production. However, until January 2017, BLM regulations generally established a fixed royalty rate of 12.5 percent. For noncompetitive leases, the act, as amended, sets the royalty rate at a fixed rate of 12.5 percent. For coal, royalty rates depend on the type of mine—surface or underground. BLM is authorized to establish royalty rates above 12.5 percent for surface mines, but according to agency officials, BLM generally sets the rate at 12.5 percent, the statutory and regulatory minimum royalty rate. For underground mines, BLM sets the rate at 8 percent, the rate prescribed in regulation. Royalties for oil and gas are calculated based on the value of the resource at the wellhead, and any deductions or allowances are taken after the royalty rate is applied. For coal, certain costs are deducted from the price of coal at the first point of sale, including transportation and processing allowances, before the amount is calculated for royalty purposes. The royalty rate paid by the coal company after such allowable deductions have been factored in, along with any royalty rate reductions, is called the effective royalty rate. 
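Because coal allowances are deducted from the first-sale price before the royalty rate is applied, the effective royalty rate is lower than the nominal rate. A hypothetical worked example; the price and allowance figures are illustrative, not drawn from this report.

```python
def effective_royalty_rate(first_sale_price, allowances, nominal_rate):
    """Royalty is levied on the price net of allowable deductions, so the
    effective rate (royalty owed / gross price) falls below the nominal rate."""
    royalty_owed = (first_sale_price - allowances) * nominal_rate
    return royalty_owed / first_sale_price

# Surface-mine coal at a hypothetical $12.00/ton with $1.00/ton in
# transportation and processing allowances, at the 12.5 percent rate:
rate = effective_royalty_rate(12.00, 1.00, 0.125)
print(f"effective rate: {rate:.1%}")  # ~11.5%
```

Royalty rate reductions, where granted, would lower the effective rate further.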
Federal royalty rates differ from the royalty rates that state governments charge for production on state lands and the rates that companies pay for production on private lands. Table 1 shows federal royalty rates and rates for six states that represented more than 90 percent of federal oil, gas, and coal production in fiscal year 2015. According to state officials, as of March 2017, royalty rates for oil and gas production charged by these states vary but tend to be higher than federal royalty rates, while royalty rates for coal production charged by these states are generally the same as federal rates. Less is known about the royalty rates for production on private lands because of the proprietary nature of lease contracts, but a few published reports suggest that private royalty rates range from 12.5 percent to 25 percent for oil and gas production and from 3 percent to 10 percent for coal production. In fiscal year 2016, approximately 157 million barrels of oil, 3.14 trillion cubic feet of gas, and 295 million tons of coal were produced on federal lands, according to ONRR data. These numbers represented about 6 percent of total U.S. onshore oil production, 10 percent of total U.S. onshore gas production, and 40 percent of total U.S. coal production, according to our analysis of EIA data. The federal government collected approximately $2.5 billion in gross revenue from the production of these resources on federal lands in fiscal year 2016. The majority of this revenue generally has come from royalties—about $2 billion, or 80 percent of total revenues in fiscal year 2016. As figure 1 shows, royalties comprised a larger percentage of the total revenue from oil and gas production than from coal production in 2016. (See appendix I for additional data on production and revenues from oil, gas, and coal development on federal and American Indian lands from fiscal year 2007 through 2016.) 
Private companies develop oil, gas, and coal on federal lands within the context of broader energy markets, and conditions in those markets have changed. Overall oil and gas production has increased after decades of decline or general stability—between 2008 and 2016, total U.S. oil production increased by 77 percent and gas production increased by 35 percent. During that same period, federal onshore oil production increased by 59 percent while federal onshore gas production declined by 18 percent. According to EIA, almost all of the increase in overall oil and gas production has centered on oil and gas plays located in shale and other tight rock formations, spurred by advances in production technologies such as horizontal drilling and hydraulic fracturing. However, as figure 2 shows, major tight oil and shale gas plays—those plays that, according to EIA data, have represented more than 90 percent of growth in oil and gas development from 2011 to 2016—are mostly located on nonfederal lands. In 2016, about 15 percent of the major tight oil and shale gas plays in the contiguous United States overlapped federal lands, according to our analysis of EIA and the U.S. Geological Survey data. In contrast to oil and gas production, both federal and total U.S. coal production have declined since 2008. Federal coal production declined 19 percent from 2008 to 2015, while total U.S. coal production declined more than 23 percent in the same period. According to EIA, the decline in total U.S. coal production can be attributed to a lower international demand for coal, increased environmental regulations, and low natural gas prices (natural gas is an alternative for coal in the electricity market). As figure 3 shows, about 5 percent of the major coal basins in the contiguous United States overlapped federal lands in 2013, according to our analysis of EIA and U.S. Geological Survey data. 
Major coal basins that overlap with federal lands are primarily concentrated in the Powder River Basin in parts of Wyoming and Montana. Raising federal royalty rates for onshore oil, gas, and coal could decrease production on federal lands, according to studies we reviewed and stakeholders we interviewed. Increasing royalty rates would increase the total costs for producers, thus making production on federal lands less attractive to companies, according to some stakeholders. Companies may respond by producing less on federal lands and more on nonfederal lands. However, stakeholders disagreed about the extent to which production could decrease because they said other factors may influence energy companies’ development decisions. Oil and gas. We identified two studies—one by the CBO and one by Enegis, LLC—that modeled the effects of different policy scenarios on oil and gas production on federal lands. Both studies suggested that a higher royalty rate could decrease production on federal lands by either a small amount or not at all. The CBO study concluded that if the royalty rate were raised to 18.75 percent, “reductions in production would be small or even negligible” over 10 years, particularly if the increased federal royalty rate remained equal to or below the royalty rates for production on state or private lands. As discussed above, the current 12.5 percent federal royalty rate is generally the same as or lower than the rates charged by the six states in which more than 90 percent of federal oil and gas was produced in fiscal year 2015. In addition, the Enegis, LLC, study showed that demand for new federal competitive leases—or the extent to which oil and gas companies would compete for new leases—would generally decrease over 25 years if the royalty rate were raised to 16.67 percent, 18.75 percent, or 22.5 percent. 
For each of these three royalty rate increases, the study examined several different scenarios that varied with respect to key factors, including company costs and company responses. The study showed declines in production under all scenarios except those in which companies completely absorbed the higher costs resulting from higher royalty rates. In scenarios in which companies could absorb the costs—potentially in market conditions in which higher oil and gas prices help buffer companies from the effects of increased royalty rates—there would be no change in production levels. The three increased royalty rates modeled resulted in oil production declines ranging from 0 barrels to approximately 70 million barrels over 25 years (or about 2.8 million barrels per year—the equivalent of about 1.8 percent of fiscal year 2016 onshore federal oil production). The three increased royalty rates modeled also resulted in gas production declines over 25 years ranging from 0 cubic feet to 85 billion cubic feet (or about 3.4 billion cubic feet per year—the equivalent of less than 1 percent of onshore federal gas production in fiscal year 2016). Coal. We also identified two studies that analyzed the effects of different policy scenarios on coal production on federal lands. The first study, by the CEA, examined how raising the federal royalty rate could affect coal production on federal lands after 2025 using a series of scenarios. Under the first scenario, equivalent to raising royalty rates to 17 percent in 2025, the study predicted that federal production would decrease by 3 percent once the changes were fully implemented. The other two scenarios, each equivalent to raising royalty rates to 29 percent in 2025, predicted that federal production would decrease by 7 percent. 
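The 25-year cumulative declines from the Enegis study annualize into the per-year and percent-of-baseline figures quoted above. This is a hedged sketch using the report's own numbers; the helper function is hypothetical, not anything from the Enegis model.

```python
# Hedged sketch: convert 25-year cumulative production declines into
# per-year amounts and percentages of a fiscal year 2016 baseline.
# Baselines are the report's onshore federal production figures.
def annualized_decline(cumulative_decline, years, annual_baseline):
    per_year = cumulative_decline / years
    pct_of_baseline = 100.0 * per_year / annual_baseline
    return per_year, pct_of_baseline

# Oil: up to ~70 million barrels over 25 years vs. 157 million barrels/year.
oil_per_year, oil_pct = annualized_decline(70e6, 25, 157e6)

# Gas: up to 85 billion cubic feet over 25 years vs. 3.14 trillion cf/year.
gas_per_year, gas_pct = annualized_decline(85e9, 25, 3.14e12)

print(f"Oil: {oil_per_year / 1e6:.1f}M bbl/yr ({oil_pct:.1f}%)")
print(f"Gas: {gas_per_year / 1e9:.1f}B cf/yr ({gas_pct:.2f}%)")
```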
The second study, by Mark Haggerty, Megan Lawson, and Jason Pearcy, modeled an increase in the effective royalty rate, which is the rate companies actually pay after processing and transportation allowances are factored in. The study found that the modeled increase in the effective royalty rate led to a decrease in federal coal production of less than 1 percent per year. Results of the two studies differed in how an increase in coal royalty rates might affect nonfederal coal production. The CEA study determined that an increase in federal royalty rates would raise the national price of coal, improving the competitiveness of nonfederal coal and slightly increasing nonfederal coal production. According to the CEA study, coal mines in Wyoming and Montana—representing more than 86 percent of federal coal production in fiscal year 2015—are some of the largest, most productive, and lowest-cost mines. According to EIA, in 2015 the average market price of coal from the Appalachian region, which comes primarily from production on state and private lands, was $60.61 per ton, while the average market price of coal from the Powder River Basin, where the majority of federal coal is produced, was $13.12 per ton. We previously reported that underground mining, which is mostly concentrated in the eastern region, is more costly than surface mining, resulting in a higher sale price. At the same time, eastern coal has more heat, or energy, content per ton than western coal, which raises the value of eastern coal. CEA concluded that raising the royalty rate would decrease this price gap between Appalachian and Powder River Basin coal, thus making Appalachian and other nonfederal coal slightly more competitive. The study by Haggerty, Lawson, and Pearcy states that substitution between federal and nonfederal coal could occur, but is unlikely for several reasons, including federal ownership in western states and the inherent difference in the qualities of coal. 
The study states that substitution between federal and nonfederal coal could occur if federal and nonfederal coal are in close proximity. However, the authors note that where federal ownership of coal dominates, in states like Wyoming, Montana, and Colorado where the majority of federal coal is produced, states tend to adopt federal policy changes. Also according to the study, transportation from the mine to the power plant is highly specialized, and power plants are engineered to maximize efficiency of the specific type of coal in the region. Switching from one type of coal to another could involve substantial conversion costs for coal power plants. Stakeholders we interviewed suggested that several factors could influence the extent to which oil, gas, and coal production might be affected if federal royalty rates were increased, including the following. Market conditions and prices. Some stakeholders noted that market conditions and prices play an important role in determining whether raising federal royalty rates could affect production on federal lands. BLM officials suggested that raising federal royalty rates is less likely to have a negative effect on production when oil and gas prices are high. For example, increasing royalty rates from 12.5 percent to 16.67 percent would increase the cost of producing oil by about $2 a barrel at oil prices as of March 2017. In addition, according to a few stakeholders we interviewed and a 2015 report by the Congressional Research Service, any negative effect on production from higher rates could be limited to or affect areas with marginal oil and gas wells, which are usually wells with low production rates and/or higher production costs. As for coal, some stakeholders said that in an already challenging market, increased costs could further discourage production. According to EIA data, total U.S. coal production declined 23 percent from 2008 to 2015. 
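BLM's $2-a-barrel example above follows directly from the royalty definition: the added cost per barrel is the rate difference times the oil price. In the sketch below, the roughly $48-per-barrel price is an assumption broadly consistent with benchmark oil prices in March 2017, not a figure taken from the report, and the function name is hypothetical.

```python
# Hedged sketch of BLM's per-barrel example. A royalty is a percentage
# of the value of production, so a rate increase adds
# (new_rate - old_rate) * price to the cost of each barrel.
def added_royalty_cost(old_rate, new_rate, price_per_barrel):
    return (new_rate - old_rate) * price_per_barrel

# Assumption: ~$48/barrel, roughly in line with oil prices in March 2017.
extra = added_royalty_cost(0.125, 0.1667, 48.0)
print(f"Added cost: ${extra:.2f} per barrel")  # about $2 per barrel
```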
In a 2012 report, we found that various market and regulatory factors may influence the future use of coal, including the price of natural gas, demand for electricity, and environmental regulations. A few stakeholders we interviewed said there has been little interest in further coal development in their regions, which include the western and midwestern regions of the country. Since fiscal year 2012, the number of coal lease sales on federal lands has generally declined. We previously reported that there was limited competition for coal leases because of the significant capital investment and time required; additionally, from January 2016 to March 2017 the Secretary of the Interior placed a pause on significant new federal coal leasing decisions, with limited exemptions and exclusions. Cost advantages of different resources. A few stakeholders told us that the competitiveness of federal lands for development depends less on the royalty rate charged and more on the location of the best resources—such as areas with low exploration and production costs. For example, as discussed above, most of the areas with major U.S. tight oil and shale gas plays and major U.S. coal basins do not overlap with federal lands. A few stakeholders suggested that an increase in the federal royalty rates for coal would not cause companies to switch from federal to nonfederal coal because of the cost advantages of federal coal, which is primarily concentrated in surface mines in the West. According to EIA data, all coal extracted from the Powder River Basin in 2015 was from surface mining, which we previously reported has lower extraction and production costs. In contrast, the majority of coal production from the Appalachian region in 2015 was from underground production, which we previously reported is more costly to extract. 
Increasing the royalty rate on federal lands would not cause operators to switch from federal to nonfederal coal, according to a few stakeholders, because companies producing coal on federal lands would still have a cost advantage over companies producing coal on nonfederal lands. Regulatory burden of federal development. Some stakeholders we spoke with stated that there is already a higher regulatory burden for oil and gas companies to develop resources on federal lands than on nonfederal lands, and one stakeholder noted that an increase in federal royalty rates would decrease the competitiveness of federal lands versus state or private lands. In addition, BLM officials noted that about half the public comments BLM received through its 2015 Advance Notice of Proposed Rulemaking also noted there is a higher regulatory burden on federal lands. According to BLM officials, when federal and nonfederal coal are located on adjoining tracts the cost of production will be identical unless the nonfederal land has a different royalty rate, which officials say is unlikely. Assuming the royalty rate is the same, officials stated that the main difference between federal and nonfederal coal is the additional regulatory burden of producing on federal lands. In addition, a few stakeholders stated that companies may avoid mining federal lands for coal when possible in order to avoid the required environmental assessments, which add time to the leasing process. Officials from two state offices we interviewed said that the history of increasing royalty rates for oil and gas production on state lands suggests that increasing the federal royalty rate would not have a clear impact on production. In particular, officials from Colorado and Texas said that they have raised their state royalty rates without a significant effect on production on state lands. 
In February 2016, Colorado increased its royalty rate for oil and gas production from 16.67 percent to 20 percent, and, according to state officials, there had been no slowdown in interest in new leases as of August 2016. In fact, Colorado state officials said they were unsure whether the higher royalty rate played much of a role in companies’ decision making. Additionally, Texas officials told us that over 30 years ago, Texas began charging a 25-percent royalty for most oil and gas leases on state lands, and this increase has not had a noticeable impact on production or leasing. Officials at BLM said about half of the public comments they received through BLM’s 2015 Advance Notice of Proposed Rulemaking suggested that an increase in royalty rates would not have a clear impact on production. Raising federal royalty rates for onshore oil, gas, and coal could increase overall federal revenues, according to studies we reviewed and stakeholders we interviewed. Higher rates could have two opposing effects on federal revenues. First, as discussed above, raising royalty rates could lead to decreased production on federal lands, and, consequently, decreased revenues. Second, revenues would increase on any production that does occur because of higher royalty rates on that production. The studies we reviewed show that raising federal royalty rates could increase federal revenues for oil, gas, and coal. Some stakeholders we interviewed said any effects on federal revenue would depend on how increasing royalty rates for oil, gas, and coal would affect bonus bid revenue, while others said overall market conditions, among other factors, need to be considered. Oil and gas. The studies we reviewed for oil and gas estimate that raising the federal royalty rate could increase net federal revenue between $5 million and $38 million per year (equivalent to around 0.7 percent to around 5.2 percent of net oil and gas royalties in fiscal year 2016). 
According to the CBO study, the effect on federal revenue would initially be small but would increase over time because a change in the royalty rate would apply only to new leases and the affected parcels would not go into production immediately. For example, CBO found that 6 percent of royalties collected in 2013 came from leases issued in the previous 10 years. CBO estimated that if the royalty rate for onshore oil and gas parcels were raised from 12.5 percent to 18.75 percent, net federal revenue would increase by $200 million over the first 10 years, and potentially by much more over the following decade, depending on market conditions. Similarly, according to the Enegis study, net federal revenues would increase under the scenarios that modeled raising the royalty rate to 16.67 percent, 18.75 percent, or 22.5 percent. Under these scenarios, estimated increases in net federal revenue range from $125 million to $939 million over 25 years. Coal. Both studies for coal also suggested that a higher royalty rate could lead to an increase in federal revenues. For example, the modeling scenarios in the CEA study that raised the royalty rate to the equivalent of 17 percent or 29 percent predicted a range of increases in government revenues from $0 to $730 million annually after 2025, with approximately half of that revenue going to the federal government. By comparison, in fiscal year 2016, the federal government collected more than $536 million in coal royalty payments, according to ONRR data. The revenue range included zero to take into account the possibility that bonus bids could be lost entirely, but the study stated that this was an extremely conservative assumption, and that the increase in royalty revenue would be vastly larger than any decrease in bonus bid revenue. The study by Haggerty, Lawson, and Pearcy suggested that total average royalty revenues could increase by $141 million per year if the effective royalty rate were raised. 
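Annualizing the multi-year estimates above shows how they line up with the $5 million to $38 million per-year range cited earlier. The simple averaging below is illustrative only, since revenue would actually phase in over time as new leases enter production; the helper function is hypothetical.

```python
# Hedged sketch: average the studies' multi-year revenue increases over
# their horizons for comparison with the per-year range in the text.
def annual_average(total_increase, years):
    return total_increase / years

cbo = annual_average(200e6, 10)          # CBO: $200M over first 10 years
enegis_low = annual_average(125e6, 25)   # Enegis low scenario, 25 years
enegis_high = annual_average(939e6, 25)  # Enegis high scenario, 25 years

print(f"CBO: ${cbo / 1e6:.0f}M/yr; "
      f"Enegis: ${enegis_low / 1e6:.0f}M-${enegis_high / 1e6:.0f}M/yr")
```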
This study did not consider the effect on bonus bid revenue from a royalty rate increase. Stakeholders we interviewed also suggested that the effect on bonus bid revenue could influence the extent to which raising federal royalty rates would increase revenues from oil, gas, and coal production. For example, some stakeholders stated that companies would be more likely to offer lower bonus bids if they had to pay higher royalty payments, but a few stakeholders believed that the net impact on federal revenue would be minimal because royalties are a more significant portion of total revenues than bonus bids. For oil and gas, royalties could offset losses from other revenue sources, such as bonus bids and rents. Although royalties also constitute the majority of revenue for coal, bonus bids represent a larger percentage of total revenue in comparison with oil and gas revenue. For example, in fiscal year 2016 only 8 percent of total revenue from oil and gas development on federal lands was from bonus bids, while in the same year the comparable figure for coal was 42 percent. However, a few stakeholders said that any decrease in bonus bids from an increase in coal royalty rates would likely be offset by a larger increase in royalty revenue. In addition, BLM officials stated that raising the royalty rate could make some federal coal uneconomical to mine, resulting in fewer royalty payments to the federal government. BLM officials stated that an operator can justify a capital investment to produce coal on federal lands if the potential for revenue outweighs the cost of production. According to officials, increasing the royalty rate would add to the cost of production, which could cause an operator to bypass federal coal, thus causing the government to miss out on revenue. 
As discussed above, some stakeholders said any effects on federal revenue would depend on how increasing royalty rates for oil, gas, and coal would affect bonus bid revenue, and others said overall market conditions, among other factors, need to be considered. We provided a draft of this report to Interior for review and comment. The agency provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Interior, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Figure 4 shows trends in onshore oil, gas, and coal production and revenue on federal lands over the last 10 years. Tables 2 and 3 show which federal agencies have ownership over the associated federal surface lands that overlap the major tight oil and shale gas plays and major coal basins, as well as which major plays and basins are on tribal lands. Tables 4, 5, and 6 show royalty and other revenues and oil, gas, and coal production on federal lands; on American Indian lands; and, for comparison, in federal offshore areas for fiscal years 2015 and 2016. In addition to the contact named above, Quindi Franco (Assistant Director), Richard Burkard, Greg Campbell, Colleen Candrl, Tara Congdon, Cindy Gilbert, Michael Kendix, Courtney Lafountain, Jessica Lewis, John Mingus, Cynthia Norris, Caroline Prado, Sara Sullivan, Kiki Theodoropoulos, Barbara Timmerman, and Amy Ward-Meier made key contributions to this report. 
In fiscal year 2016, the federal government collected about $2.5 billion in revenue associated with onshore oil, gas, and coal production on federal lands, including about $2 billion from royalties. Federal royalty rates sometimes differ from the rates states charge for production on state lands. For example, state oil and gas rates tend to be higher than federal royalty rates and state coal rates are generally the same as federal rates in the six states representing more than 90 percent of federal oil, gas, and coal production in fiscal year 2015. The explanatory statement accompanying the Consolidated Appropriations Act for fiscal year 2016 includes a provision for GAO to review issues related to royalty rates. This report describes what is known about how raising federal royalty rates would affect (1) oil, gas, and coal production on federal lands and (2) the federal revenue associated with such production. GAO reviewed an extensive list of studies and selected for more in-depth review four that modeled the effects of raising federal royalty rates—one study conducted by the Congressional Budget Office, one by the Council of Economic Advisers in the Executive Office of the President, one prepared for the Bureau of Land Management, and one by researchers. GAO also interviewed officials from federal and state agencies, industry groups, non-governmental organizations, academia, and other knowledgeable stakeholders. GAO is not making recommendations in this report. Interior reviewed a draft of this report and provided technical comments that GAO incorporated as appropriate. Raising federal royalty rates—a percentage of the value of production paid to the federal government—for onshore oil, gas, and coal resources could decrease oil, gas, and coal production on federal lands but increase overall federal revenue, according to studies GAO reviewed and stakeholders interviewed. 
However, the extent of these effects is uncertain and depends, according to stakeholders, on several other factors, such as market conditions and prices. Production. One study GAO reviewed found that oil and gas production could decrease by less than 2 percent per year if royalty rates increased from their current 12.5 percent to 22.5 percent, based on fiscal year 2016 production data. Another study stated the effect on production could be “negligible” over 10 years if royalty rates increased to 18.75 percent, particularly if the increased federal royalty rate remained equal to or below the royalty rates for production on state or private lands. Regarding coal, one study suggested that raising the federal royalty rate for coal to 17 percent would decrease production on federal lands by up to 3 percent after changes were fully implemented after 2025, while a second study said that increasing the effective rate—the rate actually paid by companies after processing and transportation allowances have been factored in, along with any royalty rate reductions—might decrease production on federal lands by less than 1 percent per year. Some stakeholders said that several other factors could influence the extent to which oil, gas, and coal production might decline. For example, some stakeholders said current market conditions, the cost advantages of different resources, and the regulatory burden associated with production on federal lands could influence the extent to which production might decline. Revenue. The oil and gas studies that GAO reviewed estimated that raising the federal royalty rate could increase net federal revenue between $5 million and $38 million per year. 
One of the studies stated that net federal revenue would increase under three scenarios that modeled raising the royalty rate from the current 12.5 percent to 16.67 percent, 18.75 percent, or 22.5 percent, while the other study noted that the effect on federal revenue would initially be small but would increase over time. Both coal studies suggested that a higher royalty rate could lead to an increase in federal revenues. One of the studies suggested that raising the royalty rate to 17 percent or 29 percent might increase federal revenue by up to $365 million per year after 2025. The other study suggested that increasing the effective rate could bring in an additional $141 million per year in royalty revenue. Stakeholders GAO interviewed cited other factors that could influence the extent to which raising federal royalty rates could increase revenues—in particular, how bonus bids, another revenue source, could be affected. Some of the stakeholders stated that companies would be more likely to offer lower bids to obtain a lease for the rights to extract resources if they had to pay higher royalties. 
Saudi Arabia is known as the birthplace of Islam and is the site of Islam’s two holiest shrines located in Mecca and Medina. (See figure 1 for a map of Saudi Arabia.) Millions of Muslims from all over the world visit Mecca to undertake the pilgrimages of hajj and umrah. Like hajj, charitable giving, or zakat, is one of the five pillars or duties of Islam. Zakat, a form of tithe or charity payable for those in need, is an annual flat rate of 2.5 percent of a Muslim’s assessable capital. Zakat is broader and more pervasive than Western ideas of charity, functioning as a form of income tax, educational assistance, and foreign aid. Islam is a source of political legitimacy and guidance for the Saudi government, and the country is governed on the basis of Islamic law or Sharia. As far back as the mid-eighteenth century, the founder of the Saudi ruling dynasty, Muhammad bin Saud, allied himself with the conservative Muslim scholar, Muhammad bin Abdul Wahhab. The teachings of Abdul Wahhab, which called for a strict interpretation of Islam, form the basis of Islam as practiced by the majority of people in Saudi Arabia. The Saudi government is a monarchy headed by the king of Saudi Arabia, who also serves as the Prime Minister and the Custodian of the Two Holy Mosques in Mecca and Medina. The current king is King Abdullah bin Abdulaziz Al-Saud. The king is expected to retain consensus of important elements in Saudi society, such as the Saudi royal family and religious leaders. The Saudi royal family consists of thousands of members, some of whom head government ministries. In the mid-1940s, Saudi Arabia started large scale oil production, and by the 1970s an increase in oil prices led to a rapid increase in Saudi Arabia’s per capita income, making it comparable to that of developed countries during this time period. 
In part, this prosperity, combined with the Islamic religious obligation toward charitable giving or zakat, led to the establishment of charitable organizations headquartered in Saudi Arabia, known as multilateral charitable organizations. These organizations— which include the Muslim World League and the World Assembly of Muslim Youth—were established in the 1960s and 1970s by Saudi royal decree to spread Islam and provide humanitarian assistance around the world, and received funds from the Saudi government and citizens. Relations between the United States and Saudi Arabia have a long historical context. Since the establishment of the modern Saudi state in 1932, and throughout the Cold War, the governments of the United States and Saudi Arabia developed a relationship based on shared interests, including energy production and combating communism. For instance, both Saudi Arabia and the United States became major supporters of the Afghan mujahideen’s struggle against the Soviet invasion in 1979. However, U.S. foreign policies related to the Middle East, such as the Israeli-Palestinian conflict, have at times tested U.S.-Saudi government relations and fueled a growing anti-American sentiment among some segments of the Saudi population, as well as in many countries with predominantly Muslim populations. (See figure 2 for more information on the historical development of U.S.-Saudi relations.) As we have reported previously, this negative perception of U.S. foreign policy, as well as other factors such as economic stagnation, a disproportionate youth and young adult population, and repressive and corrupt governments in certain Middle Eastern countries, have contributed to the global spread of an extremist ideology promoting hatred, intolerance, and violence that threatens U.S. national security interests. According to the U.S. 
government and experts, some Saudi individuals and charitable organizations have knowingly or unknowingly provided financial assistance for terrorism and violent extremism. For example, according to a report by the 9/11 Commission, some charitable organizations, such as the Saudi-based Al Haramain Islamic Foundation, had been exploited by extremists as funding mechanisms to further their goal of violence against non-Muslims. U.S. government and other expert reports have linked some Saudi donations to the global propagation of religious intolerance and support to terrorist activities. For example, Osama bin Laden and Al Qaeda have Saudi roots and accumulated millions of dollars using legitimate charities, nongovernmental organizations, and mosques, as well as businesses such as banks and other financial institutions, to help raise and move their funds. However, experts agree that it is difficult to determine the extent to which donors are aware of the ultimate disposition of the funds provided. Although the Saudi government took actions to combat terrorists following the September 11, 2001, attacks, Al Qaeda’s attacks against Saudi and U.S. citizens in Saudi Arabia in 2003 and 2004 marked a turning point in the Saudi government’s efforts to combat terrorism and terrorism financing, as the Saudi government came to see Al Qaeda as a threat to the Saudi regime. Between 2003 and 2005, the Saudi government reported that it took a number of actions to combat terrorism and terrorism financing within the Kingdom, some with U.S. assistance, such as increasing the size, training, and professionalism of Saudi security forces. The Saudi and U.S. governments also undertook joint designations of several branches of the Al Haramain Foundation as financiers of terrorism under UN Security Council Resolution 1267. (See appendix III for key Saudi agencies involved in efforts to combat terrorism and terrorism financing within Saudi Arabia.) Since 2005, several U.S. 
agencies have conducted training and provided technical assistance to improve the capacity of the Saudi government to combat terrorism and its financing. These efforts have included enhancing the investigative capability of Saudi ministries and assessing the security of Saudi oil installations, among others. U.S. agencies involved in these efforts include DOD, DOE, DHS, DOJ, State, and Treasury, as well as the intelligence community. (See appendix IV for U.S. agencies providing training and technical assistance to Saudi Arabia to counter terrorism and terrorism financing.) The Congress has enacted several laws that require U.S. agencies to report on terrorism and terrorism financing issues, including those related to U.S. collaboration with Saudi Arabia. (See table 1 for the selected legislation and its requirements.) In January 2008, in response to the Implementing Recommendations of the 9/11 Commission Act of 2007, State submitted to the Congress a report on the U.S. strategy to collaborate with and assist Saudi Arabia in areas including countering terrorism and terrorism financing. The goals and objectives contained in the January 2008 document coincide with the plans for collaboration with Saudi Arabia contained in State’s Mission Strategic Plans (MSP) for Saudi Arabia for fiscal years 2006 to 2009. To measure progress toward the goal of building an active antiterrorist coalition, the MSPs contain a number of performance targets; however, some of the targets relating to countering terrorism financing were removed, even though U.S. agencies continue to work on those issues in collaboration with the Saudi government. Following the September 11, 2001, terrorist attacks, the U.S. mission in Saudi Arabia reported expanding its ongoing efforts to collaborate with the government of Saudi Arabia to combat terrorism. 
State’s MSPs, previously called Mission Performance Plans, for fiscal years 2006 to 2009 contain goals related to expanding the Saudi government’s ability to counter terrorism and preventing financial support to extremists. According to State documents, the MSP is developed each year by the overseas missions to facilitate long-term diplomatic and assistance planning and provide a strategic plan to set country-level U.S. foreign policy goals, resource requests, and performance targets. State reports that its Washington-based bureaus draw on MSPs to gauge the effectiveness of policies and programs in the field and formulate requests for resources. In January 2008, State submitted to the Congress a report on U.S. strategy for collaborating with Saudi Arabia on countering terrorism and terrorism financing in response to the Implementing Recommendations of the 9/11 Commission Act of 2007. According to this document, the goal of strengthening the U.S. government’s counterterrorism partnership with Saudi Arabia is to be achieved through bilateral cooperation to enhance the Saudi government’s ability to combat terrorists and to prevent financial support to extremists. The goals and objectives contained in the January 2008 document coincide with the plans for collaboration with Saudi Arabia contained in MSPs for Saudi Arabia. According to State officials, DOD and the Office of the Director of National Intelligence approved the 2008 document. Moreover, officials from various agencies we met with—including Treasury, DOJ, and DHS—noted they are active participants in interagency strategic discussions on collaboration with Saudi Arabia regarding countering terrorism and terrorism financing. According to Saudi officials, the Saudi government agrees with U.S. goals to counter terrorism and terrorism financing in Saudi Arabia. Additionally, Saudi officials told us that generally there has been strong collaboration between U.S. 
and Saudi agencies, highlighting areas such as information sharing and training of security forces. However, Saudi officials noted some concerns related to U.S. implementation of certain efforts to counter terrorism financing. For example, Saudi officials expressed concern about some designations of individuals and organizations as supporters of terrorism, specifically suggesting that these designations may violate the right to a fair legal process. Additionally, Saudi officials noted concerns regarding past public statements by senior Treasury officials, such as those in April 2008, that individuals based in Saudi Arabia are a top source of funding for Al Qaeda and other terrorist organizations. However, according to Saudi officials, since Treasury placed a deputy attaché in Riyadh, Saudi Arabia, in December 2008, there has been a significant increase in information sharing and decrease in issues of concern. In addition, the Saudi government has developed its own strategy to combat terrorism, which Saudi officials characterized as focusing on three areas or pillars: “men, money, and mindset.” The “men” pillar of the strategy focuses on arresting or killing terrorists. The “money” pillar of the strategy focuses on measures to counter terrorism financing, such as tightening controls on financial transactions and cash couriers, as well as imposing restrictions on Saudi-based charities. Finally, the “mindset” pillar of the strategy focuses on preventing extremism by addressing the ideology that is used to recruit and indoctrinate potential terrorists. Saudi officials stated that the “mindset” pillar is the most challenging aspect of its counterterrorism strategy and will be a long-term challenge. Nonetheless, Saudi officials told us the Saudi government is committed to combating extremist ideology through programs such as public information campaigns and terrorist rehabilitation programs. 
The MSP for Saudi Arabia contains goals, performance indicators, and performance targets for counterterrorism efforts in Saudi Arabia. While the MSP is a State document, officials from DOD, DHS, DOJ, and Treasury told us they were aware of relevant portions of the document. According to the fiscal year 2009 MSP for Saudi Arabia, the highest-priority U.S. goal is to strengthen the U.S. antiterrorist coalition with Saudi Arabia. Progress toward this goal is measured by the indicator “Saudi effectiveness on measures to combat terrorism and terrorism financing activities.” To build an active antiterrorist coalition with the government of Saudi Arabia, the U.S. Mission to Saudi Arabia focuses on objectives related to (a) enhancing the Saudi government’s ability to combat terrorists, and (b) preventing financial support to extremists. To measure progress toward the goal of building an active antiterrorist coalition, the MSP contains a number of performance targets each fiscal year. (See figure 3 for MSP objectives and targets for fiscal years 2006 to 2009.) On an annual basis, the U.S. Mission in Saudi Arabia releases an updated MSP, which may contain revised performance targets related to building an active antiterrorist coalition with Saudi Arabia. State officials told us that MSP performance targets are periodically revised to ensure each fiscal year’s targets are most relevant to assessing progress toward U.S.-Saudi efforts to counter terrorism and terrorism financing. Specifically, between fiscal years 2006 and 2009, certain performance targets were added, removed, or updated to reflect changing conditions in Saudi Arabia. For instance, new performance targets were added for fiscal year 2009 that did not exist in previous years, including enhancing Saudi-Yemeni cooperation and conducting trials for terror suspects.
In addition, the fiscal year 2008 target related to the public condemnation of terrorism by the Saudi government and religious leaders was omitted in fiscal year 2009 because, according to State officials, it had been met. However, some targets related to preventing financial support to extremists were removed from the MSP, even though U.S. agencies reported that efforts to address these issues are important and ongoing. Specifically, a performance target related to the establishment of a Saudi Charities Commission existed in the MSP from fiscal years 2006 to 2008, but was removed for fiscal year 2009. According to State officials, the performance target related to the Charities Commission was dropped because State determined that other strategies to regulate charitable organizations in Saudi Arabia might be more effective than a commission. Likewise, a target related to implementation of cash courier regulations was part of the MSP in fiscal years 2006 and 2007, but was removed in fiscal years 2008 and 2009. According to State, the performance target related to cash courier regulations was dropped because the Saudi government had made progress instituting and enforcing these regulations. However, officials from DHS—which takes the lead on issues such as cash couriers—told us they were not consulted on the decision to remove the cash courier target and stated that it should be reinstated in the MSP. As a result, the MSP lacks targets against which U.S. agencies can monitor and assess performance in certain areas of U.S. concern related to countering terrorism financing. According to State and Treasury officials, even though targets related to these activities were removed from MSPs, enforcement of regulations to prevent terrorism financing is still an important U.S. goal, which U.S. agencies pursue through diplomatic or training activities. U.S.
agencies and Saudi officials report progress countering terrorism and terrorism financing within Saudi Arabia, but noted challenges, particularly those related to preventing the flow of alleged financial support to extremists outside Saudi Arabia. In April 2009, the U.S. embassy assessed progress towards its goal of building an active antiterrorist coalition with Saudi Arabia as “on target.” With regard to counterterrorism efforts, U.S. and Saudi officials report progress enhancing the Saudi government’s ability to combat terrorists, and assess that these efforts have disrupted Al Qaeda’s terrorist network within Saudi Arabia. While citing progress, U.S. and Saudi officials noted Saudi Arabia’s neighbor, Yemen, is emerging as a base from which Al Qaeda terrorists can launch attacks against U.S. and Saudi interests. With regard to preventing financial support to extremists, U.S. and Saudi officials also report progress, citing, among other examples, the Saudi government’s implementation of cash courier regulations, ban on the transfer of charitable funds outside the Kingdom without government approval, and arrest and prosecution of individuals providing ideological or financial support to terrorism. Despite these gains, U.S. officials remain concerned about the ability of Saudi individuals and multilateral charitable organizations, as well as other individuals visiting Saudi Arabia, to support terrorism and violent extremism outside Saudi Arabia. Moreover, both Saudi and U.S. officials cited limited Saudi enforcement capacity and terrorist financiers’ use of cash couriers as challenges to Saudi efforts to prevent financial support to extremists. U.S. and Saudi officials report progress enhancing the Saudi government’s ability to combat terrorists and assess that Saudi efforts have disrupted Al Qaeda’s terrorist network within Saudi Arabia. The U.S. 
embassy’s MSP for fiscal year 2008 lists three performance targets related to enhancing the Saudi government’s ability to combat terrorists. These targets specifically pertain to (1) prevention of successful terrorist strikes in Saudi Arabia, (2) improved U.S.-Saudi cooperation on critical infrastructure protection, and (3) public condemnation of terrorist activities by Saudi government and religious leaders. U.S. officials report progress on these performance targets in fiscal year 2008, as well as the completion of related targets in fiscal years 2006 and 2007. While citing progress, U.S. and Saudi officials noted Saudi Arabia’s neighbor, Yemen, is emerging as a base from which Al Qaeda terrorists can launch attacks, which could challenge gains made in combating terrorists in Saudi Arabia. (See figure 4 for a summary of reported progress related to MSP performance targets.) Officials from State report that there have been no successful terrorist strikes in Saudi Arabia since February 2007. Officials attributed this outcome, in part, to Saudi law enforcement actions. Experts we interviewed also cited decreased popular support for Al Qaeda, due to Muslim casualties caused by the organization, as a factor in the declining number of attacks. Both U.S. and Saudi officials cited strong intelligence and security cooperation between the U.S. and Saudi governments related to counterterrorism. Moreover, experts with whom we spoke generally agreed that the Saudi government is serious about combating terrorism. State officials report that, while there was a major terrorist incident in Saudi Arabia in August 2009, there have been no attacks on energy infrastructure since 2006 and no successful terrorist strikes in Saudi Arabia since February 2007. On August 27, 2009, a suicide bomber attempted to assassinate Saudi Arabia’s Assistant Minister of Interior for Security Affairs, Prince Mohammed bin Nayef, who is also the head of Saudi Arabia’s counterterrorism efforts. 
While the bomber, a Saudi national who was on the Saudi government’s “most wanted” list, was killed during the attack, no one else suffered serious injuries. According to the Saudi embassy, the attacker was from a region bordering Yemen and had offered to coordinate the return of Saudi fugitives in Yemen. Al Qaeda in the Arabian Peninsula claimed responsibility for the attack. Officials from State and the Saudi embassy told us that, while a major attack, the strike was not successful, as the Prince was not seriously harmed. Similarly, a White House press release described the attack as unsuccessful. DOD noted, however, that Saudi authorities did not disrupt this plot. The White House press release also stated that the attack underscores the continued threat posed by Al Qaeda and the importance of strong counterterrorism cooperation between the United States and Saudi Arabia. Saudi Arabia has also undertaken law enforcement activities, including arresting and prosecuting terror suspects. According to State reporting, since May 2003, the Saudi government has killed or captured Al Qaeda’s operational Saudi-based senior leadership, as well as most of the network’s key operatives and many of the Kingdom’s most wanted individuals. Between 2003 and 2008, U.S. and Saudi officials report the Saudi government arrested or killed thousands of terrorism suspects, including those suspected of planning attacks on Saudi oil fields and other vital installations. Further, the Saudi government has published three “most wanted” lists, and U.S. and Saudi sources report Saudi Arabia has captured or killed a number of suspects on these lists. In 2008, the Saudi government announced terrorism trials for approximately 1,000 individuals indicted on various terrorism-related charges. In July 2009, the Saudi government announced that 330 suspects had court trials, of whom seven were acquitted and the rest received jail terms ranging from a few months to 30 years. 
Saudi officials noted that the convictions can be appealed in the Saudi supreme court. U.S. and Saudi officials told us, and State reports, that such trials could help combat terrorist ideologies. Moreover, the U.S. and Saudi governments have implemented joint activities to strengthen Saudi law enforcement capabilities, such as State-led training to strengthen the Saudi government’s antiterrorism investigative management and VIP protection capability. Apart from joint training courses, U.S. officials told us they have close working-level relationships with their Saudi counterparts and noted that Saudi security cooperation is significant. State, Treasury, FBI, DOD, and DHS officials told us that U.S. law enforcement and intelligence agencies have benefited and continue to benefit from Saudi information on individuals and organizations. The former U.S. ambassador to Saudi Arabia described U.S.-Saudi counterterrorism cooperation as among the most productive in the world. U.S. and Saudi officials report progress in cooperation on protecting critical infrastructure targets, such as oil installations, in Saudi Arabia. Officials cited the signing of a technical cooperation agreement between the United States and Saudi Arabia in May 2008, as well as activities to implement the agreement, as evidence of deeper cooperation. U.S. officials told us that the protection of critical Saudi infrastructure, particularly energy production facilities, has been a priority since 2006, following an unsuccessful terrorist attack at the Abqaiq oil facility, one of the world’s largest oil processing facilities, located in Saudi Arabia. According to State reporting, critical infrastructure protection is a vital national interest, as a successful terrorist attack that disrupts Saudi oil production would have a devastating impact on the U.S. and global economies. In May 2008, the U.S.
Secretary of State and the Saudi Arabian Minister of Interior signed a technical cooperation agreement to provide U.S. technical assistance to Saudi Arabia in the area of critical infrastructure protection. A joint operational entity called the Office of Program Management-Ministry of Interior (OPM-MOI) was established to implement the agreement, consisting of representatives from State, DOD, DOE, and the Saudi Arabian government. The Saudi Arabian government agreed to fully fund the agreement’s implementation, including all expenses incurred by U.S. agencies for services and contractor costs. OPM-MOI is assisting the Saudi government in identifying critical infrastructure vulnerabilities; developing security strategies to protect critical infrastructure; and recruiting and training a new MOI force, the Facilities Security Force, to protect its critical infrastructure. State is the lead agency for the implementation of the agreement, DOE is contributing expertise in conducting facility assessments and developing security strategies for Saudi energy production facilities, and DOD will contribute expertise in training and equipping the Facilities Security Force, which is intended to have more than 35,000 personnel when fully developed. According to U.S. officials, the current focus of OPM-MOI is protecting critical infrastructure related to oil production. However, officials told us OPM-MOI could expand to cooperate with the Saudi government in a number of other areas, including border security, maritime security, and cyber security. Other U.S. agencies, such as DHS and the Coast Guard, are expected to participate in OPM-MOI as the critical infrastructure mission expands. Saudi government and religious leaders have publicly condemned terrorism and terrorism financing. Several high-ranking Saudi government leaders have spoken out against terrorism and extremism. 
In his remarks at the UN’s Culture of Peace conference in November 2008, King Abdullah bin Abdulaziz noted that “terrorism and criminality are the enemies of every religion and every culture.” In July 2008, the King made similar remarks at the World Conference on Dialogue in Madrid. Many other members of the Saudi government, including the Ministers of Islamic Affairs, Foreign Affairs, and Interior, have issued statements against terrorism and extremism since 2001. Moreover, religious leaders, such as the Grand Mufti of Saudi Arabia, have issued a number of statements critical of terrorism. In the spring of 2008, the Grand Mufti issued a statement on the evils of terrorism and warned Saudi citizens not to listen to those who use religion to promote terrorism. In October 2007, he delivered a sermon cautioning young Saudis not to travel abroad to participate in jihad. In the same speech, he urged Saudi citizens not to finance terrorism and to be mindful of how their charitable contributions are distributed. Later that same year, the Grand Mufti stated that terrorists should be subject to severe punishment in accordance with Islamic law. According to State, other Saudi Islamic scholars and officials also voiced support for the Grand Mufti’s statement. Further, in May 2009, the Second Deputy Prime Minister of Saudi Arabia organized the first national conference on “intellectual security,” which was to address the “intellectual abnormality” that, according to the Saudi government, is “the main reason for terrorism.” The conference resulted in a communiqué stressing the importance of moderation, tolerance, cross-cultural dialogue, nonviolence, and the establishment of national strategies for promoting those values. The declaration pointed out the moderate nature of Islam and warned against the dangers of embracing deviant ideologies.
The Deputy Prime Minister also expressed hope that the small number of Saudis participating in deviant groups abroad would renounce their beliefs and return home, noting that Saudi Arabia “is ready to welcome its citizens if they decide to opt for the correct path.” In addition to public statements by Saudi government and religious figures, the Saudi government has implemented a number of domestic activities designed to undermine extremist ideology within the Kingdom. These activities include the following:

Initiating a media campaign: According to U.S. and Saudi officials, the Saudi government has implemented an extensive media campaign against extremist thought. State reports this campaign includes the use of advertisements, billboards, text messages, and the Internet. (Figure 5 shows examples of Saudi Arabia’s public outreach campaign.) See http://www.gao.gov/media/video/gao-09-883/ for a video of a Saudi government-sponsored TV advertisement designed to counter terrorism.

Distributing publications and other materials: Officials from the Saudi Ministries of Islamic Affairs and Interior told us Saudi Arabia had produced a variety of books and pamphlets designed to combat extremist ideology. Saudi officials estimate that approximately 1.8 million books have been prepared as part of this effort. (Figure 6 shows a sample of Saudi literature to combat extremism provided to GAO by Saudi officials.) Additionally, according to Saudi officials, the Saudi government distributes, via mosques and schools, CDs and cassette tapes with lectures and seminars addressing the issues of terrorism and extremism.

Monitoring religious leaders: According to State reporting and Saudi officials, the Saudi government continues to monitor the preaching and writings of religious leaders and to reeducate those who advocate extremist messages. Saudi officials from the Ministry of Islamic Affairs told us their monitoring has covered approximately 20,000 of Saudi Arabia’s estimated 70,000 mosques, including all of the large mosques that hold Friday prayers. Additionally, Saudi officials told us they have held hundreds of seminars and lectures in mosques to help ensure that religious leaders preach a moderate message to the public.

Monitoring school teachers: According to officials from the Saudi Ministry of Islamic Affairs, the Saudi government monitors school teachers to identify those who teach extremism. Once identified, such teachers are put through reeducation programs.

Monitoring Internet sites: According to Ministry of Interior officials, Saudi authorities are monitoring Internet chat rooms that could be sources of militant recruitment. In September 2008, Saudi authorities reported the arrest of three Saudi citizens and two expatriates for promoting militant activities on Internet forums, engaging online users in dialogue, spreading misleading information, and recruiting youth to travel abroad for inappropriate purposes. According to Saudi reports, the Ministry of Interior called on all Saudis to be vigilant and urged them not to listen to those who promote corruption and sedition. In addition, the Ministry published the online usernames utilized by the suspects.

Encouraging dialogue: The Saudi government reports that it supports the activities of the Sakinah (or Tranquility) Campaign, an independent nongovernmental organization that engages in dialogue, via the Internet, with Internet users who have visited extremist Web sites. Additionally, in 2008, the King of Saudi Arabia initiated a series of conferences to promote interfaith dialogue. The first conference was hosted by the Muslim World League in Mecca in June 2008 and consisted of 500 Muslim scholars from around the world.
The second conference was held in Madrid, Spain, in July 2008 and included 300 delegates representing different faiths, including Islam, Buddhism, Christianity, Hinduism, and Judaism. Finally, the King of Saudi Arabia was joined by other heads of state at a Special Session of the UN General Assembly on interfaith dialogue in November 2008. The Saudi government operates rehabilitation programs to reeducate those arrested for supporting terrorism or extremism, as well as those returning from the U.S. detention facilities at Guantanamo Bay, to reintegrate them into society. Rehabilitation programs take place both in Saudi prisons and in halfway houses outside of prisons, known as aftercare centers. In March 2009, we visited the aftercare center—called the Mohammed bin Nayef Center for Advisory and Care—in Riyadh and spoke with staff members as well as participants in the program, including those arrested for terrorism and violent extremism in Saudi Arabia and those formerly detained at Guantanamo Bay. Staff members told us the rehabilitation center seeks to “reeducate” its participants by engaging them in religious debates and providing psychological counseling. The program generally consists of one-on-one and group sessions between an offender and religious scholars, psychiatrists, and psychologists about the offender’s beliefs. Program officials attempt to persuade the offenders that their religious justification for their actions is based upon a corrupted understanding of Islam. Psychological counseling includes traditional methods as well as activities such as art therapy. Staff members told us the program draws heavily on participants’ family and tribal relations. Family members can visit and telephone the rehabilitation center, and rehabilitation participants are allowed leave to attend family events such as weddings and funerals. For instance, when we visited the center, we spoke with family members visiting their relatives.
Those who complete the program, and are deemed eligible for release, are provided social support to assist with their reintegration into society, such as counseling, job opportunities, and stipends. Moreover, these social services are extended to the family and tribal members of released participants, as a way to involve participants’ larger social network in their rehabilitation. Saudi officials told us that individuals generally participate in the program for 6 to 8 months. Saudi officials report such rehabilitation programs, including those in prisons and an aftercare center, have treated 4,300 individuals overall. Specifically, Saudi officials told us that, as of March 2009, the aftercare center had served approximately 250 individuals, with a recidivism rate of about 20 percent. Experts with whom we spoke generally praised the Saudi rehabilitation program, but offered some caution on the methodology used to measure its results. State reports it is monitoring these rates closely. Saudi officials told us that former Guantanamo detainees account for most of the individuals who have recidivated from the aftercare center. For example, in January 2009, two former participants at the center appeared in an Al Qaeda recruiting video filmed in Yemen. One of the individuals in the video has since turned himself in to Saudi authorities. Saudi officials acknowledge such cases illustrate the difficulties associated with assessing which participants should be released from the rehabilitation center, but told us they are working to refine their assessment criteria. U.S. and Saudi officials with whom we spoke told us that, despite the challenges associated with recidivism, Saudi rehabilitation programs have demonstrated some positive results. Moreover, U.S. officials told us these activities, among others, demonstrate the Saudi government’s commitment to undermining extremist ideology within the Kingdom. 
(See http://www.gao.gov/media/video/gao-09-883/ for a video of GAO’s visit to a Saudi government-operated rehabilitation center.) While U.S. and Saudi officials told us that Saudi Arabia had made progress in enhancing its ability to combat terrorists, they assessed that political instability in Yemen, Saudi Arabia’s neighbor, could create challenges to counterterrorism efforts. U.S. and Saudi officials expressed concern that Yemen, due in part to a lack of government control and its proximity to Saudi Arabia, is emerging as a base from which Al Qaeda terrorists can launch attacks against Saudi and U.S. interests in Saudi Arabia. For example, as noted previously, in August 2009, Al Qaeda in the Arabian Peninsula—which is based in Yemen—claimed responsibility for the failed assassination attempt against Saudi Arabia’s Assistant Minister of Interior for Security Affairs. State has listed Yemen as a terrorist safe haven in its Country Reports on Terrorism since 2005. In April 2009, in its Country Reports on Terrorism, State reported that, despite some successes against Al Qaeda, the response of the government of Yemen to the terrorist threat was intermittent due to its focus on internal security concerns. Moreover, State noted that border security between Saudi Arabia and Yemen remained a problem. Saudi officials also cited political instability in Yemen, as well as the porous border between Yemen and Saudi Arabia, as challenges to their counterterrorism efforts. To address this issue, the Deputy Foreign Minister of Saudi Arabia told us that the Saudi government is providing Yemen with assistance in a number of areas, including counterterrorism, education, and health. Saudi officials also stated that Saudi Arabia is building an electronic fence on the Saudi-Yemen border. U.S. and Saudi officials report progress in preventing financial support to extremists, citing improved Saudi regulation and enforcement capacity. Between fiscal years 2006 and 2008, the U.S.
embassy’s MSP listed a number of performance targets related to preventing financial support to extremists, which include (1) the United States providing additional training and Saudi banks increasing reporting to the Saudi Financial Investigation Unit, (2) the Saudi government providing accounting of assets seized from individuals and organizations designated as terrorists by the United Nations (UN) 1267 Committee and taking legal action against nationals and groups providing financial or ideological support to terrorists, (3) the Saudi Charities Commission naming senior staff and beginning operations, and (4) the Saudi government implementing and enforcing cash courier regulations. U.S. officials report progress on several of these performance targets. However, U.S. officials remain concerned about the ability of Saudi individuals and multilateral charitable organizations, as well as other individuals visiting Saudi Arabia, to support terrorism and violent extremism outside of Saudi Arabia. U.S. officials also noted that limited Saudi enforcement capacity and terrorist financiers’ use of cash couriers pose challenges to Saudi efforts to prevent financial support to extremists. (For a summary of reported progress related to MSP performance targets from fiscal year 2006 to fiscal year 2008, see figure 7.) In 2005, the government of Saudi Arabia established the Saudi Financial Investigation Unit, which acts as Saudi Arabia’s financial intelligence unit (FIU). Located in the Ministry of Interior, the Saudi FIU receives and analyzes suspicious transaction reports and other information from a variety of sources, such as banking institutions, insurance companies, and government departments. Treasury’s Financial Crimes Enforcement Network, which is the FIU for the United States, and the FBI have provided training to the Saudi FIU since its establishment in 2005. 
According to Treasury officials, these training activities were designed to build the capacity of the Saudi FIU and assist it in meeting international standards, with the ultimate goal of having the Saudi FIU attain membership in the Egmont Group, an international body of financial intelligence units. Saudi officials told us that the capacity of the Saudi FIU has increased since 2005. For instance, they noted the number of suspicious transaction reports they analyze grew from approximately 38 per month in 2006 to 110 per month in 2008, while the staff of the FIU has grown from approximately 80 individuals in 2006 to 130 individuals in March 2009. Moreover, since 2005, Saudi officials told us the Saudi FIU had conducted information exchanges with FIUs in the United States and a number of countries in the region, and said they expected these exchanges to increase once the Saudi FIU becomes a member of the Egmont Group. Similarly, Treasury officials told us Saudi Arabia’s membership in the Egmont Group would assist in preventing financial support to extremists by facilitating information exchanges between the Saudi FIU and its counterparts. In May 2009, the Saudi FIU met the requirements for membership and became an official member of the Egmont Group. According to U.S. officials and State reporting in 2007 and 2008, the Saudi government has taken legal action against terrorist financiers, based on its laws and regulations related to combating terrorist financing. In 2003, the Saudi Arabian government enacted an anti-money laundering law, which provided a statutory basis for considering money laundering and terrorism financing as criminal offenses. That same year, the Saudi Arabian Monetary Agency (SAMA) issued updated anti-money laundering and counterterrorist financing guidelines for the Saudi banking and nonbank financial system. 
The guidelines contain a number of provisions, such as requiring that banks (1) have mechanisms to monitor all types of “Specially Designated Nationals” as listed by SAMA, (2) strictly adhere to SAMA guidance on opening accounts and dealing with charity and donation collection, and (3) use software to monitor customers to detect unusual transaction patterns. SAMA has also issued “know your customer” guidelines, requiring banks to freeze accounts of customers who do not provide updated account information. In July 2004, members of the Financial Action Task Force (FATF) and the Gulf Cooperation Council assessed Saudi Arabia’s legal and regulatory practices with respect to countering terrorism financing. They found Saudi Arabia to be compliant or largely compliant with most FATF benchmarks—internationally recognized legal and regulatory standards on money laundering and terrorism financing. Since 2007, U.S. and Saudi officials report that the government of Saudi Arabia has arrested and prosecuted a number of individuals suspected of financing terrorism. For instance, State reports that the Saudi government arrested 56 suspected terrorist financiers in 2008 and prosecuted 20 of them. According to Saudi embassy reports, over 40 people were arrested in 2007 for providing financial support to terrorists. As noted earlier in this report, in July 2009, the Saudi government announced the convictions of 330 terrorism suspects who were charged with crimes, including affiliation with terrorist organizations as well as facilitating and financing terrorism. U.S. officials we spoke with cited these arrests, as well as those against terrorist cells more generally, as having a disruptive effect on financing networks.
Despite these gains, State notes in its most recent Country Reports on Terrorism that the United States continues to “urge the government of Saudi Arabia to pursue and prosecute terrorist financiers more vigorously.” Moreover, State’s 2009 International Narcotics Control Strategy Report (INCSR) cited the Saudi government as “partially compliant” on obligations related to UN Security Council resolutions on terrorism financing. UN Security Council Resolution 1267 requires member states to impose certain restrictions on individuals or entities associated with Al Qaeda and the Taliban, such as freezing their assets and preventing their entry into or transit through member states’ territories (travel ban). In April 2009, State reported that the Saudi government had taken action against individuals designated by the UN 1267 Committee by freezing their accounts and seizing their assets. Moreover, State reports that SAMA provides the names of suspected terrorists and terrorist organizations designated by the UN 1267 Committee to all financial institutions under its supervision. Related to the UN 1267 travel ban provision, State officials told us that the Saudi government’s enforcement of this provision had not been consistent, particularly during hajj. According to Saudi officials, the Saudi government’s policy is to allow all Muslims to visit the Kingdom during hajj to fulfill their religious obligations and to not arrest individuals during their pilgrimage. However, Saudi officials noted they would share information with the United States about certain individuals, such as those designated under UN 1267, visiting the Kingdom for religious obligations. State and Treasury have reported that some Saudi charitable organizations have been a major source of financing to extremist and terrorist groups. 
In 2002, the Saudi government announced its intention to establish a National Commission for Relief and Charitable Work Abroad, commonly known as the Charities Commission, to oversee all private Saudi charitable activities abroad. U.S. officials told us, and Saudi officials confirmed, that the Saudi Charities Commission is not operational. Although the Charities Commission is not operational, the Saudi government has established regulations barring Saudi charitable organizations from sending contributions abroad via banks and other formal financial channels without first receiving the approval of the Saudi government. Saudi officials told us these regulations are preferable to a Charities Commission, as they more effectively prevent the flow of funds outside of Saudi Arabia. Similarly, some U.S. officials and experts with whom we spoke stated that the current ban might be preferable to a Saudi Charities Commission, as a ban may be more effective in preventing financial support from reaching extremist groups. Other U.S. officials noted that while more effective oversight of charitable organizations is needed, it is not necessary that such oversight take the form of a Charities Commission. Rules related to the banking activities of charitable organizations in Saudi Arabia were first adopted in 2003 and updated in 2008. However, there is some disagreement over whether these rules apply to all Saudi charitable organizations. According to U.S. officials and State reporting, charitable organizations in Saudi Arabia include organizations termed “charities”—which tend to operate within the Kingdom—and organizations dubbed “multilateral” charitable organizations—which are based in Saudi Arabia but have branches in other parts of the world. Multilateral charitable organizations based in Saudi Arabia include the Muslim World League, the International Islamic Relief Organization (IIRO), and the World Assembly of Muslim Youth. 
The 2003 rules applied to charitable organizations and prohibited them from making cash disbursements and transferring money outside of Saudi Arabia. U.S. officials have testified that the 2003 regulations did not apply to multilateral organizations. While State reported in the 2006, 2007, and 2008 INCSRs that the government of Saudi Arabia stated the 2003 regulations applied to international charities, each report also contained language recommending the Saudi government enhance its oversight of charities with overseas operations. Despite the ban on charities transferring money outside of Saudi Arabia enacted in 2003, a number of charitable organizations with links to Saudi Arabia have been designated by Treasury since 2005 as financing terrorist activities. In August 2006, Treasury designated branches of the IIRO in Indonesia and the Philippines, as well as a Saudi national employed as the Executive Director of the Eastern Province Branch of IIRO, for facilitating fundraising for Al Qaeda and affiliated terrorist groups. Further, from 2002 to 2004, the United States designated 13 branches of the Al Haramain Islamic Foundation (Al Haramain), including several branches that were designated by both the U.S. and Saudi governments. In 2004, the Saudi government announced that Al Haramain was being dissolved. In June 2008, Treasury noted that despite Saudi government efforts, which had largely prevented Al Haramain from operating in its own name, the organization’s leadership had attempted to reconstitute itself and parts of Al Haramain continued to operate. Treasury therefore designated all branches of Al Haramain, including its headquarters in Saudi Arabia, for having provided support to Al Qaeda as well as other terrorists and terrorist organizations. 
After these designations, in rules dated December 2008, the government of Saudi Arabia updated its banking regulations to include specific language related to multilateral organizations, including the Muslim World League, IIRO, and the World Assembly of Muslim Youth. The rules state that transfers from the accounts of these multilateral organizations to any party outside of the Kingdom shall not be allowed without approval from SAMA, and that any contributions approved for transfer may only be used in ways specified by SAMA. Saudi officials from SAMA and the Ministry of Foreign Affairs with whom we spoke told us that no charitable contributions, including those from multilateral charitable organizations, can be sent abroad through bank accounts without the approval of the Saudi government. Moreover, Saudi officials told us that, as of July 2009, the Saudi government had not approved any transfer of funds from charities and multilateral charitable organizations to support charitable activities outside of Saudi Arabia. While the Saudi government has not approved the transfer of funds, it has approved overseas transfers of in-kind humanitarian assistance, such as medical supplies or blankets, through the Saudi Red Crescent Society. State reporting acknowledges the Saudi government tightened controls on charitable giving in September 2008. Officials from the World Assembly of Muslim Youth confirmed that the ban on charitable contributions leaving Saudi Arabia through a formal financial network applies to their activities, and that contributions to their organization have declined as a result. While the Saudi government has regulations related to the transfer of charitable contributions through the formal financial system, World Assembly of Muslim Youth officials stated they have moved money out of Saudi Arabia by providing cash to individuals or contractors to implement charitable projects outside the Kingdom. 
State and Treasury officials noted that, despite the tightened regulations on the formal financial system, they are still concerned about the ability of multilateral charitable organizations to move money out of the country, and in the 2009 INCSR, State reports that multilateral organizations operate “largely outside of the strict Saudi restrictions covering domestic charities.” According to U.S. officials and State reporting, a proposal for the Charities Commission is still under review by Saudi officials. However, the most recent MSP for Saudi Arabia no longer contains a performance target related to the operation of a Saudi Charities Commission. U.S. officials told us that, even in the absence of this target, the regulation of charitable organizations is still a priority that U.S. agencies pursue through diplomatic and information-sharing activities with the Saudi government. In late 2005, the Saudi government enacted stricter customs declaration laws that regulate the cross-border movement of cash, jewels, and precious metals. The regulations state that money and gold in excess of 60,000 Saudi riyals (equivalent to $16,000 U.S. dollars) must be declared upon entry into and exit from Saudi Arabia using official customs forms. That same year, representatives from DHS’s Immigration and Customs Enforcement (ICE) conducted training for 46 Saudi officials on topics related to bulk cash smuggling. ICE officials told us the goal of this training was to introduce Saudi officials to interdiction and investigative techniques designed to support enforcement of the Saudi customs regulations. In 2007, State reported concerns related to the enforcement of Saudi Arabia’s customs declaration laws. For example, Saudi customs had not issued the necessary declaration forms. 
State officials with whom we spoke told us that the Saudi government had improved its enforcement of the laws, including distributing declaration forms and posting signs, which we observed during our visit to the Kingdom (see figure 8). Additionally, State has reported that the Saudi government’s new cash courier regulations have resulted in the investigations of several individuals. According to ICE officials, although they have tried to persuade the Saudi government to conduct operational training related to preventing bulk cash smuggling, the Saudi government did not express interest in undertaking such an exercise until 2008. ICE officials noted that, in addition to classroom training, ICE considers operational training necessary to ensure proper enforcement of customs declaration laws designed to prevent bulk cash smuggling. Moreover, ICE officials said that they do not have information—such as the number of seizures and prosecutions—that would allow them to assess progress with regard to the Saudi government’s efforts to prevent bulk cash smuggling. In its 2009 INCSR, State reported that the Saudi government has adopted stricter regulations; however, information collected by Saudi customs on cash declarations and smuggling is not shared with other governments, and the implementation and effectiveness of the customs regulations remain in question. Despite concerns about the Saudi government’s enforcement of customs declaration laws that include cash courier regulations, the MSP performance target related to cash couriers was removed from the MSP after fiscal year 2007. U.S. officials told us that, despite the omission of the target from the MSP, Saudi implementation and enforcement of cash courier regulations are important U.S. objectives that U.S. agencies pursue through training and diplomatic activities with the Saudi government. 
Further, in the 2008 and 2009 INCSRs, State reported that, in addition to bulk cash, some instances of terrorist financing in Saudi Arabia have allegedly involved informal mechanisms, such as hawala. U.S. and Saudi officials told us, and State reports, that hawala and money services businesses apart from banks and licensed money changers are illegal in Saudi Arabia. In 2005, State reported that SAMA consolidated the eight largest money changers into a single bank. Further, State notes that Saudi banks have created fund transfer systems that have attracted customers accustomed to using hawala. According to State, this creates an advantage for Saudi authorities in combating terrorism financing, as senders and recipients of fund transfers through this formal financial sector are required to clearly identify themselves. Despite reporting progress in Saudi Arabia’s efforts to combat terrorism financing, U.S. officials expressed concern regarding the ability of Saudi individuals and multilateral charitable organizations, as well as other individuals visiting Saudi Arabia, to support terrorism and violent extremism outside Saudi Arabia. Officials stressed this funding allegedly comes from individuals or multilateral charitable organizations, not from the Saudi government, and that the government of Saudi Arabia is pursuing terrorism financiers and cooperating with the United States to counter terrorism financing. Further, experts we spoke with agreed that there is no indication that the Saudi government is providing funding for terrorism. Officials from State, Treasury, and DHS stated that alleged support from some Saudi individuals and multilateral charitable organizations for terrorism and violent extremism outside of Saudi Arabia remains a cause for concern. According to the 2009 INCSR, Saudi individuals and Saudi- based charitable organizations continue to be a significant source of financing for terrorism and extremism outside of Saudi Arabia. 
For example, Treasury officials have noted that Saudi-based individuals are a top source of funding for Al Qaeda and associated terror groups, such as the Taliban. Moreover, there have been concerns that during hajj—when an estimated 2 to 3 million Muslims visit Saudi Arabia—non-Saudi individuals associated with extremist groups could exchange funds to support terrorism and violent extremism outside of Saudi Arabia. Saudi government officials acknowledged that Saudi individuals could potentially fund terrorism, but, according to these officials, such individuals do so in violation of Saudi laws and regulations. Related to charitable contributions, Saudi officials told us that no funds have been approved for overseas transfer. Saudi officials also told us that if a Saudi-affiliated multilateral charitable organization raises funds abroad, it is the host country government’s responsibility to ensure such funds are used appropriately, and stated that they have directed their embassies abroad to cooperate with host country governments in this effort. U.S. and Saudi officials cited challenges associated with efforts to prevent financial support to extremists, including limited Saudi enforcement capacity and the use of cash couriers by terrorist financiers. First, U.S. and Saudi officials noted enforcement capacity presents a challenge to U.S.-Saudi efforts to prevent financial support to extremists. While noting progress, U.S. and Saudi officials told us that key Saudi enforcement agencies, particularly those agencies that enforce financial regulations, could benefit from increased training and technical assistance to build their capacity. For instance, U.S. and Saudi officials stated that, while the Saudi FIU has expanded its capacity since its creation in 2005, more training in financial analysis could be beneficial to the organization. Second, U.S. 
and Saudi officials told us that preventing financial support to extremists is made more challenging by terrorist financiers’ use of cash couriers. According to U.S. and Saudi officials, some individuals and multilateral charitable organizations have allegedly increased their use of more informal financial transaction methods, such as couriering cash across borders, in response to Saudi Arabia’s adoption of regulations making transactions in the formal financial sector more challenging. Though Saudi Arabia has implemented cash courier regulations, Saudi officials noted that motivated individuals can circumvent such regulations by moving amounts of cash that are below legal declaration limits. U.S. officials concurred, noting circumvention of regulations as an inherent challenge in regulating the use of cash couriers in any country. U.S. and Saudi officials also cited the prevalent use of cash for transactions as a challenge, particularly in Saudi Arabia. According to these officials, because large numbers of people in Saudi Arabia and the Gulf region use cash, an individual traveling with or declaring large amounts of cash may not raise suspicion. Moreover, challenges associated with preventing the circumvention of cash courier regulations increase during hajj, which, despite efforts by the Saudi government, presents logistical enforcement challenges because of the large number of people involved. Dealing with the challenges inherently associated with the use of cash, according to Saudi officials, requires using intelligence to target specific individuals who may be using cash to fund terrorism or violent extremism and educating the public not to give cash to individuals they do not know, even for allegedly worthy causes. According to State, goals and performance targets in country-specific Mission Strategic Plans (MSP) are used to assess effectiveness of U.S. policies and programs in the field and to formulate requests for resources. According to U.S. 
officials, since 2005, several targets to assess progress of the U.S. government’s counterterrorism and antiterrorism financing collaboration with the Kingdom of Saudi Arabia have been met. However, some performance targets, specifically those related to combating financing of terrorism, were removed from the MSP for Saudi Arabia even though U.S. agencies remain concerned about the ability of Saudi individuals and multilateral charitable organizations to fund terrorist organizations in other countries. U.S. agencies stated that, despite the omission of these targets, they continue to engage with the government of Saudi Arabia on efforts to prevent financial support to extremists. Given that State uses MSP targets to assess progress in countering terrorism and terrorism financing in Saudi Arabia, it is important that the U.S. mission include targets related to key areas of concern to more effectively prioritize U.S. efforts and to formulate resource requests for these activities. We recommend that the Secretary of State direct the U.S. mission in the Kingdom of Saudi Arabia to reinstate, in consultation with relevant U.S. agencies, performance measures related to preventing the flow of alleged financial support, through mechanisms such as cash couriers, to terrorists and extremists outside Saudi Arabia. State and DOD provided written comments on a draft of this report, which are reproduced in appendices V and VI, respectively. State concurred with our recommendation and committed to instructing the U.S. Embassy, Riyadh, in consultation with key interagency partners, to ensure that the Mission Strategic Plan reflect the continued importance of cooperation to combat terrorism financing, including by reinstating performance indicators related to Saudi enforcement of cash courier regulations. DOD also concurred with the report. In addition, we received technical comments from DOD, DOE, NSC, State, Treasury, and the intelligence community. 
Additionally, consistent with our protocols, we provided a copy of the draft report to Saudi officials, who described the report as a fair and detailed review of U.S. and Saudi efforts and also offered technical comments. We incorporated technical comments as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 5 days from the report date. At that time, we will send copies to the Departments of State, Treasury, Defense, Energy, Homeland Security, Justice, the intelligence community, and the National Security Council. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. We were asked to report on (1) the U.S. government strategy to collaborate with and assist the Kingdom of Saudi Arabia to counter terrorism and terrorism financing, and (2) U.S. government agencies’ assessment of and the Saudi government’s views on progress toward the goals of the U.S. strategy to collaborate with and assist the Kingdom of Saudi Arabia. Our work focused on the efforts of the intelligence agencies; the Departments of Defense (DOD), Energy (DOE), Homeland Security (DHS), Justice (DOJ), State (State), and the Treasury (Treasury); and the National Security Council (NSC) to collaborate with Saudi Arabia to combat terrorism and terrorism financing since 2005. 
Within these agencies and executive offices, we met with officials from several relevant components that are monitoring or working with the Saudi government on efforts to combat terrorism and its financing, including DHS’s Immigration and Customs Enforcement (ICE); DOJ’s Federal Bureau of Investigation (FBI); State’s Office of the Coordinator for Counterterrorism and its bureaus of Economic, Energy, and Business Affairs; International Narcotics and Law Enforcement Affairs; and Near Eastern Affairs; and Treasury’s Office of Terrorism and Financial Intelligence, Financial Crimes Enforcement Network, and the Internal Revenue Service. We focused on these agencies and components as a result of previous work undertaken by GAO regarding efforts to address violent extremism, our review of information since GAO’s previous work indicating which agencies were involved in efforts to collaborate with Saudi Arabia, and discussions with U.S. agency officials regarding the agencies with which they collaborate. To examine the U.S. government strategy to collaborate with and assist the Kingdom of Saudi Arabia to counter terrorism and terrorism financing, we reviewed relevant U.S. strategic documents, including the U.S. Strategy Toward Saudi Arabia, Report Pursuant to Section 2043(c) of the Implementing the Recommendations of the 9/11 Commission Act, and Mission Strategic Plans (MSP) for Saudi Arabia (previously called Mission Performance Plans) for fiscal years 2005 to 2011. While we did not assess these documents for key elements of a strategy as identified by GAO, we used the performance targets stated in these documents as criteria against which we assessed progress. In addition to documents from U.S. agencies, we obtained documents detailing the Saudi government’s approach to combating terrorism and terrorism financing from Saudi officials, and reviewed public information available on the embassy Web site. We discussed the U.S. 
government strategy to collaborate with and assist Saudi Arabia with officials from DOD, DHS, DOJ, State, and Treasury, as well as representatives from the Saudi embassy, including the Saudi Ambassador to the United States. Furthermore, while in Riyadh, Saudi Arabia, we discussed the U.S. strategy to collaborate with Saudi Arabia with officials from the U.S. embassy, including the then ambassador, as well as representatives from the Saudi government, including representatives from the Ministries of Foreign Affairs, Interior, and Islamic Affairs. We met with Saudi officials from these ministries as they are involved in efforts to combat terrorism and extremism. To report U.S. government agencies’ assessment of progress toward the goals of the U.S. strategy to collaborate with and assist the Kingdom of Saudi Arabia, we reviewed relevant U.S. planning and evaluation documents, including MSPs for Saudi Arabia, Country Reports on Terrorism, and International Narcotics Control Strategy Report: Volume II, Money Laundering and Financial Crimes, among others. Additionally, we developed and administered a data collection instrument to agencies providing training and technical assistance to the Saudi government— including State, Treasury, DOJ, DHS, DOD, and DOE—to obtain information on programs and activities, including goals, description, indicators, assessments, and associated funding. We corroborated information reported to us in the data collection instrument with information obtained during interviews with agency officials and reported in agency evaluation documents. Further, we obtained and examined documentation from Saudi officials regarding their domestic efforts to counter terrorism and terrorism financing, including Rules Governing the Opening of Bank Accounts and General Operational Guidelines in Saudi Arabia, Third Update, December 2008, and reviewed public information available on the embassy Web site. 
Although we report on Saudi efforts to combat extremism, we did not independently verify all of these activities. In Washington, D.C., we discussed progress on U.S.-Saudi counterterrorism efforts with U.S. officials from DOD, DOE, DHS, DOJ, State, Treasury, and the intelligence community, as well as with Saudi officials from the Royal Embassy of Saudi Arabia. We also interviewed subject matter experts from a variety of academic institutions and nongovernmental organizations to obtain their assessments of progress. We selected experts who met at least one of the following criteria: (1) produced research focusing on Saudi Arabia; (2) traveled to Saudi Arabia; or (3) were recognized as experts in the professional community, as determined by the recommendation of other experts. State concurred with our list of experts. Additionally, we traveled to Riyadh and Jeddah, Saudi Arabia, where we visited the Prince Mohammed bin Nayef Center for Advisory and Care—a counseling program designed to reeducate violent extremists—in Riyadh, and met with U.S. officials from DOD, DHS, DOJ, State, and Treasury; Saudi officials from the Saudi Arabian Monetary Agency and the Ministries of Foreign Affairs, Interior, and Islamic Affairs; and representatives from the World Assembly of Muslim Youth. The information on foreign laws in this report is not a product of our original analysis, but is based on interviews and secondary sources. We conducted this performance audit from July 2008 through September 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Muhammad bin Saud, founder of the precursor to the modern Saudi state, strikes an alliance with the conservative Muslim leader Muhammad bin Abdul Wahhab. This marks the start of the close association between the Saudi political and religious establishments. Abdul Aziz Al Saud expands Saudi power to other regions of the Arabian Peninsula, including Riyadh and the holy Muslim city of Mecca. The modern Kingdom of Saudi Arabia is formally established under the reign of King Abdul Aziz Al Saud with a system of government based on Islamic law. The United States and Saudi Arabia establish diplomatic relations. Saudi Arabia starts large-scale oil production after World War II. The meeting between King Abdul Aziz and President Franklin D. Roosevelt to discuss oil and security is considered the start of more robust U.S.-Saudi relations. In 1948, Israel is established with support from the United States and overwhelming Arab opposition. The U.S. and Saudi governments pursue some common national security objectives, including combating the global spread of communism. U.S. administrations consider the Saudi monarchy an ally against nationalist and socialist governments in the Middle East. The United States provides military training assistance to Saudi Arabia under the terms of the Defense Assistance Act of 1949 and the Mutual Security Act of 1951. The Muslim World League, one of several multilateral charitable organizations, is established with headquarters in Saudi Arabia to promote Islam and extend economic aid to Muslim nations. Oil production in the 1970s makes Saudi Arabian per capita income comparable to that of other developed countries. As the price of oil rises, Saudi Arabia’s wealth and political influence increase. The Islamic Republic of Iran is established after the ouster of the Shah, signaling the rise of political Islam as a challenge to existing regimes in the region. 
The Soviet army invades Afghanistan in December, prompting opposition from some Arab governments as well as from the United States and its allies. Religious extremists seize the Grand Mosque of Mecca; the Saudi government regains control and executes the perpetrators. Volunteers from some Muslim countries, including Saudi Arabia, join the Afghan mujahideen (the name given to Afghan resistance fighters) in their fight against the Soviets. The United States and Saudi Arabia play a leading role in supporting the mujahideen. The United States establishes military installations in Saudi Arabia during the U.S.-led and Saudi-supported ouster of the Iraqi army from Kuwait. The bases draw intense opposition from Al Qaeda, which was founded 3 years earlier. Terrorists, some with links to Al Qaeda, use a car bomb to attack the World Trade Center in New York City, killing 6 people and injuring 1,042. Citing his support of extremist movements, Saudi Arabia strips Osama bin Laden of Saudi citizenship. Saudi authorities arrest hundreds of Islamists who have been openly critical of the Saudi government. Saudi terrorists use a truck bomb to attack the Riyadh headquarters of the U.S.-operated training center of the Saudi National Guard, resulting in the death of 5 Americans, among others. Terrorist bombings at a U.S. military complex in Khobar, eastern Saudi Arabia, kill 19 Americans and wound many others. The attack is conducted by Saudi Hizballah. The Taliban rise to power in Afghanistan in 1996, imposing a strict interpretation of Islamic law on Afghan society. In 1997, Saudi Arabia becomes one of three countries to recognize the Taliban regime. The United Nations Security Council adopts Resolution 1267, imposing financial and other sanctions on individuals and entities associated with Al Qaeda, Osama bin Laden, and/or the Taliban. Dissatisfaction grows in Saudi Arabia and many parts of the Arab world with U.S. policies related to the second Palestinian intifada (or uprising). 
Frustrated by a lack of U.S. response to Israeli-Palestinian violence, in a letter to President Bush in 2001, Saudi Crown Prince Abdullah warns of a serious rift between the two governments. An Al Qaeda attack on the USS Cole, harbored in Yemen, kills 17 American sailors. On September 11th, the World Trade Center in New York and the Pentagon in Washington, D.C., are the targets of terrorist attacks, and a hijacked plane crashes in Pennsylvania. The attacks are carried out by 19 Al Qaeda hijackers, including 15 Saudi nationals. Saudi Arabia condemns the attacks and withdraws its official recognition of the Taliban government in Afghanistan, which is an ally of Al Qaeda. U.S. forces, along with those of several allies and Afghanistan’s Northern Alliance, topple the Taliban regime and remove the safe haven for Al Qaeda in Afghanistan. Some Al Qaeda survivors return to their countries of origin, including Saudi Arabia. The United States and its coalition of allies begin military operations in Iraq in March. Saudi Arabia publicly opposes the U.S.-led invasion of Iraq, but reportedly allows the U.S.-led forces the use of military facilities in the Kingdom in support of operations in Iraq. However, virtually all U.S. troops are withdrawn from the Kingdom by August. Several terrorist incidents in Saudi Arabia, including attacks in May and November on residential compounds, kill dozens of people. The Iraqi insurgency attracts foreign fighters, including Saudi nationals. A series of terrorist attacks throughout Saudi Arabia results in dozens of victims. The targets include Saudi Arabia’s oil infrastructure and the U.S. Consulate in Jeddah. Saudi security forces intensify their antiterrorist campaign, with casualties on both sides, including Abdulaziz al-Mughrin, the head of Al Qaeda in Saudi Arabia. Primary ministry responsible for efforts to combat terrorism and terrorism financing. 
It has issued a “most wanted” list of terrorists, and its security forces have killed or arrested terrorist suspects. The ministry also includes investigative units and oversees the Saudi rehabilitation program for terrorism suspects. The ministry oversees public security, coast guards, civil defense, fire stations, border police, and special security and investigative functions, including criminal investigation. This unit, which is within the Ministry of Interior, was created in 2005 and tasked with handling money laundering and terror finance cases. All banks are required to file suspicious transaction reports with this unit. It collects and analyzes these reports and other available information and makes referrals to relevant Saudi agencies, including the Mabahith, for further investigation and prosecution. Also referred to as the General Investigation Directorate, it investigates cases related to terrorism, among other activities. Serves as the central bank of the Kingdom of Saudi Arabia. It plays a central role in overseeing all anti-money laundering and combating financing of terrorism programs and supervises all banking, securities, and insurance activities in the country. Oversees political, cultural, and financial international relations; monitors diplomatic relations. Signs bilateral agreements, including those related to counterterrorism, and coordinates efforts with other governments. Oversees government finance, including budgeting and expenditure of all ministries and agencies; controls national economic growth, zakat, income tax, and customs. Oversees all Islamic affairs, including maintenance of mosques and monitoring of clerics. The ministry is reported to have dismissed a large number of extremist clerics and sent them to be reeducated. Oversees all schools, including physical infrastructure and curriculum. One of its priorities is to increase awareness and religious tolerance among teachers.

Appendix IV: U.S. 
Agencies Providing Training and Technical Assistance to Saudi Arabia, 2005-2008 Along with State and DOE, assists the Saudi government with the protection of critical infrastructure assets, such as oil installations, through the Office of Program Management-Ministry of Interior (OPM-MOI)—a U.S.-Saudi joint organization. Along with DOD and State, assists the Saudi government with the protection of critical infrastructure assets, such as oil installations, through OPM-MOI. DOE has provided training to Saudi officials on conducting facilities assessments, as well as performed joint assessments of key facilities. Provided training to Saudi Customs officials on interdiction and investigation related to bulk cash smuggling. Provided training to Saudi officials, through the Federal Bureau of Investigation, on financial investigation techniques. Leads the U.S. Mission in Saudi Arabia. Provided training to Saudi government officials related to building investigative capability and consulted on oil installation security. Along with DOD and DOE, assists the Saudi government with the protection of critical infrastructure assets, such as oil installations, through OPM-MOI. Through the Financial Crimes Enforcement Network, provided training to, and conducted assessments of, the Saudi Arabian Financial Investigation Unit. Through Internal Revenue Service-Criminal Investigations, provided training to Saudi officials on financial investigation techniques. Engages in information sharing with Saudi ministries. In addition to the person named above, Jason Bair (Assistant Director), Ashley Alley, Alexandra Bloom, Joe Carney, Tom Costa, Martin de Alteriis, Mark Dowling, Jonathan Fremont, Etana Finkler, Phillip Farah, Joel Grossman, Julia Jebo, Charles M. Johnson, Jr., Bruce Kutnick, Elizabeth Repko, Mona Sehgal, and James Strus made key contributions to this report. 
Elizabeth Curda, Davi D'Agostino, Muriel Forster, Barbara Keller, Eileen Larence, Armetha Liles, Sarah McGrath, and Theresa Perkins also provided technical assistance.

Combating Terrorism: Actions Needed to Enhance Implementation of Trans-Sahara Counterterrorism Partnership. GAO-08-860. Washington, D.C.: July 31, 2008.

Combating Terrorism: The United States Lacks Comprehensive Plan to Destroy the Terrorist Threat and Close the Safe Haven in Pakistan's Federally Administered Tribal Areas. GAO-08-622. Washington, D.C.: April 17, 2008.

Combating Terrorism: State Department's Antiterrorism Program Needs Improved Guidance and More Systematic Assessments of Outcomes. GAO-08-336. Washington, D.C.: February 29, 2008.

Combating Terrorism: Law Enforcement Agencies Lack Directives to Assist Foreign Nations to Identify, Disrupt, and Prosecute Terrorists. GAO-07-697. Washington, D.C.: May 25, 2007.

Terrorist Financing: Agencies Can Improve Efforts to Deliver Counter-Terrorism-Financing Training and Technical Assistance Abroad. GAO-06-632T. Washington, D.C.: April 6, 2006.

Terrorist Financing: Better Strategic Planning Needed to Coordinate U.S. Efforts to Deliver Counter-Terrorism Financing Training and Technical Assistance Abroad. GAO-06-19. Washington, D.C.: October 24, 2005.

International Affairs: Information on U.S. Agencies' Efforts to Address Islamic Extremism. GAO-05-852. Washington, D.C.: September 16, 2005.

Border Security: Strengthened Visa Process Would Benefit from Improvements in Staffing and Information Sharing. GAO-05-859. Washington, D.C.: September 13, 2005.

Border Security: Actions Needed to Strengthen Management of Department of Homeland Security's Visa Security Program. GAO-05-801. Washington, D.C.: July 29, 2005.

Terrorist Financing: U.S. Agencies Should Systematically Assess Terrorists' Use of Alternative Financing Mechanisms. GAO-04-163. Washington, D.C.: November 14, 2003.
Combating Terrorism: Interagency Framework and Agency Programs to Address the Overseas Threat. GAO-03-165. Washington, D.C.: May 23, 2003. | An Arabic version of this product is available at http://www.gao.gov/products/GAO-11-190 . The U.S. government considers the Kingdom of Saudi Arabia a vital partner in combating terrorism. The strong diplomatic relationship between the United States and Saudi Arabia, founded more than 70 years ago, was strained by the Al Qaeda attacks of September 11, 2001, that were carried out in large part by Saudi nationals and killed thousands of U.S. citizens. GAO was asked to report on (1) the U.S. government strategy to collaborate with and assist the Kingdom of Saudi Arabia to counter terrorism and terrorism financing, and (2) U.S. government agencies' assessment of and the Saudi government's views on progress toward the goals of this strategy. GAO analyzed relevant U.S. and Saudi strategy, planning, and evaluation documents related to efforts since 2005, and discussed these efforts with subject matter experts and U.S. and Saudi officials in Washington, D.C., and Riyadh and Jeddah, Saudi Arabia. GAO submitted a copy of this report to intelligence agencies, the National Security Council, and the Departments of Defense, Energy, Homeland Security, Justice, State, and Treasury for their review and comment. The U.S. government strategy to collaborate with Saudi Arabia on counterterrorism utilizes existing diplomatic and security-related efforts to create an active antiterrorism coalition by enhancing the Saudi government's ability to combat terrorists and prevent financial support to extremists. These objectives are contained in Department of State's (State) Mission Strategic Plans (MSP) for Saudi Arabia for fiscal years 2006 through 2009, and also reflected in a January 2008 report from State to the Congress on its strategy for Saudi Arabia. 
The MSPs include performance targets to measure progress on efforts to combat terrorism and its financing, such as providing security training to the Saudi government, strengthening Saudi financial institutions, and implementation of relevant Saudi regulations. U.S. and Saudi officials report progress on countering terrorism and its financing within Saudi Arabia, but noted challenges, particularly in preventing alleged funding for terrorism and violent extremism outside of Saudi Arabia. In April 2009, State assessed progress related to its goal of building an active U.S.-Saudi antiterrorist coalition as "on target." U.S. and Saudi officials report progress in enhancing the Saudi government's ability to combat terrorists, and note the Saudi government's efforts have disrupted Al Qaeda's terrorist network within Saudi Arabia. However, these officials noted Saudi Arabia's neighbor, Yemen, is emerging as a base from which Al Qaeda terrorists can launch attacks against U.S. and Saudi interests. U.S. and Saudi officials also report progress on efforts to prevent financial support to extremists, citing, for example, the Saudi government's regulations on sending charitable contributions overseas, and the arrest and prosecution of individuals providing support for terrorism. However, U.S. officials remain concerned about the ability of Saudi individuals and charitable organizations to support terrorism outside of Saudi Arabia, and noted limited Saudi enforcement capacity and terrorists' use of cash couriers as challenges. Despite these concerns, some performance targets related to countering terrorism financing were removed from State's current MSP. According to State officials, these changes were made either because a specific target was no longer considered feasible or because progress was made toward the target. |
Over their lifetimes, men and women differ in many ways that have consequences for how much they will receive from Social Security and pensions. Women make up about 60 percent of the elderly population and less than half of the Social Security beneficiaries who are receiving retired worker benefits, but they account for 99 percent of those beneficiaries who receive spouse or survivor benefits. A little less than half of working women between the ages of 18 and 64 are covered by a pension plan, while slightly over half of working men are covered. The differences between men and women in pension coverage are magnified for those workers nearing retirement age—over 70 percent of men are covered compared with about 60 percent of women. Labor force participation rates differ for men and women, with men being more likely, at any point in time, to be employed or actively seeking employment than women. The gap in labor force participation rates, however, has been narrowing over time as more women enter the labor force, and the Bureau of Labor Statistics predicts it will narrow further. In 1948, for example, women’s labor force participation rate was about a third of that for men, but by 1996, it was almost four-fifths of that for men. The labor force participation rate for the cohort of women currently nearing retirement age (55 to 64 years of age) was 41 percent in 1967 when they were 25 to 34 years of age. The labor force participation rate for women who are 25 to 34 years of age today is 75 percent—an increase of over 30 percentage points. Earnings histories also affect retirement income, and women continue to earn lower wages than men. Some of this difference is due to differences in the number of hours worked, since women are more likely to work part-time and part-time workers earn lower wages. However, median earnings of women working year-round and full-time are still only about 70 percent of men’s. 
The lower labor force participation of women leads to fewer years with covered earnings on which Social Security benefits are based. In 1993, the median number of years with covered earnings for men reaching 62 was 36 but was only 25 for women. Almost 60 percent of men had 35 years with covered earnings, compared with less than 20 percent of women. Lower annual earnings and fewer years with covered earnings lead to women's receiving lower monthly retired worker benefits from Social Security, since many years with low or zero earnings are used in the calculation of Social Security benefits. On average, the retired worker benefits received by women are about 75 percent of those received by men. In many cases, a woman's retired worker benefits are lower than the benefits she is eligible to receive as the spouse or survivor of a retired worker. Women tend to live longer than men and thus may spend many of their later retirement years alone. A woman who is 65 years old can expect to live an additional 19 years (to 84 years of age), and a man of 65 can expect to live an additional 15 years (to 80 years of age). By 2070, the Social Security Administration projects that a 65-year-old woman will be able to expect to live another 22 years, and a 65-year-old man, another 18 years. Additionally, husbands tend to be older than their wives and so are likely to die sooner. Differences in longevity do not currently affect the receipt of monthly Social Security benefits but can affect income from pensions if annuities are purchased individually. One study examined differences in the investment behavior of men and women participating in a pension plan. The authors estimated that, after 35 years of participation in the plan at historical yields and identical contributions, the difference in investment behavior between men and women can lead to men having a pension portfolio that is 16 percent larger. Social Security provisions and pension plan provisions differ in several ways (see app. I for a summary).
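The compounding effect behind an estimate like that can be illustrated with a short sketch. The contribution amount and the two rates of return below are illustrative assumptions, not figures from the study; the point is only that a small, persistent gap in average returns grows into a materially larger balance over 35 years of identical contributions.

```python
def final_balance(annual_contribution, annual_return, years):
    """Future value of equal annual contributions, each earning
    the given return from the year it is deposited."""
    balance = 0.0
    for _ in range(years):
        balance = (balance + annual_contribution) * (1 + annual_return)
    return balance

# Illustrative assumptions only: identical $2,000 contributions for
# 35 years, with the more aggressive portfolio earning 7.0% a year
# and the more conservative one 6.3%.
aggressive = final_balance(2_000, 0.070, 35)
conservative = final_balance(2_000, 0.063, 35)
gap = aggressive / conservative - 1  # roughly 17% under these assumptions
```

Even a gap of less than one percentage point in annual returns, held for a full career, produces a difference in ending balances on the order of the study's 16 percent estimate.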
Under Social Security, the basic benefit a worker who retires at the normal retirement age (NRA) receives is based on the 35 years with the highest covered earnings. The formula is progressive in that it guarantees that higher-income workers receive higher benefits, while the benefits of lower-income workers are a higher percentage of their preretirement earnings. The benefit is guaranteed for the life of the retired worker and increases annually with the cost of living. Under a pension plan, in contrast, a retiring worker may elect, along with the spouse, to take a single life annuity or a lump-sum distribution if allowed under the plan. When workers retire, they are uncertain how long they will live and how quickly the purchasing power of a fixed payment will deteriorate. They run the risk of outliving their assets. Annuities provide insurance against outliving assets. Some annuities provide, though at a higher cost or reduced initial benefit, insurance against inflation risk, although annuity benefits often do not keep pace with inflation. Many pension plans are managed under a group annuity contract with an insurance company that can provide lifetime benefits. Individual annuities, however, tend to be costly. Under Social Security, the dependents of a retired worker may be eligible to receive benefits. For example, the spouse of a retired worker is eligible to receive up to 50 percent of the worker's basic benefit amount, while a dependent surviving spouse is eligible to receive up to 100 percent of the deceased worker's basic benefit. Furthermore, divorced spouses and survivors are eligible to receive benefits under a retired worker's Social Security record provided they were married for at least 10 years. If the retired worker has a child under 18 years old, the child is eligible for Social Security benefits, as is the dependent nonelderly parent of the child. The retired worker's Social Security benefit is not reduced to provide benefits to dependents and former spouses.
Pensions, both public and private, generally do not offer the same protections to dependents as Social Security. Private and public pension benefits are based on a worker's employment experience and not the size of the worker's family. At retirement, a worker and spouse normally receive a joint and survivor annuity so that the surviving spouse will continue to receive a pension benefit after the retired worker's death. A worker, with the written consent of the spouse, can elect to take retirement benefits in the form of a single life annuity so that benefits are guaranteed only for the lifetime of the retired worker. The Retirement Equity Act of 1984 changed the rules governing pension payment options. Under this act, a joint and survivor annuity became the normal payout option and written spousal consent is required to choose another option. This requirement was prompted partly by testimony before the Congress by widows who stated that they were financially unprepared at their husbands' death because they were unaware of their husbands' choice to not take a joint and survivor annuity. Through the spousal consent requirement, the Congress envisioned that, among other things, a greater percentage of married men would retain the joint and survivor annuity and give their spouses the opportunity to receive survivor benefits. The monthly benefits under a joint and survivor annuity, however, are lower than under a single life annuity. Moreover, pension plans do not generally contain provisions to increase benefits to the retired worker for a dependent spouse or for children. As under Social Security, divorced spouses can also receive part of the retired worker's pension benefit if a qualified domestic relations order is in place. However, the retired worker's pension benefit is reduced in order to pay the former spouse. The three alternative proposals of the Social Security Advisory Council would make changes of varying degrees to the structure of Social Security. The key features of the proposals are summarized in appendix II.
The Maintain Benefits (MB) plan would make only minor changes to the structure of current Social Security benefits. The major change that would affect women’s benefits is the extension of the computation period for benefits from 35 years to 38 years of covered earnings. Currently, earnings are averaged over the 35 years with the highest earnings to compute a worker’s Social Security benefits. If the worker has worked less than 35 years, then some of the years of earnings used in the calculation are equal to zero. Extending the computation period for the lifetime average earnings to 38 years would have a greater impact on women than on men. Although women’s labor force participation is increasing, the Social Security Administration forecasts that fewer than 30 percent of the women retiring in 2020 will have 38 years of covered earnings, compared with almost 60 percent of men. The Individual Accounts (IA) plan would keep many features of the current Social Security system but add an individual account modeled after the 401(k) pension plan. Workers would be required to contribute an additional 1.6 percent of taxable earnings to their individual account, which would be held by the government. Workers would direct the investment of their account balances among a limited number of investment options. At retirement, the distribution from this individual account would be converted by the government into an indexed annuity. The IA plan, like the MB plan, would extend the computation period to 38 years; it would also change the basic benefit formula by lowering the conversion factors at the higher earnings level. This plan would also accelerate the legislated increase in the normal retirement age and then index it to future increases in longevity. 
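The computation-period mechanics described above can be sketched in a few lines. This is a deliberately simplified illustration: the actual benefit formula wage-indexes past earnings and then applies a progressive formula to the resulting average, neither of which is modeled here. The career length and wage level are hypothetical.

```python
def average_covered_earnings(earnings_history, computation_years):
    """Average the highest-earning years over a fixed computation
    period; careers shorter than the period are padded with zeros."""
    top_years = sorted(earnings_history, reverse=True)[:computation_years]
    top_years += [0.0] * (computation_years - len(top_years))  # zero-fill
    return sum(top_years) / computation_years

# Illustrative worker with 30 years of covered earnings at $30,000.
career = [30_000.0] * 30
avg_35 = average_covered_earnings(career, 35)  # about $25,714
avg_38 = average_covered_earnings(career, 38)  # about $23,684

# A 40-year career is unaffected by the extension.
long_career = [30_000.0] * 40
assert average_covered_earnings(long_career, 35) == average_covered_earnings(long_career, 38)
```

For this 30-year career, extending the period from 35 to 38 years adds three more zero years to the average, cutting it by about 8 percent, while a worker with 38 or more years of covered earnings sees no change. This is why the extension bears more heavily on women, fewer of whom are projected to have 38 years of covered earnings.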
As a consequence of these changes, basic Social Security benefits would be lower for all workers, but workers would also receive a monthly payment from the annuitized distribution from their individual account, which proponents claim would offset the reduction in the basic benefit. In addition to extending the computation period, elements of the IA plan that would disproportionately affect women are the changes in benefits received by spouses and survivors, since women are much more likely to receive spouse and survivor benefits. The spouse benefit would be reduced from 50 percent of the retired worker's basic benefit amount to 33 percent. The survivor benefit would increase from 100 percent of the deceased worker's basic benefit to 75 percent of the couple's combined benefit if the latter was higher. These changes would probably result in increased lifetime benefits for many women. Additionally, at retirement a worker and spouse would receive a joint and survivor annuity for the distribution of their individual account unless the couple decided on a single life annuity. The Personal Security Accounts (PSA) plan would create a two-tier system, with a basic tier I benefit and a tier II personal security account funded by redirecting part of the Social Security payroll tax into the account, which would not be held by the government. Proponents of the PSA plan claim that over a worker's lifetime the tier I benefits plus the tier II distribution would be larger than the lifetime Social Security benefits currently received by retired workers. The worker would direct the investment of his or her account assets. At retirement, workers would not be required to annuitize the distribution from their personal security account but could elect to receive a lump-sum payment. This could potentially affect women disproportionately, since the worker is not required to consult with his or her spouse regarding the disposition of the personal account distribution. Under the PSA plan, the tier I benefit for spouses would be equal to the higher of their own tier I benefit or 50 percent of the full tier I benefit.
Furthermore, spouses would receive their own tier II accumulations, if any. The tier I benefit for a survivor would be 75 percent of the benefit payable to the couple; in addition, the survivor could inherit the balance of the deceased spouse's personal security account assets. Many of the proposed changes to Social Security would affect the benefits received by men and by women differently. The current Social Security system is comparable to a defined benefit plan's paying a guaranteed lifetime benefit that is increased with the cost of living. Each of the Advisory Council proposals would potentially change the level of that benefit, and two of the proposals would create an additional defined contribution component. Not only would retired worker benefits be changed by these proposals, but the level of benefits for spouses and survivors would be affected. Under the two proposals with individual or personal accounts, the account balances at retirement would depend on the contributions made to the worker's account and investment returns or losses on the account assets. Since women tend to earn lower wages, they would be contributing less, on average, than men to their accounts. Furthermore, even if contributions were equal, women tend to be more conservative investors than men, which could lead to lower investment returns. Consequently, women would typically have smaller account balances at retirement and would receive lower benefits than men. The difference in investment strategy could lead to a situation in which men and women with exactly the same labor market experiences receive substantially different Social Security benefits. The extent to which investor education can close the gap in investment behavior between men and women is unknown. The two Advisory Council proposals with individual or personal accounts differ in the handling of the distribution of the account balances at retirement.
The IA plan would require annuitization of the distribution at retirement, and choosing a single life annuity or a joint and survivor annuity would be left to the worker and spouse. If the single life annuity option for individual account balances was chosen, then the spouse would receive the survivor's basic benefit after the death of the retired worker plus the annuitized benefit based on the work records of both individuals. The PSA plan would not require that the private account distribution be annuitized at retirement. A worker and spouse could take the distribution as a lump sum and attempt to manage their funds so that they did not outlive their assets. If the assets were exhausted, the couple would have only their basic tier I benefits, plus any other savings and pension benefits. Furthermore, even if personal account tier II assets were left after the death of the retired worker, the balance of the PSA account would not necessarily have to be left to the survivor. If a worker and spouse chose to purchase an annuity at retirement, then the couple would receive a lower monthly benefit than would be available from a group annuity. Moreover, if a man and a woman purchased individual annuities with identical account balances, then although the expected lifetime payments would be the same, the monthly payments to the woman would be lower, since women have longer life expectancies. Even though the current provisions of Social Security are gender neutral, differences during the working and retirement years may lead to different benefits for men and women. For example, differences in labor force attachment, earnings, and longevity lead to women's being more likely than men to receive spouse or survivor benefits. Women who do receive retired worker benefits typically receive lower benefits than men. As a result of lower Social Security benefits and the lower likelihood of receiving pension benefits, among other causes, elderly single women experience much higher poverty rates than elderly married couples and elderly single men.
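A stylized sketch shows why individually priced annuities pay women a smaller monthly check even when expected lifetime payouts are equal: the same balance is spread over a longer expected lifetime. Interest earnings and mortality pooling are ignored for simplicity; the account balance is hypothetical, and the life expectancies at 65 are the ones cited earlier in this statement (19 years for a woman, 15 for a man).

```python
def monthly_annuity_payment(account_balance, expected_years_remaining):
    """Level monthly payout that exhausts the balance exactly at the
    expected remaining lifetime (no interest or mortality pooling)."""
    return account_balance / (expected_years_remaining * 12)

balance = 120_000.0
payment_woman = monthly_annuity_payment(balance, 19)  # about $526 a month
payment_man = monthly_annuity_payment(balance, 15)    # about $667 a month
# Expected lifetime payouts are identical (the full $120,000), but the
# woman's monthly check is about 21 percent smaller.
```

Group annuities, and Social Security itself, avoid this disparity by pooling men and women together rather than pricing by sex.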
Social Security is a large and complex program that protects most workers and their families from income loss because of a worker’s retirement. Public and private pension plans do not offer the social insurance protections that Social Security does. Pension benefits are neither increased for dependents nor generally indexed to the cost of living as are Social Security benefits. Typically, at retirement a couple will receive a joint and survivor annuity that initially pays monthly benefits that are 15 to 20 percent lower than if they had chosen to forgo the survivor benefits with a single life annuity. Furthermore, under a qualified domestic relations order, a divorced retired worker’s pension benefits may be reduced to pay benefits to a former spouse. While the three alternative proposals of the Social Security Advisory Council are intended to address the long-term financing problem, they would make changes that could affect the relative level of benefits received by men and women. Each of the proposals has the potential to exacerbate the current differences in benefits between men and women. Narrowing the gap in labor force attachment, earnings, and investment behavior may reduce the differences in benefits. But as long as these differences remain, men and women will continue to experience different outcomes with regard to Social Security benefits. This concludes my prepared statement. I would be happy to answer any questions you or other Members of the Subcommittee may have. For more information on this testimony, please call Jane Ross on (202) 512-7230; Frank Mulvey, Assistant Director, on (202) 512-3592; or Thomas Hungerford, Senior Economist, on (202) 512-7028. | GAO discussed the impacts of proposals to finance and restructure the Social Security system, specifically the impacts on the financial well-being of women. 
GAO noted that: (1) its work shows that, despite the provisions of the Social Security Act that do not differentiate between men and women, women tend to receive lower benefits than men; (2) this is due primarily to differences in lifetime earnings because women tend to have lower wages and fewer years in the workforce; (3) women's experience under pension plans also differs from men's not only because of earnings differences but also because of differences in investment behavior and longevity; (4) moreover, public and private pension plans do not offer the same social insurance protections that Social Security does; (5) furthermore, some of the provisions of the Social Security Advisory Council's three proposals may exacerbate the differences in men and women's benefits; (6) for example, proposals that call for individual retirement accounts will pay benefits that are affected by investment behavior and longevity; and (7) expected changes in women's labor force participation rates and increasing earnings will reduce but probably not eliminate these differences. |
The 340B Program was created following the enactment of the Medicaid Drug Rebate Program and gives 340B covered entities discounts on outpatient drugs comparable to those made available to state Medicaid agencies. HRSA is responsible for administering and overseeing the 340B Program. Eligibility for the 340B Program, which is defined in the PHSA, has expanded over time, most recently through the Patient Protection and Affordable Care Act, which extended eligibility to additional types of hospitals. Entities generally become eligible by receiving certain federal grants or by being one of six hospital types. Eligible grantees include clinics that offer primary and preventive care services, such as Federally Qualified Health Centers, clinics that target specific conditions or diseases that raise public health concerns or are expensive to treat, and state-operated AIDS Drug Assistance Programs, which serve as a “payer of last resort” to cover the cost of providing HIV-related medications to certain low-income individuals. Eligible hospitals include certain children’s hospitals, free-standing cancer hospitals, rural referral centers, sole community hospitals, critical access hospitals, and general acute care hospitals that serve a disproportionate number of low-income patients, referred to as disproportionate share hospitals (DSH). To become a covered entity and participate in the program, eligible entities must register with HRSA and be approved. Entity participation in the 340B program has grown over time to include more than 38,000 entity sites, including more than 21,000 hospital sites and nearly 17,000 federal grantee sites (see fig. 1). To be eligible for the 340B Program hospitals must meet certain requirements intended to ensure that they perform a government function to provide care to the medically underserved. First, hospitals generally must meet specified DSH adjustment percentages to qualify. 
Additionally, they must be (1) owned or operated by a state or local government, (2) a public or private nonprofit corporation that is formally delegated governmental powers by a unit of state or local government, or (3) a private, nonprofit hospital under contract with a state or local government to provide health care services to low-income individuals who are not eligible for Medicaid or Medicare. All drug manufacturers that supply outpatient drugs are eligible to participate in the 340B Program and must participate in order to have their drugs covered by Medicaid. To participate, manufacturers are required to sign a pharmaceutical pricing agreement with HHS in which both parties agree to certain terms and conditions. The 340B price for a drug—often referred to as the 340B ceiling price—is based on a statutory formula and represents the highest price a participating drug manufacturer may charge covered entities. Covered entities must follow certain requirements as a condition of participating in the 340B Program. For example, covered entities are prohibited from subjecting manufacturers to "duplicate discounts" in which drugs prescribed to Medicaid beneficiaries are subject to both the 340B price and a rebate through the Medicaid Drug Rebate Program. Covered entities are also prohibited from diverting any drug purchased at the 340B price to an individual who does not meet HRSA's definition of a patient. This definition, issued in 1996, outlines three criteria that generally state that diversion occurs when 340B discounted drugs are given to individuals who are not receiving health care services from covered entities or are only receiving non-covered services, such as inpatient hospital services. (See table 1 for more information on HRSA's definition of an eligible patient.)
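The statutory ceiling-price formula is commonly summarized as the drug's average manufacturer price (AMP) less the Medicaid unit rebate amount (URA); that simplification, with hypothetical prices, is sketched below. The penny floor reflects HRSA's "penny pricing" policy for cases where the formula would otherwise produce zero or a negative price; the full statutory calculation has additional details not modeled here.

```python
def ceiling_price(amp, unit_rebate_amount):
    """Simplified 340B ceiling price: average manufacturer price (AMP)
    minus the Medicaid unit rebate amount (URA), floored at one penny
    when the formula would otherwise reach zero or below."""
    return max(amp - unit_rebate_amount, 0.01)

# Hypothetical prices for illustration only.
discounted = ceiling_price(amp=100.00, unit_rebate_amount=23.10)  # about $76.90
penny = ceiling_price(amp=10.00, unit_rebate_amount=12.50)        # $0.01
```

Because the URA is the same rebate amount used in the Medicaid Drug Rebate Program, a Medicaid rebate claimed on a drug already purchased at this ceiling price would discount the drug twice, which is the "duplicate discount" the program prohibits.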
Covered entities are permitted to use drugs purchased at the 340B price for all individuals who meet the 340B Program definition of a patient regardless of whether they are low-income, uninsured, or underinsured. A covered entity typically purchases and dispenses 340B drugs through pharmacies—either through an in-house pharmacy, or through the use of a contract pharmacy arrangement, in which the covered entity contracts with an outside pharmacy to dispense drugs on its behalf. The adoption and use of contract pharmacies is governed by HRSA guidance. HRSA’s original guidance permitting the use of contract pharmacies limited their use to covered entities that did not have in-house pharmacies and allowed each covered entity to contract with only one outside pharmacy. However, March 2010 guidance lifted the restriction on the number of pharmacies with which a covered entity could contract. Since that time, the number of unique contract pharmacies has increased significantly, from about 1,300 at the beginning of 2010 to around 18,700 in 2017 (see fig. 2); and, according to HRSA data, in 2017, there were more than 46,000 contract pharmacy arrangements. HRSA guidance requires a written contract between the covered entity and each contract pharmacy. Covered entities are responsible for overseeing contract pharmacies to ensure compliance with prohibitions of drug diversion and duplicate discounts. HRSA guidance indicates that covered entities are “expected” to conduct annual independent audits of contract pharmacies, leaving the exact method of ensuring compliance up to the covered entity. Drug manufacturers also must follow certain 340B Program requirements. For example, HRSA’s nondiscrimination guidance prohibits manufacturers from distributing drugs in ways that discriminate against covered entities compared to other providers. 
This includes ensuring that drugs are made available to covered entities through the same channels that they are made available to non-340B providers, and not conditioning the sale of drugs to covered entities on restrictive conditions, which would have the effect of discouraging participation in the program. In our September 2011 report, we found that HRSA’s oversight of the 340B Program was weak because it primarily relied on covered entities and manufacturers to police themselves and ensure their own compliance with program requirements. Upon enrollment into the program, HRSA requires participants to self-certify that they will comply with applicable 340B Program requirements and any accompanying agency guidance, and expects participants to develop the procedures necessary to ensure and document compliance, informing HRSA if violations occur. HRSA officials told us that covered entities and manufacturers could also monitor each other’s compliance with program requirements, but we found that, in practice, participants could face limitations to such an approach. Beyond relying on participants’ self-policing, we also found that HRSA engaged in few activities to oversee the 340B Program and ensure its integrity, which agency officials said was primarily due to funding constraints. Further, although HRSA had the authority to conduct audits of program participants to determine whether program violations had occurred, at the time of our 2011 report, the agency had never conducted such an audit. In our 2011 report, we concluded that changes in the settings where the 340B Program was used may have heightened the concerns about the inadequate oversight we identified. In the years leading up to our report, the settings where the 340B Program was used had shifted to more contract pharmacies and hospitals than in the past, and that trend has continued in recent years. 
We concluded that increased use of the 340B Program by contract pharmacies and hospitals may have resulted in a greater risk of drug diversion to ineligible patients, in part because these facilities were more likely to serve patients that did not meet the definition of a patient of the program. To address these oversight weaknesses, we recommended that the Secretary of HHS instruct the administrator of HRSA to conduct selective audits of covered entities to deter potential diversion. In response to that recommendation, in fiscal year (FY) 2012, HRSA implemented a systematic approach to conducting annual audits of covered entities that is outlined on its website. Now numbering 200 per year, HRSA audits include entities that are randomly selected based on risk-based criteria (approximately 90 percent of the audits conducted each year), and entities that are targeted based on information from stakeholders (10 percent of the audits conducted). (See table 2 for the number of audits conducted by HRSA from FY 2012-2017.) As a result of the audits already conducted, HRSA has identified instances of non-compliance with program requirements, including violations related to drug diversion and the potential for duplicate discounts. The agency has developed a process to address non-compliance through corrective action plans. The results of each year's audits are available on HRSA's website. In our 2011 report, we found that HRSA's guidance on three key program requirements lacked the necessary level of specificity to provide clear direction, making it difficult for participants to self-police or monitor others' compliance, and raising concerns that the guidance could be interpreted in ways that were inconsistent with its intent. 
First, we found that HRSA's nondiscrimination guidance was not sufficiently specific in detailing practices manufacturers should follow to ensure that drugs were equitably distributed to covered entities and non-340B providers when distribution was restricted. Some stakeholders we interviewed for the 2011 report, such as covered entities, raised concerns about the way certain manufacturers interpreted and complied with the guidance in these cases. We recommended that HRSA further clarify its nondiscrimination guidance for cases in which distribution of drugs is restricted and require reviews of manufacturers' plans to restrict distribution of drugs at 340B prices in such cases. In response, HRSA issued a program notice in May 2012 that clarified HRSA's policy for manufacturers that intend to restrict distribution of a drug and provided additional detail on the type of information manufacturers should include in such restricted distribution plans. In addition, we found a lack of specificity in HRSA's guidance on two other issues—the definition of an eligible patient and hospital eligibility for program participation. Specifically, we found that HRSA's guidance on the definition of an eligible patient lacked the necessary specificity to clearly define the various situations under which an individual was considered eligible for discounted drugs through the 340B Program. As a result, covered entities could interpret the definition either too broadly or too narrowly. At the time of our report, agency officials told us they recognized the need to provide additional clarity around the definition of an eligible patient, in part because of concerns that some covered entities may have interpreted the definition too broadly to include non-eligible individuals, such as those seen by providers who were only loosely affiliated with a covered entity. 
Similarly, we found that HRSA had not issued guidance specifying the criteria under which hospitals that were not publicly owned or operated could qualify for the 340B Program. For example, we found HRSA guidance lacking on one of the ways hospitals could qualify for the program, namely by executing a contract with a state or local government to provide services to low-income individuals who are not eligible for Medicaid or Medicare. Specifically, we found that HRSA did not outline any criteria that must be included in such contracts, such as the amount of care a hospital must provide to these low-income individuals, and did not require the hospitals to submit their contracts for review by HRSA. As a result, hospitals with contracts that provided a small amount of care to low-income individuals not eligible for Medicaid or Medicare could claim 340B discounts, which may not have been what the agency intended. Given the lack of specificity in these areas, we recommended that HRSA (1) finalize new, more specific guidance on the definition of an eligible patient, and (2) issue guidance to further specify the criteria that hospitals not publicly owned or operated must meet to be eligible for the 340B program. HRSA agreed with these recommendations and had planned to address them in a comprehensive 340B Program regulation that it submitted to the Office of Management and Budget for review in April 2014. However, HRSA withdrew this proposed regulation in November 2014 following a May 2014 federal district court ruling that the agency had not been granted broad rulemaking authority to carry out all the provisions of the 340B program. After this ruling, the agency issued a proposed omnibus guidance in August 2015 to interpret statutory requirements for the 340B program in areas where it did not have explicit rulemaking authority, including further specificity on the definition of a patient of a covered entity and hospital eligibility for 340B program participation. 
However, in January 2017, the agency withdrew the guidance following the administration's January 20 memorandum directing agencies to withdraw or postpone regulations and guidance that had not yet taken effect. In July 2017, HRSA indicated that it was working with HHS to determine next steps regarding the proposed Omnibus Guidance, which included the patient definition, but that it was unable to further clarify guidance on hospital eligibility without additional authority. Given the increase in the number of contract pharmacies in the 340B Program and concerns that contract pharmacy arrangements present an increased risk to the integrity of the program, we were asked to review contract pharmacy use under the 340B Program. For this review, we are planning to address the following four questions. To what extent do the various types of covered entities use contract pharmacies and where are the pharmacies located? What, if any, financial arrangements do covered entities have with contract pharmacies and third-party administrators related to the administration and dispensing of 340B drugs, and how, if at all, does this vary by entity type? To what extent do covered entities provide low-income, uninsured patients with discounts on drugs dispensed by contract pharmacies? How, if at all, do covered entities and HRSA ensure compliance with 340B program requirements at contract pharmacies? We are in the early stages of this work, and we expect to issue a future report on 340B contract pharmacies. Chairman Murphy, Ranking Member DeGette, and Members of the Committee, this concludes my statement. I would be pleased to respond to any questions you may have. For further information about this statement, please contact Debra A. Draper at (202) 512-7114 or draperd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. 
Key contributors to this statement were Michelle Rosenberg, Assistant Director; Rotimi Adebonojo; Jennie Apter; and Amanda Cherrin. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

According to HRSA, the purpose of the 340B Program, which was created in 1992, is to enable covered entities to stretch scarce federal resources to reach more eligible patients, and provide more comprehensive services. Covered entities can provide 340B drugs to patients regardless of income or insurance status and generate revenue by receiving reimbursement from patients' insurance. The program does not specify how this revenue is to be used or whether discounts are to be passed on to patients. The number of participating covered entity sites—currently about 38,000—has almost doubled in the past 5 years and the number of contract pharmacies increased from about 1,300 in 2010 to around 18,700 in 2017. In recent years, questions have been raised regarding oversight of the 340B Program, particularly given the program's growth over time. In September 2011, GAO identified inadequacies in HRSA's oversight of the 340B program and made recommendations for improvement. This statement describes (1) HRSA actions in response to GAO recommendations to improve its program oversight, and (2) ongoing GAO work regarding the 340B program and HRSA oversight. For this statement, GAO obtained information and documentation from HRSA officials about any significant program updates and steps they have taken to implement the 2011 GAO recommendations. More detailed information on the objectives, scope, and methodology can be found in GAO's September 2011 report. 
The 340B Drug Pricing Program requires drug manufacturers to sell outpatient drugs at discounted prices to covered entities—eligible clinics, hospitals, and others—to have their drugs covered by Medicaid. Covered entities are only allowed to provide 340B drugs to certain eligible patients. Entities dispense 340B drugs through in-house pharmacies or contract pharmacies, which are outside pharmacies that entities contract with to dispense drugs on their behalf. The number of contract pharmacies has increased significantly in recent years. In its September 2011 report, GAO found that the Health Resources and Services Administration's (HRSA) oversight of the 340B program was inadequate to ensure compliance with program rules, and GAO recommended actions that HRSA should take to improve program integrity, particularly given significant growth in the program in recent years. HRSA has taken steps to address two of GAO's four recommendations: HRSA initiated audits of covered entities. GAO found that HRSA's oversight of the 340B Program was weak because it primarily relied on covered entities and manufacturers to ensure their own compliance with program requirements and HRSA engaged in few oversight activities. GAO recommended that HRSA conduct audits of covered entities and in fiscal year 2012, HRSA implemented a systematic approach to conducting annual audits of covered entities. HRSA now conducts 200 audits a year, which have identified instances of non-compliance with program requirements, including the dispensing of drugs to ineligible patients. HRSA clarified guidance for manufacturers. GAO found a lack of specificity in guidance for manufacturers for handling cases in which distribution of drugs is restricted, such as when there is a shortage in drug supply. GAO recommended that HRSA refine its guidance. 
In May 2012, HRSA clarified its policy for when manufacturers restricted distribution of a drug and provided additional detail on the type of information manufacturers should include in their restricted distribution plans. HRSA has not clarified guidance on two issues. GAO also found that HRSA guidance on (1) the definition of an eligible patient and (2) hospital eligibility criteria for program participation lacked specificity and recommended that HRSA clarify its guidance. HRSA agreed that clearer guidance was necessary and, in 2015, released proposed guidance that addressed both issues. However, earlier this year, the agency withdrew that guidance in accordance with recent directives to freeze, withdraw, or postpone pending federal guidance. Given particular concerns that the significant escalation in the number of contract pharmacies poses a potential risk to the integrity of the 340B Program, GAO was asked to examine this issue and expects to issue a future report, in which it plans to address the extent to which covered entities use contract pharmacies; financial arrangements between covered entities and pharmacies; the provision of discounts on drugs dispensed by contract pharmacies to low-income, uninsured patients; and how covered entities and HRSA ensure compliance with 340B program requirements at contract pharmacies.
Before discussing the specifics of H.R. 4246, I would like to provide an overview of the risks of severe disruption facing our nation's critical infrastructure and the steps being taken to address these risks. In particular, the explosive growth in computer interconnectivity over the past 10 years has significantly increased the risk that vulnerabilities exploited within one system will affect other connected systems. Massive computer networks now provide pathways among systems that, if not properly secured, can be used to gain unauthorized access to data and operations from remote locations. While the threats or sources of these problems can include natural disasters, such as earthquakes, and system-induced problems, such as the Year 2000 (Y2K) date conversion problem, government officials are increasingly concerned about attacks from individuals and groups with malicious intentions, such as terrorists and nations engaging in information warfare. The resulting damage can vary, depending on the threat. Critical operations can be disrupted or otherwise sabotaged, sensitive data can be read and copied, and data or processes can be tampered with. A significant concern is that terrorists or hostile foreign states could launch computer-based attacks on critical systems, such as those supporting energy distribution, telecommunications, and financial services, to severely damage or disrupt our national defense or other operations, resulting in harm to the public welfare. Understanding these risks to our computer-based infrastructures and determining how best to mitigate them are major information security challenges. The federal government is beginning to take steps to address those challenges. In 1996, the President's Commission on Critical Infrastructure Protection was established to investigate our nation's vulnerability to both cyber and physical threats. 
In its October 1997 report, Critical Foundations: Protecting America's Infrastructures, the Commission described the potential devastating implications of poor information security from a national perspective. In May 1998, Presidential Decision Directive 63 (PDD 63) was issued in response to this report and recognized that addressing computer-based risks to our nation's critical infrastructures required a new approach that involves coordination and cooperation across federal agencies and among public- and private-sector entities and other nations. PDD 63 created several new entities for developing and implementing a strategy for critical infrastructure protection. In addition, it tasked federal agencies with developing critical infrastructure protection plans and establishing related links with private industry sectors. Since then, a variety of activities have been undertaken, including development and review of individual agency critical infrastructure protection plans, identification and evaluation of information security standards and best practices, and efforts to build communication links with the private sector. In January 2000, the White House released its National Plan for Information Systems Protection as a first major element of a more comprehensive effort to protect the nation's information systems and critical assets from future attacks. This plan focuses largely on federal efforts being undertaken to protect the nation's critical cyber-based infrastructures. Subsequent plans are to address a broader range of concerns, including the specific roles industry and state and local governments will play in protecting physical and cyber-based infrastructures from deliberate attacks as well as international aspects of critical infrastructure protection. The end goal of this process is to develop a comprehensive national strategy for critical infrastructure assurance, as envisioned by PDD 63, and to have this plan fully operational in 2003. 
The plan proposes achieving its twin goals of making the U.S. government a model of information security and developing public-private partnerships to defend our national infrastructure through 10 programs listed in figure 1. The program involving sharing attack warning and information specifically seeks to bolster information exchange efforts with the private sector. In particular, the program aims to establish a Partnership for Critical Infrastructure Security and a National Infrastructure Assurance Council to increase corporate and government communications about shared threats to critical information systems. It also encourages the creation of Information Sharing and Analysis Centers (ISAC) to facilitate public-private sector information sharing about actual threats and vulnerabilities in individual infrastructure sectors. Two ISACs are already in operation: (1) the Financial Services ISAC, which exclusively serves the banking, securities, and insurance industries, and (2) the National Coordinating Center for Telecommunications, which is a joint industry/government organization. Several more ISACs are expected to be established by the end of the year. Partnerships such as the ISACs are central to addressing critical infrastructure protection. However, some in the private sector have expressed concerns about voluntarily sharing information with the government. For example, concerns have been raised that industry could potentially face antitrust violations for sharing information with other industry partners, have their information be subject to the Freedom of Information Act (FOIA), or face potential liability concerns for information shared in good faith. H.R. 4246 was introduced on April 12, 2000, with the aim of addressing these concerns and encouraging the secure disclosure and exchange of information about cyber security problems and solutions. 
In many respects, the bill is modeled after the Year 2000 Information and Readiness Disclosure Act, which provided limited exemptions and protections for the private sector in order to facilitate the sharing of information on Y2K readiness. In particular, H.R. 4246 (1) protects information being provided by the private sector from disclosure by federal entities under FOIA or disclosure to or by any third party, (2) prohibits the use of the information by any federal and state organization or any third party in any civil actions, and (3) enables the President to establish and terminate working groups composed of federal employees for the purposes of engaging outside organizations in discussions to address and share information about cyber security. In essence, the bill seeks to enable the federal government to ask industry questions about events or incidents threatening critical infrastructures, correlate them at a national level in order to build a baseline understanding of infrastructures, and use these baselines to identify anomalies and attacks—something it is not doing now. Addressing similar concerns proved valuable in addressing the Y2K problem. Although Y2K was a unique and finite challenge, it parallels the critical infrastructure challenge in some important respects. Like critical infrastructure protection, for instance, Y2K spanned the entire spectrum of our national, as well as the global, economy. Moreover, given the scores of interdependencies among private sector companies, state and local governments, and the federal government, a single failure in one system could have repercussions on an array of public and private enterprises. As a result, public/private information sharing was absolutely essential to ensuring compliance in supply chain relationships and reducing the amount of Y2K work. Early on, Y2K information bottlenecks were widespread in the private sector. 
According to the President's Council on Year 2000 Conversion, antitrust issues and a natural tendency to compete for advantage made working together on Y2K difficult, if not inconceivable, for many companies. Moreover, the threat of lawsuits had companies worried that they would be held liable for anything they said about the Y2K compliance of products or devices they used or test processes and results for them. Legal considerations also prevented companies from saying anything about their own readiness for the date change. Thus, as noted by the council, their business partners, as well as the general public, may have assumed the worst. According to the council, the Year 2000 Information and Readiness Disclosure Act paved the way for more disclosures about Y2K readiness and experiences with individual products and fixes. Several major telecommunications companies, for example, indicated their willingness to share Y2K information with smaller companies who contacted them. And the leaders of the electric power industry began a series of regional conferences for local distribution companies in which they discussed identified problems and solutions, particularly with embedded chips, as well as testing protocols and contingency planning. Moreover, the act helped facilitate the work of the more than 25 sector-based working groups established by the council and other outreach activities. For example, the council and federal agencies were able to establish partnerships with several private-sector organizations, such as the North American Electric Reliability Council, to gather information critical to the nation's Y2K efforts and to address issues such as contingency planning. 
Concerned about the lack of information in some key industry areas, the council also convened a series of roundtable meetings in the spring and summer of 1999, which helped to shed light on the status of readiness efforts relating to pharmaceuticals, food, hospital supplies, transit, public safety, the Internet, education, and chemicals. The assessment reports resulting from these and other activities substantially increased the nation’s understanding of the Y2K readiness of key industries. Removing barriers to information sharing between government and industry can similarly enhance critical infrastructure protection. Both government and industry are key components of the infrastructure, both are potential targets for cyber threats, and both face significant gaps in effectively dealing with the threats. As such, both must work together to identify threats and vulnerabilities and to develop response strategies. In particular, by combining information concerning the type of incidents and attacks experienced with the information obtained through federal intelligence and law enforcement sources, the government can develop and share more informative warnings and advisories. In turn, companies can develop a better understanding of the threats facing their particular infrastructures and be better prepared to take appropriate actions to protect their sectors. By addressing private sector concerns about sharing information, H.R. 4246 could have a positive effect similar to the one the Year 2000 Information and Readiness Disclosure Act had in resolving the Y2K problem. At the same time, there are two formidable challenges to making this legislation a success. First, while information sharing is important, the government needs to be sure that it is collecting the right type of information, that it can effectively synthesize and analyze it, and that it can appropriately share its analysis. 
A significant amount of work still needs to be done just in terms of ensuring that the right type of information is collected. For example, what information is required that will enable the government to detect a nationally significant cyber attack? Will information on intrusions, software anomalies, or reports of significant system failures provide an accurate baseline for making these determinations? Today, officials in the intelligence community do not know with real certainty what constitutes a cyber attack. Further, a 1996 Defense Science Board report stressed that understanding the information warfare process and indications of information warfare attacks will likely require an unprecedented effort to collect, consolidate, and synthesize data from a range of owners of infrastructure assets. The ISACs being established to facilitate public-private sector information sharing can assist in meeting this challenge. However, as noted earlier, only two ISACs are in operation and proposals regarding these centers are presented only in broad terms in the administration's preliminary National Plan for Information Systems Protection. Once the government is sure that it is asking for the right type of information, it will need effective mechanisms for collecting and analyzing it. Building a common operational picture of critical infrastructures and determining if an attack is underway requires the government to develop capabilities to quickly and accurately correlate information from different infrastructures and reports of security incidents. This is a complex and challenging task in itself. Data on possible threats—ranging from viruses, to hoaxes, to random threats, to news events, and computer intrusions—must be continually collected and analyzed from a wide spectrum of globally distributed sources in addition to sector-based groups. 
Nevertheless, fusing the right information from the public and private sectors in an operational setting is essential to detecting, warning, and responding to information-based attacks. The National Infrastructure Protection Center (NIPC), located in the Federal Bureau of Investigation, is charged with this mission, but it is not clear whether NIPC has the right tools and resources needed to successfully coordinate information collection efforts with the private sector and to effectively correlate and analyze information received. We are currently engaged in an effort to review this capability. In addition to collecting and analyzing data, the federal government needs to be able to effectively share information about infrastructure threats. Again, NIPC is charged with this responsibility and we are also reviewing its capability with respect to this issue. But, already, results in this area have been mixed. In December 1999, NIPC provided early warnings about a rash of denial-of-service attacks prominently on its website—2 months before the attacks arrived in full force—and offered a tool that could be downloaded to scan for the presence of the denial-of-service code. However, as we recently testified, NIPC had less success with the ILOVEYOU virus. NIPC first learned of the virus at 5:45 a.m. EDT from an industry source, yet it did not issue an alert about the virus on its own web page until 11 a.m.—hours after many federal agencies were reportedly hit. This notice was a brief advisory; NIPC did not offer advice on dealing with the virus until 10 p.m. that evening. The lack of a more effective early warning clearly affected most federal agencies. Only 7 of the 20 we contacted were spared widespread infection, which resulted in slowing some agency operations and requiring the diversion of technical staff toward stemming the virus' spread and cleaning "infected" computers. Moreover, NIPC did not directly warn the financial services ISAC about the impending threat. 
The second challenge to realizing the goals of H.R. 4246 is that, to truly engage the private sector, the federal government needs to be a model for computer security. Currently, the federal government is not a model. As emphasized in the National Plan for Information Systems Protection, the federal government specifically needs to be able to demonstrate that it can protect its own critical assets from cyber attack as well as lead research and development and educational efforts in the field of computer security. However, audits conducted by GAO and agency inspectors general show that 22 of the largest federal agencies have significant computer security weaknesses, ranging from poor controls over access to sensitive systems and data, to poor control over software development and changes, to nonexistent or weak continuity of service plans. Importantly, our audits have repeatedly identified serious deficiencies in the most basic controls over access to federal systems. For example, managers often provided overly broad access privileges to very large groups of users, affording far more individuals than necessary the ability to browse, and sometimes modify or delete, sensitive or critical information. In addition, access was often not appropriately authorized or documented; users often shared accounts and passwords or posted passwords in plain view; software access controls were improperly implemented; and user activity was not adequately monitored to deter and identify inappropriate actions. While a number of factors have contributed to weak federal information security, such as insufficient understanding of risks, technical staff shortages, and a lack of system and security architectures, the fundamental underlying problem is poor security program management. Agencies have not established the basic management framework needed to effectively protect their systems. 
Based on our 1998 study of organizations with superior security programs, this involves managing information security risks through a cycle of risk management activities that include (1) assessing risk and determining protection needs, (2) selecting and implementing cost-effective policies and controls to meet these needs, (3) promoting awareness of policies and controls and of the risks that prompted their adoption, and (4) implementing a program of routine tests and examinations for evaluating the effectiveness of policies and related controls. Additionally, a strong central focal point can help ensure that the major elements of the risk management cycle are carried out and can serve as a communications link among organizational units. I would also like to emphasize that while individual agencies bear primary responsibility for the information security associated with their own operations and assets, there are several areas where governmentwide criteria and requirements also need to be strengthened. Specifically, there is a need for routine periodic independent audits of agency security programs to provide a basis for measuring agency performance and information for strengthened oversight. As we recently testified, a bill has been introduced in the Senate this year—the Proposed Government Information Security Act (S. 1993)—which provides a requirement for such audits. There is also a need for more prescriptive guidance regarding the level of protection that is appropriate for agency systems, strengthened central leadership and coordination of information security-related activities across government, strengthened incident detection and response capabilities, and adequate technical expertise and funding. For example, central leadership and coordination of information security-related activities across government is lacking. 
Under current law, responsibility for guidance and oversight of agency information security is divided among a number of agencies, including the Office of Management and Budget (OMB), which is responsible for developing information security policies and overseeing agency practices; the National Institute of Standards and Technology, which is charged with developing technical standards and providing related guidance for sensitive data; and the National Security Agency, which is responsible for setting information security standards for national security agencies. Other organizations are also becoming involved through the administration's critical infrastructure protection initiative, including NIPC; the Critical Infrastructure Assurance Office, which is working to foster private-public relationships; and the Federal Computer Incident Response Capability (FedCIRC), which is the central coordination and analysis facility dealing with computer security–related issues affecting the civilian agencies and departments across the federal government. While some coordination is occurring, overall, this has resulted in a proliferation of organizations with overlapping oversight and assistance responsibilities. Absent is a strong voice of leadership and a clear understanding of roles and responsibilities. As we recently testified, having strong, centralized leadership has been critical to addressing other governmentwide management challenges. For example, vigorous support from officials at the highest levels of government was necessary to prompt attention and action on resolving the Y2K problem. Similarly, forceful, centralized leadership was essential to pressing agencies to invest in and accomplish basic management reforms mandated by the Chief Financial Officers Act. 
To achieve similar results for critical infrastructure protection, the federal government must have the support of top leaders and more clearly defined roles for those organizations that support governmentwide initiatives. In summary, by removing private sector concerns about sharing information on critical infrastructure threats, H.R. 4246 can facilitate private-public partnerships and help spark the dialogue needed to identify threats and vulnerabilities and to develop response strategies. For the concepts in H.R. 4246 to work, however, this legislation needs to be accompanied by aggressive outreach efforts; effective centralized leadership; and good tools for collecting, analyzing, and sharing information. Moreover, the federal government cannot realistically expect to engage private-sector participation without putting its own house in order. Doing so will require concerted efforts by senior executives, program managers, and technical specialists to institute the basic management framework needed to effectively detect, protect against, and recover from critical infrastructure attacks. Moreover, it will require cooperative efforts by executive agencies and by the central management agencies, such as OMB, to address crosscutting issues and to ensure that improvements are realized. Mr. Chairman, this concludes my statement. I would be happy to answer any questions you or other Members of the Subcommittee may have. For questions regarding this testimony, please contact Jack L. Brock, Jr. at (202) 512-6240. Individuals making key contributions included Cristina Chaplain, Michael Gilmore, and Paul Nicholas.

Pursuant to a congressional request, GAO discussed the proposed Cyber Security Information Act of 2000 (H.R. 4246), focusing on how it can enhance critical infrastructure protection and the formidable challenges involved with achieving the goals of the bill.
GAO noted that: (1) by removing key barriers that are precluding private industry from sharing information about infrastructure threats and vulnerabilities, H.R. 4246 can help build the meaningful private-public partnerships that are integral to protecting critical infrastructure assets; (2) however, to successfully engage the private sector, the federal government itself must be a model of good information security; (3) currently, it is not; (4) significant computer security weaknesses--ranging from poor controls over access to sensitive systems and data, to poor control over software development and changes, to nonexistent or weak continuity of service plans--pervade virtually every major agency; (5) and, as illustrated by the recent ILOVEYOU computer virus, mechanisms already in place to facilitate information sharing among federal agencies about impending threats and vulnerabilities have not been working effectively; and (6) moreover, the federal government may not yet have the right tools for identifying, analyzing, coordinating, and disseminating the type of information that H.R. 4246 envisions collecting from the private sector.
In May 2009, Congress passed the Weapon Systems Acquisition Reform Act of 2009 (Reform Act) in an effort to improve the way weapon systems are acquired and avoid further cost overruns on such programs. When signing the Reform Act into law, the President stated that its purpose was to limit weapon system cost overruns and that it would strengthen oversight and accountability by appointing officials who will closely monitor the weapon systems acquisition process to ensure that costs are controlled. Four offices were established as a result of the Reform Act: SE, DT&E, CAPE, and PARCA. The SE and CAPE offices existed under other organizational titles prior to the Reform Act. Staffing levels following the Reform Act remained relatively stable for both of these offices, but some reorganization was necessary to reflect new Reform Act responsibilities. The DT&E and PARCA offices were newly established. The key roles and responsibilities of these four offices as outlined in the Reform Act are explained below. Each of the offices has varying levels of interaction with defense acquisition programs. For example, the SE and DT&E offices have ongoing interaction with acquisition programs throughout development in the form of program reviews, working group activities, and other program meetings. They also coordinate program documents in preparation for major milestone reviews. CAPE issues guidance to programs on how to conduct an analysis of alternatives at the beginning of the acquisition process. The office approves the analysis of alternatives study plan that is developed based on its guidance. It then develops independent cost estimates for major milestone reviews and in the event that an acquisition program experiences a Nunn-McCurdy breach.
According to PARCA, it assesses all major defense acquisition programs at least once per quarter or when requested by the Under Secretary of Defense for Acquisition, Technology and Logistics, and disseminates this information to senior leaders. The office also interacts with specific programs if they experience a Nunn-McCurdy breach. In these cases, the office assesses program performance not less than semiannually until 1 year after the program receives a new milestone approval. In addition to the new organizational requirements, the Reform Act requires DOD to ensure that the acquisition strategy for major defense acquisition programs includes measures to ensure competition, or the option of competition, throughout the program life cycle. This could include strategies such as maintaining two sources for a system (dual-sourcing) and breaking requirements for supplies or services previously provided or performed under a single contract into separate smaller contracts (unbundling of contracts). Major defense acquisition programs are also required to provide for competitive prototyping—where two or more competing teams produce prototypes before a design is selected for further development—prior to Milestone B unless a waiver is properly granted by the milestone decision authority, and to meet the following Milestone B certification requirements:

- Appropriate trade-offs among cost, schedule, and performance objectives have been made to ensure the program is affordable;
- A preliminary design review and formal post-preliminary design review assessment have been conducted, and on the basis of that assessment the program demonstrates a high likelihood of accomplishing its intended mission;
- Technology has been demonstrated in a relevant environment on the basis of an independent review and assessment by the Assistant Secretary of Defense for Research and Engineering;
- Reasonable cost and schedule estimates have been developed to execute the program's product development and production plan, with the concurrence of the Director of CAPE;
- Funding is available to execute the program's product development and production plan;
- DOD has completed an analysis of alternatives with respect to the program; and
- The Joint Requirements Oversight Council has approved program requirements, including an analysis of the operational requirements.

The Reform Act also requires the Joint Requirements Oversight Council to ensure trade-offs among cost, schedule, and performance objectives are considered for joint military requirements. GAO previously reported that the Council considered trade-offs made by the military services before validating requirements, but the military services did not consistently provide high-quality resource estimates to the Council for proposed programs in fiscal year 2010. We also found that the Council did not prioritize requirements, consider redundancies across proposed programs, or prioritize and analyze capability gaps in a consistent manner. DOD has implemented most of the fundamental Reform Act provisions as required and is taking additional steps to strengthen acquisition reviews, policies, and capabilities. Offices established as a result of the Reform Act are continuing to issue policies, review and approve relevant acquisition documents, monitor weapon acquisition program activities, and develop performance measures. In addition, all four of the major defense acquisition programs we reviewed that had not started development when we selected our case studies plan to implement Reform Act provisions regarding preliminary design reviews, competitive prototyping, and competition. Also, some provisions, such as CAPE's issuance of guidance on estimating operating and support costs, are still in the process of being completed.
Finally, we found that the Under Secretary of Defense for Acquisition, Technology and Logistics has revised the defense acquisition review process to consider additional knowledge collected on programs earlier, and efforts are being made to strengthen acquisition policies and capabilities. The offices established as a result of the Reform Act—SE, DT&E, CAPE, and PARCA—are continuing to make progress in implementing four fundamental Reform Act provisions aimed at strengthening acquisition outcomes and oversight of weapon acquisition programs. Specifically, the offices are (1) developing policy and guidance for the military services for conducting work in their respective areas, (2) approving acquisition documents prior to milestone reviews, (3) monitoring and assessing weapon acquisition program activities on a consistent basis, and (4) developing performance measures to assess acquisition program activities. Figure 1 provides the status of DOD efforts to implement the four fundamental provisions. Some offices are still in the process of completing a few of these provisions. For example, CAPE and PARCA are in the process of developing policies and guidance for their respective areas, and DT&E is in the process of establishing performance measures that can be used to assess weapon acquisition program activities. The office piloted the performance measures on two major defense acquisition programs and reported that it is currently applying them to over 40 programs. Note that some activities related to approving documents and monitoring or assessing programs require ongoing efforts on the part of some of the offices. We also found evidence that major defense acquisition programs are integrating Reform Act provisions in their acquisition strategies.
The four weapon acquisition programs we reviewed that had not started development activities when we began our review plan to implement Reform Act provisions related to preliminary design reviews, competitive prototyping, and competition. For example, the Ground Combat Vehicle program has two contractors developing competitive prototypes of two key subsystems to support technology development. The program intends to conduct preliminary design reviews on both contractors' designs prior to Milestone B and to conduct full and open competition through Milestone C. Similarly, according to program officials, the Joint Light Tactical Vehicle program had three contractors develop full-system prototypes during the technology development phase and held preliminary design reviews on each contractor's design prior to Milestone B. The program plans to continue competition throughout engineering and manufacturing development. None of the four programs in our review received a waiver from Reform Act provisions. The Office of the Secretary of Defense (OSD) is taking additional steps to strengthen the department's oversight of weapon acquisition programs and its guidance for developing them. In June 2011, for example, the Under Secretary of Defense for Acquisition, Technology and Logistics revised the weapon acquisition review process to consider acquired knowledge on weapon acquisition programs earlier than before. The revised review process includes two new review points. The first new review—the pre-engineering and manufacturing development review—occurs before the release of a final request for proposals for the engineering and manufacturing development phase. The purpose of this new review is to assess each program's acquisition strategy, request for proposals, and key related planning documents earlier in the process, and to determine whether program plans are affordable, executable, and reflect sound business arrangements.
The second new review—the acquisition strategy and request for proposals review and approval—occurs prior to Milestone C, the production decision. The review provides the milestone decision authority an opportunity to review the acquisition strategy and request for proposals for the production and deployment phase prior to Milestone C. Figure 2 illustrates the revised review process. According to the Under Secretary of Defense for Acquisition, Technology and Logistics, who is the authority for making milestone decisions for most major weapon acquisitions, the prior review process did not provide an adequate opportunity for review of program plans prior to release of the final request for proposals—the point at which DOD's requirements, schedule, planned program content, and available funding should be firm and available for review. Further, the Under Secretary stated that making changes to acquisition strategies and program plans after all bidding activities, proposal evaluation, and source selection are complete is difficult and highly disruptive. According to the Under Secretary, DOD is also rewriting DOD Instruction 5000.02 to extensively restructure its acquisition policies. This update will implement Section 832 of the National Defense Authorization Act for Fiscal Year 2012, which requires DOD to issue guidance on actions to be taken to improve its processes for estimating, managing, and reducing operating and support costs, as well as to ensure competition in the maintenance and sustainment of subsystems of major weapon systems, among other things. In addition to current policies implementing the Reform Act, officials stated that key provisions from the Reform Act will also be included in the updated instruction. Beyond implementing the provisions of the Reform Act, DOD offices have taken other steps to strengthen acquisition capabilities throughout the department.
For example:

- The SE office, according to DOD officials, led efforts to establish working groups to help the services address systemic reliability issues across the unmanned aircraft and rotary wing portfolios earlier in the process. The office also led several workforce development initiatives to attract and retain a qualified engineering workforce and supported the implementation of legislation requiring each acquisition program office to name a key technical advisor who is responsible for all engineering activities.
- The DT&E office, according to DOD officials, championed updates to DOD Instructions that will require weapon acquisition programs to consider using DOD test capabilities before paying contractors to develop similar capabilities. In addition, the office supported legislation requiring major defense acquisition program offices to have a government test agency serving as the lead developmental test and evaluation organization for the program and a chief developmental tester. The chief developmental tester position, as included in the National Defense Authorization Act for Fiscal Year 2012, is tasked with coordinating the planning, management, and oversight of all developmental testing activities, among other things.
- The CAPE office, according to DOD officials, established an operating and support cost directorate to build its expertise and place more emphasis on developing better operating and support cost estimates throughout the acquisition life cycle. This directorate will coordinate the development of an operating and support cost estimating guidebook.
- The PARCA office, according to DOD officials, is providing additional insights to the Under Secretary of Defense for Acquisition, Technology and Logistics on systemic acquisition problems.
Specifically, the office is examining a wide range of acquisition-related information from the past 40 years, such as contract type, stability of key performance parameters, and program manager tenure, to determine whether there is any statistical correlation between these factors and good or poor acquisition outcomes. We identified four key areas where the Reform Act had a significant influence on programs in the 3 years since it was enacted: (1) requirements, (2) cost and schedule, (3) testing, and (4) reliability. These four areas have been common sources of problems in the past. For example, the services typically started new weapon acquisition programs with requirements that were both demanding and inflexible and planned to use relatively unproven technologies to meet the requirements—all of which increased program risks. In addition, cost and schedule estimates were frequently too optimistic given the proposed requirements and technologies. Design problems stemming from rigid requirements and the use of immature technologies to meet them were often discovered during testing and fixed late in the development cycle, resulting in cost increases, performance shortfalls, and schedule delays. Finally, DOD's inattention to reliability has resulted in a dramatic increase in the number of systems that have not met suitability requirements during operational testing. Deficiencies—such as high failure rates and disappointing improvements in the reliability, availability, and maintainability of weapon systems—have limited program performance and increased operating and support costs. We examined 11 programs at various stages of the acquisition process to determine how the offices and policies established as a result of the Reform Act affected their acquisition strategies and decision-making processes. Four programs had not yet passed Milestone B, development start, at the time we began our review.
Of the remaining seven programs, three had breached Nunn-McCurdy cost thresholds since the act was passed and have had to satisfy the Reform Act's new certification requirements. The other four programs had significant interaction with one or more of the OSD offices established by the Reform Act. Table 3 indicates in which of the four areas each program has been affected by the Reform Act. In some cases, programs have made changes based on input from OSD offices such as systems engineering or developmental test and evaluation; in other cases, programs have integrated Reform Act policies, such as preliminary design reviews and competitive prototyping, into their acquisition planning. Programs that were already in development or production when the Reform Act was passed were less likely to have interactions with the OSD offices on requirements trades because these discussions typically occur prior to Milestone B. A discussion of how individual programs have been affected in the areas of requirements, cost and schedule realism, testing, and reliability follows. The Reform Act places significant emphasis on early problem solving and requires programs to put much more effort toward considering trade-offs among cost, schedule, and performance requirements prior to Milestone B. As part of this effort, it requires the Secretary of Defense to ensure that acquisition, budget, and cost estimating officials have the opportunity to raise cost and schedule matters before performance objectives are established. The Reform Act also charges the Joint Requirements Oversight Council with the responsibility to ensure that cost, schedule, and performance trade-offs for joint military requirements are considered, and to include combatant commanders in the process to ensure the user's needs are adequately satisfied.
The offices established as a result of the Reform Act have helped programs, such as the Joint Light Tactical Vehicle and Ground Combat Vehicle, make trade-offs among cost, schedule, and technical performance requirements. As a result, these programs have developed more realistic acquisition strategies from a cost, schedule, and technical standpoint. Joint Light Tactical Vehicle: The program held several reviews prior to Milestone B to identify, modify, or eliminate requirements that were unachievable or unaffordable, thus establishing a more technically realistic program. Officials from DT&E and SE participated in these reviews. By involving both the requirements and acquisition communities in the reviews, the Army was able to reduce the required capability to cut costs while ensuring that trade-off decisions would not impair the system's ability to meet the warfighter's operational needs. Examples of requirements changes that helped to cut costs as well as reduce risk include:

- allowing the active suspension system, crew displays, and integrated starter-generator to be tradable design features. These changes resulted in a 30 percent reduction in the average unit manufacturing cost, from the initial target of $475,000 to $331,000, while at the same time reducing technical and weight risk. According to program officials, this makes the $250,000 unit manufacturing cost goal more achievable.
- reducing the reliability requirement and changing the Army helicopter lift requirement based on the results of technology development prototype testing. This mitigated technical risks going into development.

The program recently moved into engineering and manufacturing development, but not all requirements issues have been resolved and future trade-offs may be necessary. For example, early testing showed that none of the three prototype variants met the program's soft soil or sand slope requirement. This requirement has not been changed.
Program and OSD officials are monitoring this issue closely and plan to actively manage it during engineering and manufacturing development. Ground Combat Vehicle: The Ground Combat Vehicle program exhibited some of the same problems experienced by previous DOD programs prior to Milestone B—demanding and inflexible requirements. The SE office and the Under Secretary of Defense for Acquisition, Technology and Logistics are helping the program set achievable requirements. Following its materiel development decision in February 2010, the program issued a request for proposals that contained nearly 1,000 requirements and a challenging 7-year schedule for the delivery of the first production vehicle. At the request of the Under Secretary of Defense for Acquisition, Technology and Logistics, the Army established an independent review team to assess the risks associated with the program's schedule. The team, which included an SE official, raised concerns about the program's high number of mandatory requirements and the risks associated with the 7-year schedule. To mitigate program risks, the Army reduced the number of performance requirements by about 25 percent and prioritized the others, giving competing contractors flexibility in addressing some performance requirements. The Army issued a revised request for proposals in November 2010. In August 2011, the Under Secretary of Defense for Acquisition, Technology and Logistics approved the program's entry into technology development, but expressed concern about the cost and schedule risks associated with delivering a production vehicle in 7 years. Because of these concerns, the Under Secretary directed the Army to consider other alternatives, such as existing vehicles, that could meet warfighter needs. The analysis is currently planned to be completed in March 2013 to inform the Milestone B decision.
By establishing a new cost assessment and program evaluation office and requiring this office to scrutinize program cost and schedule estimates beginning at Milestone A, CAPE officials believe that the Reform Act has helped infuse more realism in cost estimates and promote earlier discussions about affordability. CAPE officials also believe that because their independent cost and schedule estimates have become more visible within DOD and Congress, the military services are developing more realistic estimates. We saw evidence of these benefits in the programs we reviewed, including the Ohio Class Replacement, Littoral Combat Ship Seaframe, and F-35 Joint Strike Fighter programs. Ohio Class Replacement: The CAPE office was involved in the decision-making process to ensure program affordability. The office prepared an independent cost estimate and reviewed the program’s affordability goals prior to Milestone A. The service and independent estimates were within 2 percent of each other. However, the Under Secretary of Defense for Acquisition, Technology and Logistics directed the Navy to do a rigorous cost comparison of a 16 missile tube design versus a 20 missile tube design. The Navy determined that a 16 missile tube configuration would meet warfighter requirements and users’ needs while reducing program costs by about $200 million per submarine, or approximately $3 billion for the total program. It would also simplify the ships’ design and integration effort. The CAPE office validated the savings associated with the 16 missile tube design. As a result, the Navy incorporated the 16 missile configuration as the program baseline. Littoral Combat Ship Seaframe: The CAPE office helped make program costs more visible. Prior to the program’s Milestone B decision, CAPE completed an independent cost estimate of the seaframe program and found that the resources in the future years’ defense plan budget were lower than the projected program costs for the same time period. 
Navy officials attributed this problem to the overlap between the timing of the milestone decision and the president's budget submission. CAPE further noted that the resources in the Navy's budget did not include the additional development activities required to support two full ships. Without this information, decision makers would not have had visibility into the expected costs of the seaframe program or been able to make more fully informed decisions. As a result, the Navy re-phased its funding in the budget, adding approximately $397 million to fully fund the development program. Joint Strike Fighter: The SE and CAPE offices helped the program develop more realistic cost and schedule estimates. CAPE officials had been involved in reviews of the F-35 Joint Strike Fighter program even prior to the passage of the Reform Act and have continued to be heavily involved in subsequent program reviews. For example, the cost analysis improvement group, which was the predecessor to CAPE, led a multi-functional joint estimating team review of the program in 2008. This review found problems with the program's funding and schedule. In 2010, the Joint Strike Fighter program notified Congress that its estimated unit costs had increased by more than 80 percent since the original Milestone B baseline in 2001. This increase triggered a Nunn-McCurdy unit cost breach and later prompted the program executive officer to commission a technical baseline review of the program to help determine the resources needed to complete development. Officials from the SE office participated in this technical review. The CAPE office also did an independent cost estimate of the program as part of the Nunn-McCurdy certification process.
Based on information from these efforts, DOD developed a more realistic program plan by adding $4.6 billion to the development program, reducing near-term procurement quantities by 125 aircraft, and extending the development test period by 4 years to accommodate developmental testing, address the increased program scope, and fix software issues. The Reform Act significantly strengthened the role of developmental testing in the department. In the 2 decades prior to the Reform Act, the prominence of developmental testing had declined within OSD. In the early 1990s, developmental testing was part of an all-encompassing test organization that reported directly to the Under Secretary of Defense for Acquisition. According to a former senior developmental testing official, by 2004 only two people worked on developmental testing activities within the systems engineering organization. In establishing a separate office for DT&E, the Reform Act reinforced the need for robust developmental testing early in the acquisition process. The Reform Act gave the Deputy Assistant Secretary for Developmental Test and Evaluation formal approval authority for the test and evaluation master plans of major defense acquisition programs. This authority enables the Deputy Assistant Secretary to help ensure that programs have robust test and evaluation plans. Our case study reviews illustrate the efforts that the DT&E office has made to help programs such as the Small Diameter Bomb II and KC-46 Tanker obtain more design and performance knowledge early in a program's acquisition life cycle. Small Diameter Bomb II: Air Force program officials acknowledged that developmental and operational testing officials worked closely with them as they prepared the test and evaluation master plan for the Milestone B decision. After reviewing the plan, developmental and operational test officials concluded that the program would benefit from adding a 28-shot test program prior to entering operational testing.
According to the program office, the purpose of the additional testing is to further establish the performance of the weapon in realistic scenarios and to increase the likelihood of completing operational testing without a failure. DT&E officials stated this testing would provide more complete knowledge about the bomb's functionality and help reduce the risk of a major redesign moving forward. Program officials stated that they allocated an additional $41 million to the developmental test program to conduct the 28 additional flight tests prior to operational testing. KC-46 Tanker: The program office acknowledged that the DT&E office, as part of an integrated test team composed of government and industry officials, helped identify options that could add time to the test plan for important testing if unexpected delays are encountered. In its fiscal year 2011 annual report, the operational test and evaluation office reported that the KC-46's planned flight test program was not executable, determining that more time would be needed for military flight-testing. It based this conclusion on the historical flight test experience of similar programs. Program officials stated that they were initially reluctant to change the test plan because they had awarded a fixed-price contract and any changes could result in reopening the contract, leading to potential cost increases. However, the integrated test team identified a recovery period that may be applied to the KC-46 aerial refueling certification if delays are encountered. The contractor now has a plan that could allocate an additional 1.5 months for two test aircraft to complete this testing, if deemed necessary. This testing would provide more knowledge about the program's aerial refueling performance prior to operational testing. DT&E officials stated that they plan to continue working with the program to address overall flight test challenges.
While testing remains one of the program’s risk areas, this change may lessen that risk. The Reform Act emphasizes the need for designing more reliable weapon systems. It charges the Deputy Assistant Secretary for Systems Engineering with the responsibility to ensure the systems engineering approach used by major acquisition programs includes a robust plan for improving reliability. The DT&E office reviews programs’ reliability growth test plans. This testing provides visibility over how reliability is improving and uncovers design problems so fixes can be incorporated before production begins. A reliability growth curve is used to track projected and actual improvements in reliability over time. The Reform Act further requires that the Deputy Assistant Secretary for Systems Engineering develop policies and guidance for the inclusion of provisions relating to systems engineering and reliability growth in requests for proposals. We observed evidence of this increased emphasis in the Joint Light Tactical Vehicle, Remote Minehunting System, Gray Eagle, and Global Hawk programs. Joint Light Tactical Vehicle: The DT&E office helped this program develop a more realistic reliability growth plan prior to Milestone B. Based on the performance of prototype vehicles, developmental test officials determined that the program’s reliability growth curve was unrealistic. For example, officials reported that the program’s initial reliability growth plan assumed a starting reliability that was almost 60 percent higher than what had actually been demonstrated during technology development. It also assumed commonality between the two vehicle variants, a large reliability increase in a short test time, and two corrective action periods. The DT&E office recommended that the program eliminate the vehicle commonality assumption, add more test miles, and add another corrective action period to its test plan. 
It also recommended that the program consider lowering the vehicle’s reliability requirement. Based on this input, the Army revised its plan by adding two vehicles and 40,000 more test miles to ensure reliability is adequately addressed for both variants. With approval of the user, it also reduced the reliability requirement from 3,600 to 2,400 miles mean time between operational mission failures. Remote Minehunting System: The SE office worked with program officials to improve reliability growth planning, which was found to be one of the key factors leading to the program’s Nunn-McCurdy unit cost breach in 2009. Before the breach, program officials had not funded a reliability growth program or established a design-for-reliability process. The program had a reliability goal of 150 hours mean time between failures, but program officials stated that testing demonstrated a reliability of only about 45 hours. Since the breach, the program has worked closely with the SE office to establish a reliability program plan and a growth curve to track reliability improvements. During the Nunn-McCurdy review, the program developed a three-phase reliability growth program to improve the subsystems, components, and manufacturing processes that contributed to poor reliability. According to program officials, phase one of the reliability growth program was completed in 2011, and reliability has improved by 40 percent, going from 45 hours mean time between operational mission failures to 63 hours. Although this improvement is still below the minimum requirement of 75 hours, program officials stated that phase two of the reliability growth program is scheduled to be completed in April 2013 and is projected to achieve the program’s 75-hour minimum requirement. Gray Eagle and Global Hawk: The SE office has worked to improve reliability across the unmanned aircraft portfolio, including the Gray Eagle and Global Hawk. 
Prior to the Gray Eagle’s second low-rate initial production decision in 2011, SE officials raised concerns about the system’s poor reliability. As a result, the Army was directed to undertake a reliability improvement program. The Under Secretary of Defense for Acquisition, Technology and Logistics approved the program for low-rate initial production, but stressed the need to improve the operational reliability as quickly as possible. SE officials worked with program officials to establish a reliability working group, develop reliability growth curves, and develop a reliability enhancement management plan. According to SE officials, the Gray Eagle program initially improved the reliability of the aircraft by 15 percent and the ground control station by 30 percent. According to program officials, the initial reliability goals were overstated and not needed to meet the program’s overall operational availability requirement. Based on initial operational test results in August 2012, the program office is working with the user to redefine the reliability goals without impacting the system’s ability to meet its overall operational availability requirement. According to the PARCA office, these efforts have been informed by a detailed reliability model that it built in consultation with the Army. This model showed the relationship between the aircraft’s reliability and its availability to perform operational missions. SE officials also found similar reliability problems on the Global Hawk program and worked with program officials to establish a reliability growth and improvement plan and reliability growth curves. According to SE officials, the time between unscheduled maintenance on the Global Hawk has improved on the order of 50 to 80 percent. While DOD has taken steps to implement most of the fundamental Reform Act provisions, some key efforts to date have been primarily focused on DOD’s largest major defense acquisition programs. 
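The PARCA reliability model described above for the Gray Eagle is not public; the sketch below instead uses the standard steady-state availability formula, A = MTBF / (MTBF + MDT), to illustrate the general relationship between reliability and operational availability. This is an illustrative sketch only, and all numbers in it are hypothetical.

```python
# Illustrative only: NOT the PARCA model. Uses the standard steady-state
# availability relationship A = MTBF / (MTBF + MDT), where MTBF is mean
# time between failures and MDT is mean downtime per failure.
# All numbers below are hypothetical.

def availability(mtbf_hours: float, mdt_hours: float) -> float:
    """Steady-state operational availability."""
    return mtbf_hours / (mtbf_hours + mdt_hours)

# Hypothetical aircraft with 5 hours of mean downtime per failure:
# improving MTBF from 45 to 63 hours raises availability from
# 0.900 to about 0.926.
before = availability(45.0, 5.0)
after = availability(63.0, 5.0)
print(f"before: {before:.3f}, after: {after:.3f}")
```

A model like this also shows diminishing returns: once MTBF is large relative to downtime, further reliability gains move availability only slightly, which may help explain why a program would weigh reliability goals against its overall availability requirement.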
Expanding the reach of the Reform Act so that it influences all programs, and thereby brings about systemic change to DOD’s weapons acquisition process, still faces challenges. Although senior leaders were receptive to the Reform Act principles, they identified several challenges that currently limit DOD’s ability to broaden the Reform Act’s influence. We grouped these challenges into five general categories: (1) organizational capability constraints; (2) the need for additional guidance on cost estimating and Reform Act implementation; (3) uncertainty about the sufficiency of service-level systems engineering and developmental testing resources; (4) limited dissemination of lessons learned; and (5) cultural barriers. Leaders of two of the offices established as a result of the Reform Act told us that even though they have implemented most of the fundamental Reform Act provisions, they have had to limit their activities to a portion of the acquisition programs in their portfolios due to resource constraints. Thus, it is doubtful that they could expand the scope of their activities to include more weapon acquisition programs at current staffing levels. For example, the DT&E office has had to be selective in its level of oversight of acquisition programs because the current staff of around 70 government and contractor personnel cannot adequately cover a portfolio of over 200 acquisition programs, according to its Deputy Assistant Secretary. The office has dropped virtually all but the largest programs from its oversight list and eliminated oversight of some major automated information systems. CAPE officials estimated that the office’s cost assessment division would need to double in size in order to meet the Reform Act’s requirements. However, soon after the Reform Act was enacted, budgetary constraints limited the expansion of the cost estimating workforce to about 25 percent of the necessary growth. 
According to CAPE officials, the office’s current cost analysis staffing is not adequate to meet its mission of improving the analytical skills of the defense cost estimating workforce, issuing policy, and providing sound and unbiased cost and schedule estimates. The office has delegated its independent cost estimating responsibility for most major automated information systems to the military services, and some guidance has yet to be issued. The SE and PARCA offices are also struggling in some respects. For example, according to its Deputy Assistant Secretary, the SE office is continuously challenged to maintain the high-caliber, qualified personnel required to provide assistance to and oversight of its portfolio of over 200 acquisition programs. Further, PARCA officials stated that the availability of government positions, particularly at the senior executive service level, continues to be a critical issue for the office. The two divisions within the PARCA office, the performance assessments division and the root cause division, do not currently have permanent government personnel at the senior executive level. Officials also stated that current proprietary information rules limit the ability of PARCA contractor personnel to handle and maintain some weapon system information, severely impeding operations. Offices within OSD have not yet issued more detailed guidance that could help institutionalize better cost estimating practices and steer program decisions related to competitive prototyping and preliminary design reviews. The CAPE office has not issued guidance for operating and support cost estimates, such as fuel and maintenance costs, which have been estimated to account for two-thirds or more of a system’s total life cycle cost. In addition, although not specifically required by the Reform Act, the CAPE office has not issued guidance for the services to use when developing Milestone A program cost estimates. 
As a result, senior leaders may not have access to realistic cost estimates prior to Milestone B for decision-making purposes. Military service officials told us they are particularly interested in getting guidance on data that should be included in the cost analysis requirements description, which forms the basis for their cost estimates at Milestones B and C. CAPE officials recognize that, while some progress has been made, they need to complete the guidance, but have not been able to dedicate resources to do so. Some officials also told us that they find the competitive prototyping and preliminary design review requirements confusing and would like guidance on how to implement these requirements. DOD policy requires the technology development strategy for major defense acquisition programs to provide for prototypes of the system or, if a system prototype is not feasible, for prototypes of critical subsystems before Milestone B approval. However, officials from the Ground Combat Vehicle program were unclear as to when and what type of prototype to use. From a broader perspective, other military officials questioned the value of competitive prototyping as a blanket requirement for all programs, especially for programs that are using mature technologies, given the cost. For example, senior acquisition officials questioned the necessity of spending $400 million on competitive prototyping for the Small Diameter Bomb II program since the program was aware of problems with one contractor’s design. However, program officials indicated that competitive prototyping enabled them to identify design issues early in development and realize a savings of $1 billion. Officials from the Ground Combat Vehicle program we spoke with also indicated that they struggled with the timing of when to hold the program’s preliminary design review and what type of knowledge was required, and said that better guidance is needed. 
The program plans to hold multiple design reviews prior to Milestone B to consider contractor and government designs of the weapon system and then hold another review after Milestone B in order to resolve differences between the government’s and selected contractor’s preliminary designs. We spoke with OSD officials to determine which office should be providing guidance or assistance to program managers on competitive prototyping and preliminary design review issues. None of the offices has official responsibility for these efforts. OSD officials stated that these are program decisions and should be discussed with their respective military service level acquisition officials. OSD officials believe that the services may lack resources in key positions that could help strengthen systems engineering and developmental testing activities on weapon acquisition programs. For example, according to the Deputy Assistant Secretary of Defense for Systems Engineering, the Navy and Air Force have reassigned the duties and responsibilities of their service-level chief engineers, thereby de-emphasizing the importance of systems engineering. The Deputy Assistant Secretary believes maintaining strong systems engineering leadership at the service level is essential for tying the systems engineering community together and promoting good systems engineering practices throughout each respective service. According to the DT&E and SE offices’ March 2012 joint annual report to the Congress, the Navy abolished its chief engineer position, and while the Air Force recently began to take steps to relocate the systems engineering function to the headquarters level, the impact of a recent reorganization on systems engineering activities is not yet known. 
In addition, the Deputy Assistant Secretary for Developmental Test and Evaluation expressed concern that the military services may not be implementing new legislation that requires each major defense acquisition program to be supported by a chief developmental tester who oversees developmental test and evaluation activities. He stated that in some cases one person is serving as the chief developmental tester across multiple programs instead of one person being dedicated specifically to each program. The Deputy Assistant Secretary is trying to determine the extent to which this practice is occurring and then plans to work with the services to get more focused leadership for each program. It is also unclear whether the services have a sufficient number of qualified personnel to conduct systems engineering and test and evaluation activities. The services planned to grow these workforces by over a combined 5,000 people between fiscal years 2009 and 2015 and had made progress in growing each of these workforces through fiscal year 2010. However, budget cuts have resulted in DOD canceling some of its weapon acquisition programs and reassessing its decision to increase the acquisition workforce. Last year, we recommended that the Secretary of Defense report the impact budget cuts were having on the military service workforce and their ability to meet weapon acquisition program needs in the areas of developmental testing and systems engineering. In the DT&E and SE offices’ March 2012 joint annual report to the Congress, the Deputy Assistant Secretary for Systems Engineering reported that the Army has reduced its systems engineering workforce growth plan as compared to the plan reported in the March 2011 joint annual report, and that contractor-to-civilian conversions have been suspended. In addition, the Deputy Assistant Secretary believes a prolonged hiring freeze in the Air Force could potentially create new experience gaps in the workforce. 
The Deputy Assistant Secretary for Developmental Test and Evaluation did not discuss the impact of budget cuts on the services’ test and evaluation workforce growth plan in the March 2012 joint annual report to the Congress, and neither office reported on whether the services had an adequate workforce to meet the needs of the current portfolio of weapon acquisition programs. DOD has not fully shared with the acquisition workforce, particularly program managers, the lessons learned from root cause analyses of programs that experienced Nunn-McCurdy cost and schedule breaches. According to the Defense Acquisition Guidebook, which provides best practices the acquisition workforce can use on programs, lessons learned are a tool that the program manager may use to help identify potential areas of risk associated with a weapon acquisition system by reviewing the experiences encountered in past programs. Lessons learned databases document what worked and what did not work in past programs, in the hope that future programs can avoid the same pitfalls. Further, if the right practices are applied, they help avoid common problems, improve quality, and reduce cost. The PARCA office has made some effort to educate program managers on how to avoid acquisition problems through classes taught at the Defense Acquisition University. However, these courses are geared toward educating new program managers and may not be reaching a wide range of program officials. Nevertheless, some officials indicated that this information would be helpful for program officials to understand and avoid problems that have affected weapon acquisition programs in the past. Other officials also stated that it would be helpful if root cause analysis assessments contained more detailed information so acquisition officials could better understand problems and apply lessons learned. 
For example, when cost estimating was determined to be a root cause of a problem, officials stated they would have found it more beneficial to know whether immature technologies or unrealistic requirements were the basis for the poor cost estimate. Perhaps the most difficult challenge the department faces in making systemic changes to the acquisition process is changing the cultural relationship between the military services, which fund and develop new weapon acquisition programs, and OSD offices, which provide advice to and oversee the programs. Senior military service officials have told us they believe they understand and can manage the risks of specific weapon acquisition programs without much assistance from OSD. On the other hand, OSD officials believe more assistance is needed, as evidenced by the high number of programs that have experienced Nunn-McCurdy breaches and poor operational testing results. For example, since it was established in 2009, the DT&E office has assessed whether 15 programs were ready to begin operational testing. The office recommended that 5 of the programs—Global Hawk Block 20/30, Standard Missile 6, Joint Tactical Radio System Handheld Manpack Small Form (HMS) Rifleman Radio, Joint Tactical Radio System HMS Manpack, and MQ-1C Gray Eagle—not proceed into operational testing. However, military service acquisition chiefs decided to allow all 5 of these programs to proceed anyway. Four of the programs—Global Hawk Block 20/30, Standard Missile 6, Joint Tactical Radio System HMS Rifleman Radio, and Joint Tactical Radio System HMS Manpack—demonstrated poor performance in operational testing in areas such as reliability, effectiveness, or suitability. Operational testing results for the MQ-1C Gray Eagle have not yet been reported. 
On the other hand, a few service officials we met with were reluctant to accept some recommendations made by OSD offices because they believed the recommendations were overly burdensome and could significantly affect weapon acquisition programs’ cost and schedule outcomes without commensurate benefit. This was the case for the KC-46 Tanker program, where program officials were concerned that additional testing recommended by the developmental test and evaluation and operational testing offices, as part of the integrated test team, could have significant contractual implications during development. In this case, officials identified additional flight test opportunities without having to renegotiate the fixed-price contract. However, the additional allotted test time is not equivalent to the 6 to 8 months the developmental testing office felt should be added. A similar situation occurred on the Ship to Shore Connector program. The Navy disagreed with a DT&E office recommendation to conduct full system testing prior to procuring additional craft during initial production. The DT&E office believed the program was high risk because the Ship to Shore Connector was a complete redesign of a previous system with no reuse of any major component (engines, gearboxes, hydraulics, command and control software). Navy officials, however, believe the program is low risk since it is an evolutionary program and has one critical technology, a fire suppression system, which has already been sufficiently demonstrated and qualified through test and evaluation. In addition, the Navy estimated that it would cost $15 million to revise the existing production schedule to accommodate the full system testing as recommended by the DT&E office. The DT&E office and the Navy reached a compromise whereby OSD would review available system test results before more craft are authorized. 
Current fiscal pressures, along with the threat of more to come, have DOD officials looking for ways to increase buying power by controlling cost and schedule overruns on weapon acquisition programs. The offices established as a result of the Reform Act, as well as its policy provisions, have helped DOD make inroads toward putting weapon acquisition programs on more solid footing. Together, the offices and policy provisions place more attention on requirements, costs, testing, and reliability as early as Milestone A. The provisions of the act, when specifically focused on newer programs, are having a positive impact on the programs and the acquisition process. They show that expert attention to the cost and achievability of capability requirements, the assumptions made for cost and funding of programs, and the amount of systems engineering knowledge brought to bear early makes programs more executable. Although senior officials we spoke with throughout the department are receptive to the broad principles of the Reform Act, it is too early to tell if the Reform Act is going to result in systemic change to DOD’s weapon acquisition process. DOD faces several challenges that must be addressed to achieve lasting change: organizational capability constraints, the need for additional cost estimating and implementation guidance, the possibility of insufficient systems engineering and developmental testing resources, limited dissemination of lessons learned, and cultural barriers between OSD and the services. Some challenges appear to be straightforward to address, such as providing guidance for estimating operating and support costs, providing additional guidance for conducting preliminary design reviews and competitive prototyping activities, and disseminating lessons learned to the broader acquisition community. However, they may require more resources, which have been difficult to obtain. 
For Reform Act policies and practices to have a systemic effect across the entire portfolio of weapon system acquisition programs, the department must also address challenges related to systems engineering and developmental testing resources and cultural barriers between OSD and the services. This begins with the services identifying key leaders at the headquarters level and within program offices to guide systems engineering and developmental testing efforts and then ensuring that there are enough trained staff to carry out these activities. OSD will need to continue monitoring the services’ efforts. It will also require an environment where the services stop proposing new weapon systems with inflexible requirements, immature technologies, and cost, schedule, and funding assumptions that are too optimistic at the start of a program. Breaking down cultural resistance to change will require more cooperation among the Under Secretary of Defense for Acquisition, Technology and Logistics, other OSD offices, and service acquisition executives, as well as continuity of leadership. Efforts by the PARCA office to identify factors that correlate with good or poor acquisition outcomes, particularly program manager tenure, will be beneficial. The services’ ability to demonstrate that the Reform Act is influencing all weapon acquisition programs, not just the biggest, will be a key indicator for determining whether the Reform Act has had a positive effect on DOD’s culture. 
We recommend that the Secretary of Defense take the following four actions to enable systemic change across the entire portfolio of weapon acquisition programs: direct the Director of Cost Assessment and Program Evaluation to issue guidance for estimating weapon acquisition program costs at Milestone A and operating and support costs throughout the acquisition life cycle by the end of fiscal year 2013 and ensure that the office prioritizes its resources accordingly to accomplish this task; designate responsibility for providing advice and guidance to program offices on competitive prototyping and preliminary design reviews to the appropriate organization within the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics and ensure that the guidance is developed. The office(s) designated would be the focal point for addressing program office issues related to the practical implementation of these Reform Act provisions, such as the type of competitive prototyping to use, the timing and benefits of holding preliminary design reviews prior to Milestone B, and whether a preliminary design review should be held after Milestone B; direct the Deputy Assistant Secretaries of Defense for Systems Engineering and Developmental Test and Evaluation to assess and include in their annual report to the Congress beginning with the report on fiscal year 2012 activities: the extent to which the offices can perform their required activities with allocated resources; the impact budget cuts are having on the military services’ total workforce (civilians, military, and contractors) and ability to meet program office needs; and progress the services have made filling leadership positions, such as chief engineers at the service level and technical leads for systems engineering and developmental testing at the program office level; direct the Director of Performance Assessments and Root Cause Analyses to make lessons learned collected during its root cause analysis 
evaluations available to the acquisition workforce and ensure that the office prioritizes its resources accordingly. DOD provided us written comments on a draft of this report. DOD concurred with two recommendations and partially concurred with two others. DOD’s comments appear in appendix III. DOD also provided technical comments, which we incorporated as appropriate in the report. DOD agreed with the intent of our first recommendation, but noted that due to resource constraints, the Cost Assessment and Program Evaluation office could not guarantee that it would be able to issue guidance for estimating major defense acquisition program costs at Milestone A and operating and support costs throughout the acquisition life cycle by the end of fiscal year 2013. We continue to believe that the Cost Assessment and Program Evaluation office should issue the guidance by the end of fiscal year 2013. However, if that is not possible from a resource standpoint, the office should commit to a date and devote the resources to meeting that date. We will continue to monitor DOD’s efforts to develop the guidance. Although DOD concurred with our second recommendation, we revised this recommendation based upon discussions with DOD officials during the agency comment period. Our revision clarified the intent of this recommendation, which is to have the Secretary of Defense designate a specific organization within the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics to provide advice and guidance on competitive prototyping and preliminary design reviews. We understand that the department has issued Reform Act implementation guidance and has incorporated aspects of competitive prototyping and preliminary design reviews in the Defense Acquisition Guidebook. Further, we recognize that program offices we visited are taking steps to implement the guidance that has already been issued. 
However, based on our discussions with senior level officials, we believe one or more offices need to be designated with the responsibility of developing additional guidance and answering program specific questions related to the practical implementation of the requirements. As noted earlier in our report, some officials questioned when to use prototyping or what type of prototyping should be used. In addition, there were questions about the timing of the preliminary design reviews. DOD partially concurred with our third recommendation. DOD noted that the type of information we recommended be assessed and reported on should be included as part of DOD’s human capital strategic planning process and as such, be reported in DOD’s annual Acquisition Workforce Strategic Plan. We agree that the impact of budget cuts on the workforce and the status of leadership positions could be addressed in the annual strategic plan. However, we continue to believe that the Deputy Assistant Secretaries for Systems Engineering and Developmental Test and Evaluation should include an assessment in their joint annual report to the Congress on the respective offices’ ability to perform activities specified in the Reform Act with available resources. DOD concurred with our fourth recommendation, which would make lessons learned from root cause analyses available to the acquisition workforce. We are sending copies of this report to the Secretary of Defense and appropriate Congressional Committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you have any questions about this report or need additional information, please contact me at (202) 512-4841 or sullivanm@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report were Cheryl Andrew, Assistant Director; Laura Greifner, Julie Hadley, Megan Porter, Rae Ann Sapp, and Marie Ahearn. 
This report examines DOD’s continued implementation of the Weapon Systems Acquisition Reform Act of 2009 (Reform Act). Specifically, we examined (1) DOD’s progress in implementing Reform Act provisions; (2) the impact the Reform Act has had on specific acquisition programs; and (3) challenges remaining in improving the weapons acquisition process. To assess DOD’s progress in implementing Reform Act provisions, we interviewed officials and analyzed documents, such as reports to the Congress and guidance issued from the Office of the Secretary of Defense (OSD) offices of the (1) Deputy Assistant Secretary of Defense for Systems Engineering (SE), (2) Deputy Assistant Secretary of Defense for Developmental Test and Evaluation (DT&E), (3) Cost Assessment and Program Evaluation (CAPE), and (4) Performance Assessments and Root Cause Analyses (PARCA) to determine the extent to which provisions have been implemented. We focused our review on the offices’ implementation of four fundamental Reform Act provisions: developing policy and guidance; approving acquisition documents; monitoring programs and conducting program assessments; and developing performance measures. In cases where provisions had not been implemented, we asked officials about the reasons for the delay and the expected time frame for completion. We also interviewed officials and analyzed documents from the office of the Assistant Secretary of Defense for Research and Engineering and the Defense Procurement and Acquisition Policy office, as well as four weapon acquisition programs that had not yet started development, to determine the progress DOD has made implementing Reform Act provisions related to preliminary design reviews, competitive prototyping, and competition. We believe these programs offer the best glimpse of how the OSD offices and Reform Act policies are influencing acquisition strategies. 
The weapon acquisition programs we chose for this analysis were part of a larger case study review that is described below. To determine the impact the Reform Act has had on specific weapon acquisition programs, we selected 11 weapon system programs to use as case studies. For each program, we reviewed relevant program documentation such as test and evaluation master plans, assessments of operational test readiness, systems engineering plans, program support reviews, root cause analyses, analysis of alternatives reports, and cost estimates, as applicable. We also interviewed appropriate program officials and officials from the OSD offices for SE, DT&E, and CAPE to obtain their perspectives about (1) the level of interaction between the programs and OSD offices; (2) changes made to program acquisition strategies as a result of interactions with the OSD offices; and (3) benefits and challenges with implementing Reform Act provisions on each of the programs. We also reviewed the PARCA office’s root cause analysis documentation for programs that incurred Nunn-McCurdy cost or schedule breaches. We selected our case studies based on input from officials in the OSD offices for SE, DT&E, CAPE, PARCA, and operational test and evaluation. We also discussed possible case studies with GAO employees who monitor and report on weapon acquisition programs on an annual basis. The programs we selected for review represent a variety of platforms, including sea vessels, manned and unmanned aircraft, and land systems. Specifically, we examined 11 programs at various stages of the acquisition process. Four programs had not yet passed Milestone B, development start, at the time we began our review. The remaining seven programs had completed their Milestone B review and were in development at the time of our case study selection. 
Of the seven programs, three have breached Nunn-McCurdy cost thresholds since the act was passed and have had to satisfy the act’s new certification requirements. The other programs had significant interaction with one or more of the OSD offices established by the Reform Act. A complete list of programs is provided below. While our sample of 11 case studies allowed us to learn about the impact the Reform Act offices have had on DOD acquisitions, it was designed to provide anecdotal information, not findings that would be representative of all the department’s weapon acquisition programs. To determine the challenges remaining in improving defense acquisitions, we relied on information we collected and analyzed during our case study review of 11 weapon acquisition programs. We also solicited the opinions of the Under Secretary of Defense for Acquisition, Technology and Logistics; other senior-level officials in the Office of the Secretary of Defense, including the leaders of each of the offices created as a result of the Reform Act; and the military services’ senior acquisition executives. We conducted this performance audit from January 2012 to December 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Appendix II: Progress of Reform Act Offices in Implementing Weapon System Acquisition Reform Act Provisions

Systems Engineering (SE) office
Develop policy and guidance. Complete: Issued a reliability, availability, and maintainability Directive-Type Memorandum, a development planning Directive-Type Memorandum, and a DOD Instruction for DASD (Systems Engineering); participated in JCIDS revisions; developed guidance for incorporating systems engineering into development contracts; streamlined the Systems Engineering Plan and Program Protection Plan; and released an update to the Defense Acquisition Guidebook, chapter 4. Continues to refine policies and guidance as necessary.
Approve acquisition documents. Completing on annual basis: Approved 52 Systems Engineering Plans since 2009, including 15 in fiscal year 2011.
Monitor programs / conduct assessments. Completing on annual basis: Reviews portfolio of 234 programs. In fiscal year 2011, participated in 73 overarching integrated product team meetings and 6 peer reviews of acquisition contracts, and conducted 15 Program Support Reviews.
Develop performance measures. Complete: Developed a set of time-based metrics to assess each program’s ability to execute its systems engineering plan and address risks the office had identified in prior reviews. The metrics measure program cost, schedule, staffing, reliability, availability and maintainability, software, integration, performance, and manufacturing, and are to be incorporated into each program’s systems engineering plan and evaluated at various points in the development process.

Developmental Test and Evaluation (DT&E) office
Develop policy and guidance. Complete: Updated the guide for incorporating test and evaluation into acquisition contracts. Championed updates to the DOD Instruction assigning responsibilities and authorities to the Developmental Test and Evaluation office, which is in the process of being updated. Updated guidance to include reliability factors in the Test and Evaluation Master Plan. Continues to refine policies and guidance as necessary.
Approve acquisition documents. Completing on annual basis: Reviewed and approved 186 Test and Evaluation Master Plans since 2009, including 44 in fiscal year 2011.
Monitor programs / conduct assessments. Completing on annual basis: Reviews portfolio of nearly 250 programs. In fiscal year 2011, participated in 22 defense acquisition board meetings and 59 overarching integrated product team meetings. The office has also conducted 16 Assessment of Operational Test Readiness reviews since 2009.
Develop performance measures. In process: Piloted performance measures on two programs. The measures were then updated and are being applied to over 40 programs that were selected for reporting in the fiscal year 2012 joint annual report. The assessments are being used to support the write-up of the program engagement section.

Cost Assessment and Program Evaluation (CAPE) office
Develop policy and guidance. In process: Issued first policy document in May 2012, which is the basis for additional policy documents. Updating its Operating and Support Cost Estimating Guidebook, which will address the Reform Act requirement for DOD to issue guidance related to full consideration of life cycle management and sustainability costs in major defense acquisition programs.
Approve acquisition documents. Not applicable: The Reform Act does not require that the office approve acquisition documents.
Monitor programs / conduct assessments. Completing on annual basis: Conducted independent cost assessments for Milestone A and B certification on 30 future and current major defense acquisition programs since 2009, including 8 in fiscal year 2011. The office conducted 3 Milestone C and 3 Nunn-McCurdy certification reviews in fiscal year 2011.
Develop performance measures. Not applicable: The Reform Act does not require that the office develop performance measures.

Performance Assessments and Root Cause Analyses (PARCA) office
Develop policy and guidance. In process: Developing guidance to assist offices in conducting root cause analyses. The guidance for conducting performance assessments is expected to be released in early fiscal year 2013.
Approve acquisition documents. Not applicable: The Reform Act does not require that the office approve acquisition documents.
Monitor programs / conduct assessments. Completing on annual basis: Completed 14 Root Cause Analyses for programs that have undergone a Nunn-McCurdy breach or were requested by OSD and has completed 26 semi-annual follow-up reports on these programs. Provides OSD with the execution status of DOD’s portfolio of acquisition programs through the Defense Acquisition Executive Summary process.
Develop performance measures. In process: Uses Defense Acquisition Executive Summary information to identify cost performance, schedule, funding, and technical performance issues on Major Defense Acquisition Programs. Continuing to develop performance measures.

For the past 3 years, DOD has been implementing the Reform Act requirements, which are aimed at helping weapon acquisition programs establish a solid foundation from the start and thereby prevent cost growth, helping the defense dollar go further. This is the third in a series of GAO reports on the Reform Act. GAO was asked to determine (1) DOD’s progress in implementing Reform Act provisions; (2) the impact the Reform Act has had on specific acquisition programs; and (3) challenges remaining. To do this, GAO analyzed documents and interviewed officials from the four OSD offices created as a result of the Reform Act, other DOD offices, the military services, and 11 weapon acquisition programs chosen as case studies. Case study programs were selected based on their development status and interaction with the four OSD offices. Results cannot be generalized to all DOD weapon acquisition programs. The Department of Defense (DOD) has taken steps to implement fundamental Weapon Systems Acquisition Reform Act of 2009 (Reform Act) provisions, including those for approving acquisition strategies and better monitoring weapon acquisition programs. DOD is also continuing to take additional steps to strengthen policies and capabilities. Some provisions, such as issuing guidance for estimating operating and support costs, are still being implemented. 
GAO's analysis of 11 weapon acquisition programs showed the Reform Act has reinforced early attention to requirements, cost and schedule estimates, testing, and reliability. For example, prior to starting development, an independent review team raised concerns about the Ground Combat Vehicle program's many requirements and the risks associated with its 7-year schedule. Subsequently, the Army reduced the number of requirements by about 25 percent and prioritized them, giving contractors more flexibility in designing solutions. In addition, the developmental test and evaluation office--resulting from the Reform Act--used test results to help the Joint Light Tactical Vehicle program develop a more realistic reliability goal and a better approach to reach it. While DOD has taken steps to implement most of the fundamental Reform Act provisions, some key efforts to date have been primarily focused on DOD's largest weapon acquisition programs. DOD faces five challenges--organizational capability constraints, the need for additional guidance on cost estimating and Reform Act implementation, the uncertainty about the sufficiency of systems engineering and developmental testing resources, limited dissemination of lessons learned, and cultural barriers between the Office of the Secretary of Defense (OSD) and the military services--that limit its ability to broaden the Reform Act's influence to more programs. Service officials believe additional guidance is needed to improve their cost estimates and other implementation efforts. They also believe that lessons learned from programs that experience significant cost and schedule increases should be shared more broadly within the acquisition community. These challenges seem straightforward to address, but they may require more resources, which have been difficult to obtain. 
Ensuring the services have key leaders and staff dedicated to systems engineering and developmental testing activities, such as chief engineers at the service level and technical leads on programs, as well as breaking down cultural barriers, will be more difficult. These issues will require continued monitoring and attention by the Under Secretary for Acquisition, Technology and Logistics, the service acquisition executives, and the offices established as a result of the Reform Act. GAO recommends DOD develop additional cost estimating and Reform Act implementation guidance; make lessons learned available to the acquisition community; and assess the adequacy of the military services' systems engineering and developmental testing workforce. DOD generally concurred with the recommendations. GAO clarified one recommendation to make clear that DOD needs to designate an office or offices within the Acquisition, Technology and Logistics organization to provide practical Reform Act implementation guidance to program offices.
Under the Federal Food, Drug, and Cosmetic Act, FDA is responsible for ensuring that medical devices are reasonably safe and effective before they go to market (premarket) and that marketed device products remain safe (postmarket). Two FDA centers, CDRH and CBER, are responsible for reviewing applications to market medical devices. CDRH reviews applications for the majority of these devices, such as artificial hearts, dialysis machines, and radiological devices. CBER reviews applications for devices used in the testing and manufacture of biological products, including diagnostic tests intended to screen blood donors (such as for the human immunodeficiency virus), as well as therapeutic devices used in cell and gene therapies. FDA also inspects manufacturers’ establishments to assess compliance with good manufacturing practices (GMP). During these inspections, FDA investigators examine manufacturing facilities, records of manufacturing processes, and corrective action programs. Nine types of applications for medical devices and biological products are subject to the MDUFMA performance goals established by the Secretary of Health and Human Services for fiscal years 2005 or 2006: Original Premarket Approval (PMA) applications are generally required when the device is new or when the risks associated with the device are considerable (as would be the case if the device is to be implanted in the body for life-supporting purposes). Expedited PMAs are used when FDA has granted priority status to an application to market a medical device because it is intended to treat or diagnose a life-threatening or irreversibly debilitating disease or condition and to address an unmet medical need. Premarket Reports are applications required for high-risk devices originally approved for a single use (that is, use on a single patient during a single procedure) that a manufacturer has reprocessed for additional use. 
Premarket Notifications, or 510(k)s, are applications used when the intent is to market a type of device that may be substantially equivalent to a legally marketed device that was not subject to premarket approval. Panel-Track Supplements are applications used to supplement approved PMAs or Premarket Reports. These supplements typically request approval of a significant change in the design or performance of a device, or for a new purpose for using a device. 180-Day PMA Supplements are also used to supplement approved PMAs or Premarket Reports. These supplements typically request approval of a significant change in aspects of a device, such as its design, specifications, or labeling, when demonstration of reasonable assurance of safety and effectiveness either does not require new clinical data or requires only limited clinical data. Biologics license applications (BLA) request permission to introduce and license biological products into interstate commerce. There are two types of BLAs that are tied to MDUFMA performance goals. Priority BLAs are for products that would, if approved, involve a significant improvement in the safety or effectiveness of the treatment, diagnosis, or prevention of a serious or life-threatening disease. Nonpriority BLAs are considered standard BLAs. BLA Supplements are used to supplement approved BLAs by requesting approval of a change to a licensed biological product. When the change has the substantial potential to affect the safety or effectiveness of the product, FDA approval is required prior to product distribution. There are MDUFMA performance goals linked to three types of BLA supplements: BLA manufacturing supplements that require prior approval and two types of BLA efficacy supplements. Manufacturing supplements that require prior approval address proposed changes in the manufacture of the biologic and generally do not require submission of substantive clinical data. 
Efficacy supplements include both standard and priority efficacy supplements and require submission of substantive clinical data. BLA Resubmissions and BLA Efficacy Supplement Resubmissions are used to respond to a letter from FDA indicating that the information included in a BLA or BLA Efficacy Supplement was deficient. FDA classifies these resubmissions into two groups according to the type of information they provide. For Class 1 resubmissions, the new information may include matters related to product labeling, safety updates, and other minor clarifying information. For Class 2 resubmissions, the new information could warrant presentation to an advisory committee or a reinspection of the manufacturer’s device establishment. Each of the 2005 and 2006 MDUFMA performance goals is linked to actions FDA takes under one of three processes for reviewing medical device applications: the PMA review process, the 510(k) review process, and the BLA review process. Under the PMA review process, FDA reviews applications for new devices or those for which risks associated with the device are considerable. Applications reviewed under this process include Original PMAs, Expedited PMAs, Premarket Reports, Panel-Track Supplements, and 180-Day PMA Supplements. After an initial screening of an application and determination that the review should proceed, FDA multidisciplinary staff conduct a scientific review of the application. (See fig. 1.) If FDA determines that it needs significant additional information to complete its scientific review, FDA issues a “major deficiency letter” to the manufacturer identifying the information that is required. The manufacturer can respond to FDA’s request by submitting an amendment to the original application. FDA then proceeds with its review of the amended application. FDA can issue additional major deficiency letters and review additional amendments until FDA determines that it has sufficient information to make a decision. 
As part of its review, FDA may refer applications to an external advisory committee for evaluation. FDA takes this step when a device is the first of its kind or when the agency believes it would be useful to have independent expertise and technical assistance to properly evaluate the safety and effectiveness of the device. For applications referred to an advisory committee, the committee provides input to FDA on the safety and effectiveness of the devices. Taking the committee’s input into consideration, FDA then makes a decision. FDA may make one of five decisions. FDA may (1) issue an order approving the application, which allows the manufacturer to begin marketing the device; (2) send the manufacturer an “approvable” letter pending a GMP inspection, which indicates that FDA should be able to approve the device after the agency determines that the manufacturer’s device establishment is in compliance with GMP requirements; (3) send the manufacturer an approvable letter indicating that the agency should be able to approve the device if the manufacturer can make minor corrections or clarifications to the application; (4) issue a “not approvable” letter informing the manufacturer that FDA does not believe that the application can be approved because the data provided by the manufacturer do not demonstrate that the device is reasonably safe and effective; or (5) issue an order denying approval of the application, which informs the manufacturer that the agency has completed its scientific review, identified major safety or effectiveness problems, and decided not to approve the application. Two of these possible decisions result in issuance of letters indicating that an application has informational deficiencies—approvable letters requesting minor corrections or clarifications and not approvable letters. The manufacturer can respond to these letters by submitting an amendment to the original application. FDA then reviews the amendment. 
FDA can issue additional letters indicating that information is deficient and review additional amendments until FDA determines that it has sufficient information to determine whether to approve or deny the application. For example, if FDA determines that a manufacturer’s amendment to an approvable letter requesting minor corrections or clarifications does not address all of FDA’s questions, then FDA can issue another approvable letter pending minor corrections or clarifications or a not approvable letter. Under the 510(k) review process, FDA reviews applications to market a device that may be substantially equivalent to a legally marketed device that was not subject to premarket approval (see fig. 2). FDA staff conduct a scientific review of the application. When a 510(k) application lacks information necessary for FDA to reach a decision, the agency may issue an “additional information” letter that indicates that the information is insufficient. The manufacturer may then submit additional information. Once FDA has obtained sufficient information from the manufacturer, FDA may make one of three decisions: FDA may decide that (1) the device is substantially equivalent and therefore may be marketed, (2) the device is not substantially equivalent and may not be marketed, or (3) a 510(k) application was not required because the product is not regulated as a device or the device is exempt from the requirements for premarket notification. Under the BLA review process, FDA determines whether to approve licenses for biological products (see fig. 3). Applications reviewed under this process include BLAs, BLA Supplements, BLA Resubmissions, and BLA Supplement Resubmissions. After an initial screening of an application and determination that the review should proceed, staff conduct a multidisciplinary scientific review of the application. As part of its review, FDA may refer applications to an external advisory committee. 
After reviewing the application and taking into consideration any input from an external advisory committee, FDA may make one of two decisions. FDA may issue (1) an approval letter or (2) a “complete response” letter, which informs the manufacturer of deficiencies in the information provided in the application. The manufacturer can provide the information specified in a “complete response” letter in a BLA Resubmission or BLA Supplement Resubmission. The MDUFMA performance goals specify a length of time for taking an action during the review process, which can include making a decision. The goals designate a certain percentage of these actions that must occur within the specified period for FDA to meet the performance goals. To assess its performance against the MDUFMA performance goals, FDA measures the time the agency takes to complete certain actions and make decisions—but not the time it takes a manufacturer to respond to a letter from FDA. The data for measuring FDA’s performance against a specific fiscal year’s MDUFMA performance goals are based on all the applications the agency received in that year, known as a cohort, and are not complete until all applicable actions have been taken. As a result, data are preliminary until FDA has completed all actions tied to the goal for all applications in a cohort—a process that, for PMAs, can take up to 3 or 4 years. For example, one performance goal established for fiscal year 2005 is tied to amendments to PMAs that are submitted in response to major deficiency or not approvable letters. Data on FDA’s performance on this goal will not be complete until after FDA has issued all major deficiency and not approvable letters it decides to issue for applications received in fiscal year 2005 and then either (1) received, reviewed, and acted on all amendments submitted in response or (2) determined that manufacturers have withdrawn their applications. 
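The measurement convention described above (only FDA-held time counts toward the goal, and a cohort's data remain preliminary until every applicable action on every application is complete) can be sketched as follows. This is an illustrative model only; the class names and fields are ours, not FDA's actual tracking system.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewInterval:
    """One span of elapsed days, attributed either to FDA or to the manufacturer."""
    days: int
    on_fda_clock: bool  # False while FDA is awaiting a manufacturer's response

@dataclass
class Application:
    intervals: list = field(default_factory=list)
    all_actions_complete: bool = False  # True once every applicable action is taken

def fda_review_days(app: Application) -> int:
    """Count only the time FDA held the application; manufacturer
    response time is excluded, per the convention described above."""
    return sum(i.days for i in app.intervals if i.on_fda_clock)

def cohort_is_final(cohort: list) -> bool:
    """A fiscal-year cohort's performance data stay preliminary until
    all applicable actions have been taken on every application in it."""
    return all(app.all_actions_complete for app in cohort)

# Illustrative case: 100 days of FDA review, 60 days awaiting an amendment,
# then 30 more days of FDA review.
app = Application(
    intervals=[ReviewInterval(100, True), ReviewInterval(60, False), ReviewInterval(30, True)],
    all_actions_complete=True,
)
print(fda_review_days(app))   # 130, since only the FDA-held time counts
print(cohort_is_final([app])) # True
```

This separation of clocks is why preliminary results can shift for years: each manufacturer amendment restarts an FDA-held interval that must be measured before the cohort is final.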
For fiscal year 2005, FDA is to meet 20 performance goals and for fiscal year 2006 FDA is to meet an additional 6 performance goals, for a total of 26. (See table 1.) The percentage of applications for which the action must be taken within the specified time frame is higher in fiscal year 2006 than in fiscal year 2005 for 16 of the performance goals that are applicable for both years. The limited data available indicate that FDA has been meeting some MDUFMA performance goals established for fiscal year 2005. It is uncertain, however, whether FDA will ultimately meet the fiscal year 2005 performance goals once reviews for all the applications are complete. We found that FDA met most of the MDUFMA fiscal year 2005 performance goals for which there were sufficiently complete data to measure the agency’s performance. When FDA did not have sufficiently complete data to evaluate performance against a MDUFMA performance goal, we reviewed preliminary data and found that FDA took actions tied to most of these other fiscal year 2005 goals within specified time frames. Data from the first 6 months of fiscal year 2005 are not sufficiently complete to evaluate FDA’s performance against MDUFMA performance goals because some applications are pending review and because manufacturers are likely to submit additional applications and amendments for review. Our analysis shows that FDA met most of the MDUFMA 2005 performance goals for which there were sufficiently complete data to measure performance (see fig. 4). These data were from applications that FDA received in fiscal years 2003 and 2004 and were used to measure the agency’s performance against about half of the performance goals established for fiscal year 2005. As of March 31, 2005, FDA had sufficiently complete data from applications received in fiscal year 2003 to measure performance against 11 of the 20 goals established for fiscal year 2005. FDA met 9 of those 11 goals and did not meet 2 of them. 
For applications received in fiscal year 2004, FDA had sufficiently complete data to measure performance against 10 of the 20 goals. It met 9 and did not meet 1 of these goals. For example, one of FDA’s 2005 performance goals requires the agency to issue a first major deficiency letter within 150 days for 75 percent of PMAs, Panel-Track Supplements, and Premarket Reports that the agency received during the fiscal year and found to be incomplete. For applications in the fiscal year 2003 and 2004 cohorts, respectively, FDA issued 22 of 26 (85 percent) and 23 of 28 (82 percent) first major deficiency letters within 150 days, thus meeting the goal. FDA had complete data on its performance against this performance goal from both the fiscal year 2003 and 2004 cohorts—there were no other applications that FDA received during these years for which a first major deficiency letter can be issued. Figure 4 also shows that FDA had sufficiently complete data on applications received in both fiscal years 2003 and 2004 on 2 performance goals established for fiscal year 2005 that are tied to 510(k) applications, the type of MDUFMA-related medical device application that FDA receives most frequently. These data indicate that FDA met 1 of the 2 goals with applications received in fiscal year 2003 and met both goals for applications received in fiscal year 2004. Sufficiently complete data were also available on applications received in fiscal years 2003 and 2004 to evaluate FDA’s performance on 3 of the 2005 performance goals tied to 180-Day PMA Supplements, the type of MDUFMA-related application that FDA receives second most frequently. FDA met 2 of these 3 goals on applications received in 2003 and met the 3 goals on applications received in 2004. As figure 4 shows, FDA’s data from applications received in fiscal years 2003 and 2004 and the first 6 months of fiscal year 2005 are not sufficiently complete to evaluate the agency’s performance against some fiscal year 2005 goals. 
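The pass/fail arithmetic behind figures like these (22 of 26 letters within 150 days against a 75 percent goal) is straightforward; a minimal sketch, with a function name of our own choosing:

```python
def goal_met(actions_within_goal: int, total_actions: int, required_pct: float) -> bool:
    """A performance goal is met when the share of actions taken within the
    specified time frame reaches the required percentage for the cohort."""
    if total_actions == 0:
        return True  # no applicable actions in the cohort
    return 100.0 * actions_within_goal / total_actions >= required_pct

# Figures from the report: first major deficiency letters issued within
# 150 days, measured against the 75 percent goal.
print(goal_met(22, 26, 75.0))  # True: FY2003 cohort, about 85 percent
print(goal_met(23, 28, 75.0))  # True: FY2004 cohort, about 82 percent
```

Note that both the numerator and the denominator can still grow while a cohort is open, which is why these percentages remain preliminary until all actions are complete.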
The preliminary data available on these goals suggest that when FDA took actions tied to fiscal year 2005 performance goals, it generally did so within specified time frames. As of March 31, 2005, FDA had preliminary data from applications received in fiscal year 2003 on 7 of the 9 performance goals for fiscal year 2005 for which data were not sufficiently complete to evaluate performance. FDA took actions tied to 5 of the 7 goals within the specified time frames. For applications received in fiscal year 2004, FDA had preliminary data for 7 of the 10 performance goals for which data were not sufficiently complete, and the agency took actions tied to these 7 goals within the specified time frames. FDA also had preliminary performance data from applications received in the first 6 months of fiscal year 2005 for 11 of the 20 goals. FDA took actions tied to these 11 goals within the specified time frames. These preliminary results could change as FDA completes its review of pending applications and additional applications or amendments. For example, one of FDA’s 2005 performance goals for expedited PMAs was to take action within 170 days for 70 percent of amendments containing complete responses to a major deficiency or not approvable letter. As of March 31, 2005, FDA had taken action within 170 days on two of two such amendments (100 percent) to applications in the fiscal year 2003 cohort and four of five (80 percent) in the fiscal year 2004 cohort and had received no such amendments for applications in the fiscal year 2005 cohort. These preliminary performance results could change, however, if manufacturers submit additional amendments to applications in any of the three cohorts. Based on the limited data that were available as of March 31, 2005, it is unclear whether or to what extent FDA will meet the fiscal year 2005 MDUFMA performance goals because the agency’s performance could change as the agency completes its review of applications. 
For example, some applications are pending review because FDA has not reached a decision about the application or because the manufacturer has not responded to a letter from FDA indicating that the application included insufficient information for FDA to complete its review. Our analysis shows that as of March 31, 2005, about half of the applications FDA had received during the first 6 months of fiscal year 2005—831 of 1,792—were pending. (See table 2, which also shows the number of pending applications from the fiscal year 2003 and 2004 cohorts.) The percentage of pending applications varied by application type. For example, for the fiscal year 2005 cohort, 22—95.7 percent—of 23 PMAs and Panel-Track Supplements were pending further action, while 4—33.3 percent—of 12 BLA Supplements were pending. As previously noted, FDA’s preliminary performance results could also change if manufacturers submit additional applications or amendments, as is likely. For example, FDA received 1,703 510(k) applications during the first 6 months of fiscal year 2005, about half the number it received in each of the 2 preceding full fiscal years (3,805 and 3,432 for fiscal years 2003 and 2004, respectively). These data suggest that as of March 31, 2005, FDA had received about half of the 510(k) applications that it may receive in fiscal year 2005. Similarly, performance results for applications FDA received in fiscal years 2003, 2004, and 2005 could change as manufacturers respond to requests for additional information or submit amendments to their applications. For example, as of March 31, 2005, FDA had issued letters requesting additional information for 659 of the 510(k) applications it received during the first 6 months of fiscal year 2005. It is likely that FDA will receive responses to these requests from manufacturers. The limited data available on FDA’s performance suggest that FDA is likely to meet some of its fiscal year 2006 performance goals. 
Our analysis of FDA’s performance for applications received in fiscal years 2003 and 2004 shows that FDA has been meeting most of the MDUFMA 2006 performance goals for which it had sufficiently complete data. We also reviewed FDA’s preliminary data from applications received in fiscal years 2003 and 2004 and the first 6 months of fiscal year 2005, and found that FDA took actions tied to most of the remaining fiscal year 2006 goals within specified time frames. Preliminary performance results could change as the agency completes actions for applications received in fiscal years 2003, 2004, and 2005 and FDA’s performance could change as it receives applications in fiscal year 2006. FDA has taken several steps to help meet the MDUFMA performance goals. Our analysis of FDA’s past performance shows that FDA met most, but not all, of the MDUFMA 2006 performance goals for which it had sufficiently complete data. (See fig. 5.) As of March 31, 2005, FDA had sufficiently complete data from applications received in fiscal year 2003 to measure performance against 14 of 26 goals established for fiscal year 2006. FDA met 12 of those 14 goals. FDA also had sufficiently complete data from applications received in fiscal year 2004 to measure performance against 12 performance goals and met 9 of those 12 goals. Figure 5 also shows that FDA had sufficiently complete data from both fiscal years 2003 and 2004 on 2 performance goals established for fiscal year 2006 that are tied to 510(k) applications, the type of MDUFMA-related medical device application that FDA receives most frequently. These data indicate that FDA met 1 of the 2 goals for applications received in both fiscal years 2003 and 2004. Sufficiently complete data were available for applications received in fiscal years 2003 and 2004 to evaluate performance on 3 of the 2006 performance goals tied to 180-Day PMA Supplements, the type of MDUFMA-related application that FDA receives second most frequently. 
FDA met 2 of these 3 goals on applications received in both fiscal years. Figure 5 also shows that preliminary performance data from applications received in fiscal years 2003 and 2004 and the first 6 months of fiscal year 2005 indicate that FDA took actions tied to most of the remaining fiscal year 2006 performance goals within specified time frames. Of 12 performance goals for which data on applications received in fiscal year 2003 were not sufficiently complete to evaluate performance, FDA had preliminary data on 7. FDA took actions tied to 5 of these 7 goals within the specified time frames. Of 14 performance goals for which FDA did not have sufficiently complete data from applications received in fiscal year 2004, FDA had preliminary data for 8 and took actions tied to these 8 goals within the specified time frames. FDA had preliminary data from applications received in the first 6 months of fiscal year 2005 for 13 of the 26 goals established for fiscal year 2006. FDA took actions tied to these 13 goals within the established time frames. These performance results could change as the agency completes actions for applications received in fiscal years 2003, 2004, and 2005 and FDA’s performance could change as it receives applications in fiscal year 2006. In general, when sufficient data indicated that FDA’s performance results for applications received in a fiscal year met the performance goal established for fiscal year 2005, then the agency also met the performance goal established for fiscal year 2006, even when the 2006 goal required FDA to take action within specified time frames on a greater percentage of applications. There were two exceptions that involved issuing not approvable letters as a first action on 180-Day PMA Supplements received in fiscal year 2004 and issuing additional information letters as a first action for 510(k)s received in fiscal year 2004. 
In each of these cases, FDA met the performance goal established for fiscal year 2005, but did not meet the goal established for fiscal year 2006. To help meet its MDUFMA performance goals, FDA has taken several steps consistent with those outlined by the Secretary of Health and Human Services in his November 2002 letter establishing those goals. For example, FDA issued additional guidance to manufacturers on topics related to medical device applications in fiscal years 2004 and 2005. To help implement MDUFMA, CDRH hired 55 new staff (such as medical officers, scientists, and engineers) in fiscal year 2004 and 44 new staff in fiscal year 2005. According to FDA, prior to the enactment of the Medical Device User Fee Stabilization Act of 2005, there was uncertainty about the continuation of the MDUFMA program, and as a result, most of these new employees were hired on a temporary basis. Moreover, CDRH instituted a hiring freeze for MDUFMA-related positions in May 2005. FDA also said that as a consequence of hiring fewer personnel than planned to perform tasks associated with the MDUFMA program, implementation of improvements FDA intended to make was constrained. For example, fewer new guidance documents were drafted, fewer existing guidance documents were updated, and the modernization of data systems proceeded at a slower pace than FDA intended. An FDA spokesman told us that CDRH may lift its freeze on hiring new staff by the start of fiscal year 2006. In written comments on a draft of this report, FDA concurred with our findings. FDA also provided clarifying technical comments, which we incorporated. FDA's comments are reprinted in appendix I. We are sending copies of this report to the Secretary of Health and Human Services and the Acting Commissioner of FDA, appropriate congressional committees, and other interested parties. We will also make copies available to others on request.
In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have questions about this report, please contact me at (202) 512-7119 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the contact named above, James McClyde, Assistant Director, and Kristen Joan Anderson made key contributions to this report. | The Food and Drug Administration (FDA) reviews applications from manufacturers that wish to market medical devices in the United States. To facilitate prompt approval of new devices and clearance of devices that are substantially equivalent to those legally on the market, the Congress passed the Medical Device User Fee and Modernization Act of 2002 (MDUFMA). The act authorizes FDA to collect user fees from manufacturers and, in return, requires FDA to meet performance goals tied to the agency's review process. These goals are linked to certain actions FDA may take during the application review process. The goals specify lengths of time for taking these actions and the percentage of actions the agency is to take within specified time frames. MDUFMA requires GAO to report on whether FDA is meeting performance goals established by the Secretary of Health and Human Services for fiscal year 2005 and whether FDA is likely to meet the goals established for fiscal year 2006. GAO analyzed data provided by FDA that are based on actions taken on applications FDA received from October 1, 2002, through March 31, 2005. GAO used FDA's performance on applications received in fiscal years 2003 and 2004 as an indicator of the agency's likely performance. Limited available data indicate that FDA has been meeting some MDUFMA performance goals established for fiscal year 2005. It is uncertain, however, whether FDA will meet all of the goals. 
FDA met most of the MDUFMA 2005 performance goals for which data were sufficiently complete to measure the agency's performance. As of March 31, 2005, FDA had sufficiently complete data from applications received in fiscal year 2003 to measure performance against 11 of the 20 goals established for fiscal year 2005. FDA met 9 of those 11 goals. For applications received in fiscal year 2004, FDA had sufficiently complete data to measure performance against 10 goals and met 9 of them. When FDA did not have sufficiently complete data to evaluate performance, GAO reviewed preliminary data from applications received in fiscal years 2003, 2004, and 2005. These data suggest that FDA has taken actions tied to many of the fiscal year 2005 goals within specified time frames. These data are preliminary because some applications from each year were pending within the review process and FDA could receive and act on additional applications or amendments to applications. For example, as of March 31, 2005, about half of the applications FDA had received in fiscal year 2005 were pending action by FDA or responses from manufacturers. Because FDA's performance against the MDUFMA performance goals is based on the percentages of actions the agency takes on applications within required time frames, FDA's performance results could change as the agency completes actions on all applications and amendments for which the performance goals apply. The limited data available on FDA's performance suggest that FDA is likely to meet some fiscal year 2006 performance goals. GAO's analysis of FDA's past performance shows that FDA met most of the MDUFMA 2006 performance goals for which it had sufficiently complete data to evaluate its performance. As of March 31, 2005, FDA had sufficiently complete data from applications received in fiscal year 2003 to measure performance against 14 of 26 goals established for fiscal year 2006. FDA met 12 of those 14 goals.
FDA also had sufficiently complete data from applications received in fiscal year 2004 to measure performance against 12 performance goals and met 9 of those 12 goals. GAO also reviewed preliminary data from applications FDA received in fiscal years 2003, 2004, and 2005 and found that FDA took actions tied to many of the fiscal year 2006 goals within specified time frames. Most of these results are preliminary, however, and FDA's performance could change as the agency completes actions for applications received in fiscal years 2003, 2004, and 2005 and receives applications in fiscal year 2006. FDA concurred with GAO's findings. |
Every satellite has a bus and payload. The bus is the body of the satellite. It carries the payload and is composed of a number of subsystems, such as the power supply, antennas, telemetry and tracking command, and mechanical and thermal control subsystems. The bus also provides electrical power, stability, and propulsion for the entire satellite. The payload—carried by the bus—includes all the devices a satellite needs to perform its mission, which differs for every type of satellite. For example, the payload for a weather satellite could include cameras to take pictures of cloud formations, while the payload for a communications satellite may include transponders to relay data such as television or telephone signals. Monitoring and commanding of the satellite payload is done to collect data or provide a capability to the warfighter. Satellite control operations are used to manage the bus and are the focus of this report. Satellite control operations essentially consist of (1) tracking—determining the satellite's location based on position and range measurements so that it can receive commands from the ground, (2) telemetry—collecting health and status reports that are transmitted from the satellite to the ground, and (3) commanding—transmitting signals from the ground to the satellite to control satellite subsystems. Tracking, telemetry, and commanding (TT&C) are accomplished by a network of ground stations, ground antennas, and communication links between the centers, antennas, and satellites, strategically located around the world. TT&C is essentially the same for any given satellite, based on its orbit, regardless of its mission. Payload control involves operation and control of the payload on the satellite, or managing the operations of a satellite's mission equipment. The ground segment of satellite control is made up of various ground control centers, ground stations, and user elements.
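The three TT&C functions described above can be sketched as a simple conceptual abstraction. This is only an illustrative sketch; the class names, field names, and command string are invented for this example and are not drawn from any actual ground-system software.

```python
from dataclasses import dataclass, field

@dataclass
class Satellite:
    """Hypothetical satellite state as seen from the ground segment."""
    name: str
    position: tuple = (0.0, 0.0, 0.0)          # simplified orbital position fix
    telemetry: dict = field(default_factory=dict)
    command_log: list = field(default_factory=list)

class GroundStation:
    """Performs the three TT&C functions for one contact, or "pass"."""

    def track(self, sat: Satellite) -> tuple:
        # Tracking: determine the satellite's location from position and
        # range measurements so commands can be directed to it.
        return sat.position

    def collect_telemetry(self, sat: Satellite) -> dict:
        # Telemetry: receive health-and-status reports downlinked from the bus.
        return dict(sat.telemetry)

    def command(self, sat: Satellite, instruction: str) -> None:
        # Commanding: uplink a signal that controls a bus subsystem.
        sat.command_log.append(instruction)

sat = Satellite("DemoSat", telemetry={"battery_v": 28.1, "temp_c": -10.2})
station = GroundStation()
fix = station.track(sat)
health = station.collect_telemetry(sat)
station.command(sat, "ADJUST_THERMAL_LOUVER")
```

As the report notes, these three functions are essentially the same for any satellite regardless of mission, which is why a single shared ground station abstraction like this one can, in principle, serve many different spacecraft.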
There are two kinds of satellite ground stations: control stations and tracking stations. Satellite control ground stations perform the TT&C functions to ensure that satellites remain in the proper orbit and are performing as designed, and are the stations that manage the bus. The tracking stations enable contact with the satellites through communication uplinks and downlinks. Ground stations can be tied together to form two types of networks: shared and dedicated. A shared network can support several satellite systems, and is able to share its antennas and software among many different kinds of satellites. A shared network generally does not handle the payload; it is primarily used for bus control and for controlling satellites that are contacted only intermittently, using relatively low data rates. However, a shared network can also support functions such as launch and early orbit tracking of satellites, and telemetry and commanding of satellites that are experiencing anomalies. Examples of DOD satellite systems that are controlled by a shared network include the Defense Satellite Communications System and the Ultra High Frequency Follow-On system, also a communications satellite. The AFSCN is DOD's largest shared network. It supports national security (defense and intelligence) satellites during launch and early orbit periods, and is used to analyze anomalies affecting orbiting satellites. It also acts as a backup control system for national security satellites, even for satellites that are not routinely controlled by the AFSCN. The AFSCN comprises three interrelated segments: (1) operational control centers that provide satellite TT&C support from launch preparation through on-orbit operations, (2) remote tracking stations that provide the space-ground link between satellites on orbit and the AFSCN, and (3) an interconnected network of space and ground assets with communication links that provides interfaces for external users to access network data.
The AFSCN has two operational control centers: a primary center located at Schriever Air Force Base, Colorado; and a secondary control node (backup center) at Vandenberg Air Force Base, California. The AFSCN also has antennas and tracking stations dispersed throughout the world. Figure 1 outlines the AFSCN and related centers, stations, and antennas. The Navy and Army also operate smaller shared satellite control networks with satellite control operations centers, antennas, and tracking stations that support several satellite programs, including:
- The Naval Satellite Control Network, which operates, manages, and maintains five missions through one operational control center and four remote sites.
- The Naval Research Laboratory satellite control network, which supports multiple classified and scientific satellite missions through one operational control center.
- The Army, which conducts payload control, transmission control, and backup platform control for two missions through five operations centers located throughout the world.
In addition to the shared network, the Air Force operates a number of dedicated satellite control networks. A dedicated network operates a single satellite system, and its assets are generally not shared with other satellite systems. Dedicated networks are usually customized or tailored to their associated satellite and therefore unsupportable on the shared networks. In addition, unlike a shared network, a dedicated network often performs both bus and payload control through the same antenna. Examples of Air Force satellite systems controlled by a dedicated network include the Space Based Infrared System (SBIRS), a missile warning satellite system, and the Global Positioning System (GPS), a constellation of satellites that provides positioning, navigation, and timing data to users worldwide. These dedicated networks have 23 antennas at 10 locations around the world.
Figure 2 shows the number of antennas at various sites around the world for the AFSCN, a shared network, and for several dedicated networks, such as those used by SBIRS and GPS. Some satellites using dedicated networks require continuous contact with their ground antennas, thereby precluding those antennas from being shared with other satellite systems; other satellites need only be in contact with their ground antennas on an intermittent basis, and thus are potentially compatible with a shared network. Over the past 50 years, and especially in the last decade, DOD has increasingly deployed dedicated satellite control networks rather than integrating new satellite programs into a larger shared satellite control network. DOD is currently operating at least a dozen dedicated satellite control networks, which typically do not share assets or personnel with other dedicated or shared networks, resulting in fragmented and potentially duplicative operations and inefficiencies across its satellite control operations. While dedicated networks offer a handful of advantages to the specific satellite systems they serve, shared networks offer potential advantages DOD-wide in leveraging hardware, software, and personnel. As of February 2013, Air Force officials stated that the Air Force had not worked to move its current dedicated operations to a shared satellite control network, which could better leverage investments. Since 1992, DOD has described its largest shared network, the AFSCN, as, among other things, fragmented and lacking standardization and interoperability. In addition, Air Force Space Command officials stated that consolidation of functions and capabilities, reduction of potential duplication, and improvement in interoperability at all levels are needed. DOD's share of satellite programs using dedicated networks has increased since 1960, as shown in figure 3.
Long-standing systems, such as the Defense Support Program and Defense Satellite Communication System, were developed with their own control centers and antenna sites because existing shared networks could not accommodate them, or because the programs were determined to be better served by combining payload and satellite control operations into a single, dedicated network. As a result of these types of decisions, DOD now uses multiple dedicated networks built specifically for individual satellite programs. Some of these networks include:
- GPS, which comprises two control centers and four antenna sites.
- SBIRS, which comprises three control centers and four sites, each with up to five antennas.
In recent years, the Air Force has acquired and launched the Space Based Space Surveillance satellite, which has its own dedicated system. In addition, plans for other future satellite acquisitions indicate likely additional dedicated networks. While these networks enable satellite control operations, they were not designed to be interoperable. As a result, they require dedicated and unshared control centers and antennas, even when sites are co-located. Figure 4 below illustrates several co-located antennas in the Indian and Pacific Oceans, operated by the Air Force and the Navy (a smaller network), that are not interoperable. Even when control centers are co-located, the configuration of the various networks as well as the organizational structure of the Air Force centers is often fragmented. For example, at Schriever Air Force Base in Colorado, 10 satellite programs are operated by eight separate satellite operations centers under the command of six separate space squadrons, or units, as shown in figure 5. DOD's reliance on dedicated satellite control operations networks is continuing with its newest satellite system acquisitions, as well as with updates for established systems.
Despite being required to conduct a cost analysis and other analyses, DOD officials managing acquisitions of new satellite systems are not required to develop a business case when deciding whether to acquire a dedicated network or to use an existing shared network. Air Force officials stated that in some cases, satellite programs are required to make their dedicated networks compatible with a potential future standard, but many programs receive waivers from that requirement. Furthermore, because dedicated networks effectively meet mission needs, the status quo is upheld without regard for cost or department-wide strategic planning for satellite control operations. New satellites in the early stages of development are already being designed to operate on dedicated networks rather than being designed with the interoperability needed for shared networks. For example, the Precision Tracking Space System, which will be part of the Missile Defense Agency's Ballistic Missile Defense System, has been designed with a dedicated satellite control operations network. In addition, updates of existing systems, such as the third generation of GPS satellites, are also continuing with the dedicated network approach. Although dedicated networks support the unique needs of some satellite programs, not all satellite control networks need to be dedicated. According to Air Force officials, the increase in dedicated networks reflects more of a preference by satellite program managers than a need. Officials stated that program managers would rather have a large, individual budget for completing their mission, including a satellite control operations network, and have other programs become compatible with their network, if necessary.
By customizing satellite control operations for each satellite, program managers do not have to modify their plans to fit within a larger organizational structure, or negotiate with any other programs over the satellite control operations system, which they may need to do if they used a shared network. However, development of a dedicated network can also result in higher costs due to the unique development required, as well as for follow-on support, since the original contractor is typically the only one able to provide this proprietary or specialized support. At the same time, while shared networks offer efficiencies and lower costs, they can have other limitations, such as being unable to support unique data rates and continuous contact needs. Some potential pros and cons of the two types of networks, based on our analysis, are outlined in table 1. While each dedicated network optimizes its individual operations, our analysis indicates that multiple dedicated systems are inefficient, because they increase fragmentation and the potential for duplication across DOD satellite control operations, and they are ultimately more expensive for DOD to acquire and operate. This fragmented approach requires more infrastructure and personnel than shared networks, because the dedicated networks often require unique software, separate and possibly unique hardware, and specialized training. As such, dedicated networks that require global access to their satellites will each have to install at least one control center and several ground stations, whereas a shared network could accommodate multiple programs with one control center and a set of global ground sites. Dedicated networks typically require individualized training for their operators, and therefore personnel tend to be specialized to one system, leading to potentially higher overall costs.
Satellite operators learn how to use unique software that is not transferable across satellite programs, as well as unique protocols for the various dedicated networks, since they are built by several companies with no common standards. The current practice is that satellite control operators specialize in functional areas for a specific satellite program. The narrowly focused training associated with this specialization limits satellite and ground system technical knowledge, resulting in heavy reliance on “back room engineering” or experts to diagnose problems. Training people on each of these programs requires time and personnel investment. Thus, a satellite operator who transfers from one satellite program to another will likely have to be retrained because even though the tasks are similar, satellite control operations are conducted differently. Networks established to operate under common standards, or with a common control interface, would likely not need special training.
The Air Force has budgeted about $400 million to modernize the AFSCN over the next five years, but the planned upgrades will do little to increase the network's capability. These efforts are mainly focused on sustaining the network at its current level of capability, and ignore more than a decade of research recommending more significant improvements to the AFSCN. The Air Force's approximately $400 million investment in modernizing the AFSCN over the next five years is to extend its life by replacing unsupportable equipment. According to Air Force Space Command officials, these efforts will provide minor capability upgrades that will maintain the aging system, but will not provide material improvements in service. Specifically, this modernization funding is being spent mainly on two efforts:
- The Electronic Schedule Disseminator (ESD) system, which schedules activities on the network, is to be upgraded from ESD 2.7 to ESD version 3.0.
Version 2.7 has been the operational baseline since 1991 and operates on 1980s computer technology. The ESD upgrade will run on a Microsoft Windows operating system and commercial off-the-shelf (COTS) hardware. This upgrade is currently underway, with a planned completion in mid-2015.
- The Remote Tracking Station Block Change effort is to upgrade existing electronics on the network's ground control computers and antennas to more modern versions on a Microsoft Windows operating system. This upgrade is currently underway, and the Air Force plans to have all of its stations upgraded by 2019. The AFSCN is currently using 1980s-era hardware based on the disk operating system.
While not fully funded, the Air Force plans to modify the AFSCN so that it is able to operate on an additional communication frequency, or band. This upgrade is to allow the network to perform data uplinks on both the L-Band and the S-Band, to add greater flexibility and avoid potential sources of interference. While these modernization efforts are intended to improve the aging system, according to Air Force officials, these measures sustain the system at the current level of performance and do not offer a material improvement in capability. For example, some of the equipment the Air Force is replacing was so outdated that program officials had to search on an online auction site for replacement parts, because they were no longer being sold by manufacturers. Air Force officials said that one reason the new upgrades were not undertaken to provide more capability is that the requirements for the network have not changed, and the Air Force does not want to pay for capabilities above and beyond the established requirements. For example, Air Force officials cited one case where the program acquired a new piece of hardware to replace an outdated piece. The new hardware provided additional capabilities beyond what was called for in the requirements document.
However, the added capabilities are being turned off so that the Air Force does not have to pay to maintain them when they are not required. Though it is prudent for programs to pay only for capabilities called for by program requirements, the overall approach of making minor changes to keep the system operating with its current capabilities may not be the best use of Air Force funds in the long term. The Air Force's actions are somewhat contrary to more than 15 years of government and space industry reports that recommended that the Air Force incorporate newer and more efficient technologies into the AFSCN to improve its capability. As long ago as 1994, Air Force Space Command identified the need for improved satellite control operations capabilities. It cited, among other things, aging equipment and technological opportunities as reasons for needed network upgrades. In 1999, GAO reported that DOD had made minimal progress in integrating and improving its satellite control operations capabilities in accordance with the then-current 1996 national space policy. More recently, in 2008, the Commander of Air Force Space Command issued a memo describing the need for increased satellite control operations efficiencies, improved interoperability, and consolidated functions. Despite these recommendations having been made over the course of almost two decades, no guidance currently exists directing the Air Force to increase the efficiency or capacity of the AFSCN. Thus, modernization efforts have continued to focus on sustaining systems at current levels of performance. A long-term plan for modernizing the network and any future shared satellite control operations networks could assist DOD in making more informed decisions about investments and about whether and to what extent to expand satellite control operations capabilities.
Commercial satellite companies that we spoke with incorporate varying degrees of interoperability, automation, and other practices into their satellite control operations networks to decrease program costs and increase efficiencies. According to Air Force officials, commercial practices could offer the Air Force similar benefits for routine functions. Satellite control operations officials at commercial companies also agree that there is potential for improvement if the Air Force adopts some commercial practices. Furthermore, for over 10 years, government and space industry reports have asserted that commercial practices for satellite control operations may increase the efficiency and effectiveness of government satellite control operations. Although there is ample evidence that these leading commercial practices could generate cost savings and improve efficiency, the Air Force has generally not implemented these practices. Officials from the seven commercial satellite companies that we spoke with leverage practices such as interoperability and automation to realize cost efficiencies and increase the accuracy of their satellite control operations. Because this industry is extremely competitive, these companies have been reluctant to publicize or share with us specific program costs, though they noted that since their companies are profit-oriented, they would not undertake the various commercial practices if they did not reduce costs and increase efficiency. Specifically, officials from all seven of the commercial companies we spoke with have found some or all of the practices below to be beneficial:
- Interoperability: Interoperable satellite control operations networks allow a single operator to control multiple satellites from one terminal, with one software interface, regardless of the satellite's age or manufacturer. For example, one company that we spoke with develops satellite control operations software for multiple companies.
One of their software programs is being used to control four satellites, each of which was made by a different contractor, and all four are of different ages.
- Automation: All but one of the commercial companies we spoke with use automation of routine functions, such as downloading telemetry data, which allows these companies to reduce the number of operators they need, and can reduce the risk of human errors. One commercial company we spoke with wrote software that allows its customer to leverage automation to operate a fleet of communication satellites with nearly “lights out” operations—needing only one operator at a time to control 15 satellites.
- Commercial-off-the-shelf (COTS) products: All but one of the commercial companies we spoke with agreed that COTS products are less expensive than custom ones and can be modified to meet each company's needs. A number of companies we spoke with take advantage of COTS products, which are also easier to replace when needed. For example, one of the COTS satellite control operations software systems that is used by many commercial satellite operators allows a satellite to be controlled by any company that uses the same software. This can be beneficial when companies buy and sell satellites, or when a company leases out control of its satellite to another company.
- Hybrid network: A hybrid network arrangement allows a company to augment its ground network of antennas and control stations by leasing antenna time on another company's network. One company we spoke with has found that using pre-existing physical assets from other providers can be less costly than building and maintaining all of the ground assets they use.
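The "lights out" automation practice described above, in which routine telemetry passes are screened automatically and only anomalies are escalated to a single on-duty operator, can be illustrated with a minimal sketch. All satellite names, telemetry channels, and alarm thresholds here are invented for the example.

```python
# Hypothetical alarm limits per telemetry channel: (low, high) bounds.
ALARM_LIMITS = {"battery_v": (24.0, 33.0), "temp_c": (-40.0, 60.0)}

def process_pass(sat_id, telemetry):
    """Automatically screen one telemetry pass; return any out-of-limits alerts."""
    alerts = []
    for channel, value in telemetry.items():
        low, high = ALARM_LIMITS[channel]
        if not (low <= value <= high):
            alerts.append((sat_id, channel, value))
    return alerts

# A fleet of 15 satellites is screened with no human in the loop;
# only anomalies reach the single operator's console.
fleet = {f"SAT-{n:02d}": {"battery_v": 28.0, "temp_c": 15.0} for n in range(1, 16)}
fleet["SAT-07"]["battery_v"] = 22.5   # simulated undervoltage anomaly

operator_queue = []
for sat_id, telemetry in fleet.items():
    operator_queue.extend(process_pass(sat_id, telemetry))
```

In this sketch the nominal passes for 14 of the 15 satellites require no operator action at all; only the single simulated undervoltage reading lands in the operator's queue, which is the property that lets one person supervise a whole fleet.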
While commercial satellites and Air Force satellites can greatly differ in their missions, and to some extent may differ in their need for information security, the basic satellite control operations functions of most of these satellites are generally the same, allowing trusted practices from the commercial sector to be applicable to many Air Force satellite programs. Air Force satellite control officials have stated that there are opportunities for increasing efficiencies and reducing costs in Air Force satellite control operations by using these commercial practices. Officials at three of the commercial satellite operations companies we spoke with that have knowledge or experience with DOD satellite control networks agreed with these statements. The practices mentioned above are trusted and proven in the commercial sector, and incorporating some or all of them may result in improved Air Force satellite control operations. For example:
- Interoperability: While implementing an interoperable infrastructure for software and hardware could have a dramatic impact on program costs, many Air Force satellites currently rely on separate interface software and hardware to control each kind of satellite. For example, when looking at basic control functions, the control interface for a communications satellite is significantly different from that for a positioning, navigation, and timing satellite, and even two positioning, navigation, and timing satellites may have different control interfaces, depending on when they were built and what company built them. Making future Air Force satellite programs' satellite control operations interoperable would allow one operator to use a single terminal to control numerous satellites, similar to commercial practices.
This could reduce costs associated with purchasing multiple types of software and training the operators on each system, as well as potentially reduce the number of staff required, since one person could operate multiple satellites more easily. For example, an industry group study estimated that increasing interoperability and automation could allow one Air Force satellite control operations group to reduce its operations personnel by 45 percent. Retrofitting existing satellite programs to use common software would likely be cost-prohibitive, but it would be possible for programs under development to be designed and built utilizing standard satellite control operations software programs, allowing for greater flexibility in the future.
- Automation: While commercial companies use computer programs to perform routine tasks, the Air Force typically uses human operators. Increasing automation for routine control functions could reduce Air Force personnel costs and the potential for human errors. According to satellite control operations officials at Air Force Space Command, the use of automation in the Air Force satellite control network is discouraged by the risk-averse culture of the service.
- COTS products: Basic satellite control operations software programs exist on the market that could be modified to meet the Air Force's needs, but the Air Force continues to purchase custom software solutions. Specialized software systems are usually expensive and often take longer to develop than planned. Not only are customized systems expensive to acquire, they are also proprietary to the company that developed them, requiring the Air Force to use the original contractor for any follow-up modifications to the software.
Hybrid networks: Although the Air Force has selectively and sparingly used commercial networks for some satellite control operations, it has no future plans to regularly use any commercial antennas or control centers for its satellite control operations. The Air Force by necessity has very high standards for security and reliability, and Air Force officials have said that these security standards are higher than those of private sector space systems. However, officials from the commercial companies we spoke with that have used this practice told us that they have similarly high security standards to the Air Force, and have been able to effectively use hybrid networks. Both large defense contractors and space agencies in other countries use second party providers for some of their satellite control operations. Also, the National Aeronautics and Space Administration (NASA) has embraced a hybrid system for its Near Earth Network, and although it owns and operates some control stations, NASA has seen benefits from contracting out its service in other locations. According to NASA officials, there were cost avoidances associated with not building satellite tracking stations in geographical areas where mission requirements were minimal. In this case, a commercial network was able to provide the necessary capabilities to augment NASA's network with a lower cost alternative. According to NASA officials, obtaining geographically diverse support from commercial providers enables NASA to avoid some infrastructure costs. Although NASA was unable to quantify the exact cost savings from using hybrid networks, one commercial company that provides services to NASA estimated that the use of commercial networks reduced NASA's operations and maintenance costs by about 30 percent with very low mission risk. According to Air Force Space Command satellite control operations officials, the Air Force has not yet explored this possibility.
Air Force officials acknowledged that they may be missing out on an opportunity to improve their satellite control operations at a low cost, though they said that security issues, such as handling of classified data, would have to be addressed to the Air Force's satisfaction to make this a possibility. For over 10 years, government and space industry reports have indicated that commercial practices for satellite interoperability may increase the efficiency and effectiveness of government satellite control operations, and many of these studies have recommended that DOD adopt these practices. Since 1996, a number of reports by government and industry groups have described opportunities for Air Force satellite control operations to improve their efficiency through methods such as interoperability between satellite control networks and the adoption of commercial practices. These reports generally concluded that there are numerous opportunities for improvement in Air Force satellite control operations in the near and long term, as indicated in figure 6. Though Air Force officials, management at commercial companies, and a decade of government research agree that there are opportunities to use commercial practices in Air Force satellite control operations, the Air Force has generally not implemented these practices. Efforts to implement commercial practices have been discussed, and in some cases initial steps to initiate changes have been taken, but the Air Force has not followed through with their implementation. For example, the Air Force participates in the annual Ground Systems Architecture Workshop, where experts in the field gather to discuss ground system issues and collaborate on solutions, but according to Air Force officials, few if any of the solutions discussed have been implemented.
In addition, DOD initiated the SATOPS (satellite operations) Enterprise Transformation effort in 2011 to reduce duplication, improve interoperability, enable consolidation, and move to more efficient satellite control operations. However, one of the new technologies that was planned as part of this effort, a new type of antenna, has not been proven to be cost effective, and progress appears to be stalled overall on the proposed improvements, resulting in no change to the way the Air Force conducts its satellite control operations. While opportunities exist to improve DOD satellite control operations, there are also barriers that hinder DOD's ability to make these improvements. These barriers exist both at the program level and at higher management levels within DOD, and include: the lack of a long-term plan for satellite control; limited insight into satellite control operations spending; no existing requirement to establish a business case for a program's satellite control operations approach; and a lack of autonomy at the program level to implement satellite control operations improvements. In particular: DOD does not have a long-term plan for satellite control operations. Several DOD officials we spoke with stated that although they believe there are efficiencies to be gained from alternative ways of performing satellite control, there is nothing prompting them to do things differently. Furthermore, they stated that there is no DOD-wide guidance or long-term plan that directs or supports the implementation of alternative methods for performing satellite control operations. In addition, DOD does not have plans to transition its dedicated networks to shared networks. Instead, the agency plans to continue deploying dedicated networks, in part because it is not required to justify whether a shared or dedicated network best meets overall requirements.
However, we found that there have been some plans to evolve future satellite operations centers to be more integrated and interoperable. For example, in an Air Force briefing from June 2007, plans were depicted to evolve stove-piped centers—where each satellite program procures its own TT&C system—to centers where compatible systems share TT&C services. In addition, in December 2008, the Air Force Space Command Commander issued a memorandum on its intent for an Air Force Satellite Operations Enterprise Architecture Transformation. The Commander cited fiscal realities, operational efficiencies, and emerging threats as reasons for reevaluating how satellite operations, to include satellite control, are conducted for on-orbit systems. However, Air Force officials told us that other than the current, limited AFSCN modernization efforts, there are no other current plans in place to update or modernize the capabilities of the network, and at this time, according to Air Force documents, there is no end-of-life or follow-on projected for the network, either. DOD is unable to identify all spending on satellite ground control operations across all DOD satellite programs. Programs have not needed to keep track of budgets by dividing satellite control operations funding out from other satellite mission funding, since the focus has been on dedicated ground control networks. However, without knowing how much it spends on basic satellite control operations for all of its satellites, DOD cannot calculate the potential savings or perform a cost/benefit analysis of any future changes to satellite control operations. Each of the individual satellite programs with a dedicated ground system manages and reports that program's satellite control operations budget separately from AFSCN funding.
These budget reports do not separate the satellite control operations funding from other program funds, such as those expended on mission data. Furthermore, the way that programs are budgeted and organized makes it onerous for them to determine how much of their budget is spent on satellite control operations. It can be particularly challenging when programs use the same communication paths and staff for satellite control operations and mission data transfers, or when a satellite ground control center controls multiple systems, because program costs are accounted for as a whole for these functions. For example, officials from the Navy's Mobile User Objective System (MUOS) program office said that MUOS uses the same communications paths for both the satellite control operations and the mission data commands. As a result, determining how much of the communication time is spent on satellite control operations versus mission data would be hard to do accurately. In addition, three satellite constellations are controlled from the same Navy satellite operations center using the same people and same equipment. Determining how much time each person spends on each piece of equipment, in order to divide the funding between the three systems, would be difficult to do with any degree of reliability. Similarly, the Defense Meteorological Satellite Program sends communications for both satellite control operations and mission data simultaneously over the same routes. The program utilizes commercial circuits for this, and pays for these connections under the same invoice, with no way to determine how much should be allocated to satellite control operations and how much to mission data. However, it may be possible to track costs associated with satellite control operations through a program's individual work breakdown structure, which is the cornerstone of every program because it defines in detail the work necessary to accomplish a program's objectives.
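A work breakdown structure lends itself to a simple cost rollup. The sketch below is hypothetical: the element names, numbering, and dollar amounts are illustrative placeholders, not actual program data. It shows how leaf-level WBS elements could let a program isolate satellite control operations costs from other mission costs:

```python
# Hypothetical sketch: a work breakdown structure (WBS) as a nested tree
# whose leaf elements carry costs, letting a program roll up totals by
# branch. All names and dollar amounts are illustrative placeholders.

wbs = {
    "1 Satellite Program": {
        "1.1 Space Vehicle": {"1.1.1 Bus": 120e6, "1.1.2 Payload": 200e6},
        "1.2 Ground Segment": {
            "1.2.1 Satellite Control Operations": {
                "1.2.1.1 TT&C Software": 15e6,
                "1.2.1.2 Control Center Staffing": 25e6,
            },
            "1.2.2 Mission Data Processing": 60e6,
        },
    }
}

def rollup(node):
    """Sum the costs of all leaf elements beneath a WBS node."""
    if isinstance(node, dict):
        return sum(rollup(child) for child in node.values())
    return node

program = wbs["1 Satellite Program"]
control = program["1.2 Ground Segment"]["1.2.1 Satellite Control Operations"]
print(f"Total program cost:           ${rollup(program)/1e6:,.0f}M")  # $420M
print(f"Satellite control operations: ${rollup(control)/1e6:,.0f}M")  # $40M
```

Because each cost is booked against a defined deliverable, the satellite control branch can be totaled independently of mission data processing, which is exactly the separation the budget reports described above do not provide.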
Establishing a work breakdown structure to track weapon system acquisition costs is a best practice because it allows a program to track cost and schedule by defined deliverables. Satellite programs are not required to present a business case when choosing to develop a dedicated network. Air Force satellite program officials are currently free to choose the satellite control network type that best suits their program without needing to justify that choice. Without a requirement to weigh potential compromises in performance against potential reductions in cost, most new programs are choosing to build a dedicated network. For some satellite programs, having a dedicated satellite control operations network might still be the most efficient choice, due to unique mission requirements. However, programs are not required to conduct an analysis to determine a business case for proceeding with a dedicated or shared network, or to validate their network's requirements. While business case analyses are required for milestone B certification of major defense acquisition programs, which is approval to enter system development, there is not a specific requirement or policy to analyze whether or not to use a shared satellite control operations network. (See 10 U.S.C. § 2366b(a); DOD Instruction 5000.02, Encl. 2, § 6(c)(5) (Dec. 8, 2008); and DOD Directive-Type Memorandum (DTM) 09-027, "Implementation of Weapon Systems Acquisition Reform Act of 2009" (Dec. 4, 2009, incorporating Change 4, Jan. 11, 2013).) Major defense acquisition programs are those estimated by DOD to require an eventual total expenditure for research, development, test, and evaluation of more than $365 million, or for procurement of more than $2.19 billion, including all planned increments or spirals, in fiscal year 2000 constant dollars. Currently, the lack of cost data means that DOD cannot perform a cost-benefit analysis to determine whether the potential benefits of individual programs using dedicated networks outweigh the potential drawbacks of continued and even increased systemic fragmentation and inefficiency. Without a cost-benefit analysis, DOD has a less compelling business case for its current approach for acquiring satellite control networks and cannot strategically determine whether its current options of shared and dedicated networks are the best option or whether other options, such as hybrid networks, might be better suited to meet its satellite control operational needs. GAO's Cost Estimating and Assessment Guide states that a business case analysis or a cost-benefit analysis seeks to find the best value solution by linking each alternative to how it satisfies a strategic objective. This linkage is achieved by developing business cases that present facts and supporting details among competing alternatives, including the life cycle costs and quantifiable and non-quantifiable benefits. Specifically, each alternative should identify: (1) relative life cycle costs and benefits; (2) methods and rationale for quantifying the life cycle costs and benefits; (3) effect and value of cost, schedule, and performance trade-offs; (4) sensitivity to changes in assumptions; and (5) risk factors. DOD guidance regarding economic analysis similarly encourages the use of sensitivity analysis, a tool that can be used to determine the extent to which costs and benefits change or are sensitive to changes in key factors; this analysis can produce a range of costs and benefits that may provide a better guide or indicator than a single estimate. While historical trends show a move away from shared networks to dedicated networks, AFSCN, the largest shared network, is currently undergoing expensive efforts to sustain the network's capabilities. This is happening at the same time that DOD is planning on building new dedicated networks for upcoming satellite programs.
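The kind of sensitivity analysis that GAO and DOD guidance describe can be illustrated with a short sketch. All figures below (upfront costs, annual operations costs, growth rates) are hypothetical placeholders rather than DOD data; the point is only to show how varying one key assumption yields a range of cost outcomes, and can even change which alternative looks cheaper, instead of a single point estimate:

```python
# Hypothetical sensitivity analysis comparing two satellite control
# alternatives: a dedicated network vs. the incremental cost of joining
# a shared network. All dollar figures are illustrative placeholders.

def life_cycle_cost(upfront, annual_ops, years, ops_growth):
    """Upfront investment plus annual operations costs that grow
    at a fixed rate each year."""
    total, yearly = upfront, annual_ops
    for _ in range(years):
        total += yearly
        yearly *= 1 + ops_growth
    return total

def sensitivity(upfront, annual_ops, years, growth_rates):
    """Life-cycle cost under each assumed ops-cost growth rate,
    producing a range of estimates rather than a single number."""
    return {g: life_cycle_cost(upfront, annual_ops, years, g)
            for g in growth_rates}

RATES = [0.00, 0.03, 0.06]  # assumed annual ops-cost growth scenarios
dedicated = sensitivity(upfront=20e6, annual_ops=12e6, years=15, growth_rates=RATES)
shared = sensitivity(upfront=100e6, annual_ops=7e6, years=15, growth_rates=RATES)

for g in RATES:
    cheaper = "shared" if shared[g] < dedicated[g] else "dedicated"
    print(f"ops growth {g:.0%}: dedicated ${dedicated[g]/1e6:.0f}M, "
          f"shared ${shared[g]/1e6:.0f}M -> {cheaper} network is cheaper")
```

With these placeholder numbers the dedicated network looks cheaper only when operations costs are assumed flat; once modest growth is assumed, the shared alternative wins. In a real business case, the same structure would extend to the other elements GAO's guide lists, such as quantified benefits, trade-offs, and risk factors, with each key assumption varied in turn.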
These are large investments for DOD, and approaching them without a thorough analysis of the options for satellite control operations may lead to wasted money and missed opportunities to reduce fragmentation and increase satellite control operations efficiencies. Satellite programs do not have the autonomy to implement improvements to satellite control operations. It is difficult to implement improvements to satellite control operations practices because changes must be initiated at a level higher than the individual program level. According to Air Force officials, even if an individual satellite program's managers wanted to begin to incorporate commercial practices, they have limited flexibility to do so. We reported above that DOD allows programs to begin without establishing a business case, resulting in program managers not being well positioned to successfully execute a weapon acquisition. Program officials are required to adhere to established program requirements, and many such requirements do not allow for these improvements. For example, Air Force officials cited one case where the program acquired a new piece of hardware to replace an outdated piece. The new hardware provided additional capabilities beyond what was called for in the requirements document. However, the added capabilities are being turned off so that the Air Force does not have to pay to maintain them when they are not required. In addition, according to Air Force Space Command officials, some DOD satellite program managers prefer to have satellite control networks that are optimized for their specific mission needs and are wary of introducing alternative ways of doing business, such as automation and interoperability, into satellite systems. Officials explained that concerns about information security, automation errors, and a lack of desire to change the status quo keep programs from implementing changes that they feel might threaten their missions.
Information security in particular is of concern for the Air Force. According to Air Force officials, satellite control operations are highly dependent on accurate and precise information, and security threats such as the introduction of malicious code to the system or the interception of sensitive data could pose significant risks to the satellite's mission. The perception that a large upfront investment would be needed to implement new satellite operations practices likely makes the status quo of dedicated networks the preferred acquisition approach going forward. In addition, the time that would be needed to develop a shared system with increased automation, for example, also likely makes the status quo more appealing. DOD's current array of satellite control networks favors dedicated systems and has been largely shaped by past practices. Dedicated networks are the default option because they offer custom solutions for each satellite system. For this reason, there have been and will continue to be good reasons for having dedicated networks. However, defaulting to dedicated systems does not constitute a clear approach for acquiring satellite control networks, one that results from analysis rather than presumption. Given the prevalence of dedicated networks, DOD has had long-standing difficulty in effectively implementing improvements across its varied satellite control operations, which has hindered its ability to achieve significant results in this area. At the moment, DOD lacks the incentive to change its current practices, in part because it does not know the total cost associated with its satellite control operations, though it is currently spending millions on modernization efforts. Numerous studies, commercial practices, and fiscal constraints all offer compelling reasons for DOD to take a fresh look at how it designs and invests in control networks.
But several barriers have maintained the status quo, to include the lack of a long-term plan, an aversion to risk, and no cost visibility. Thus, by developing a plan for modernizing its shared satellite control operations networks, DOD could be better positioned to address barriers, reduce fragmentation, and increase efficiencies. To better facilitate the conduct of satellite control operations and accountability for the estimated millions of dollars in satellite control investments, and to reduce fragmentation, we recommend that the Secretary of Defense take the following two actions: 1. Conduct an analysis at the beginning of a new satellite acquisition to determine a business case for proceeding with either a shared or dedicated satellite control system, to include its associated ground antenna network. The analysis should include a comparison of total dedicated network costs to the incremental cost of integrating onto a shared network to determine applicable cost savings and efficiencies. 2. Develop a department-wide long-term plan for modernizing its Air Force Satellite Control Network and any future shared satellite control services and capabilities. This plan should identify methods that can capture or estimate satellite control costs as well as authorities that can be given to the program managers to give them the flexibility needed to ensure ground systems are built to a common network when the business case analysis shows it to be beneficial. This plan should also identify which commercial practices, if any, can improve DOD satellite control operations in the near- and long-term, and as appropriate, develop a plan of action for implementing them. We provided a draft of this report to DOD for comment. In its written comments, DOD concurred with our two recommendations. The comments are reprinted in appendix II. DOD also provided technical comments which were incorporated as appropriate. 
In concurring with our recommendations, DOD agreed that efficiencies can be gained from investing in shared satellite control operations networks, with a goal of reducing duplication and improving interoperability among networks. DOD also agreed that developing a long-term, department-wide plan for modernizing satellite control operations is needed. DOD noted that both of GAO’s recommendations are similar to concepts endorsed by the Air Force, Army, and Navy, and DOD plans to initiate a comprehensive Satellite Operations Enterprise Architectural Analysis to serve as a foundation to define requirements for planning new satellite program acquisitions. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretaries of the Air Force, Navy, and Army; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Our review focused on the ground networks used to perform satellite control operations. As such, we reviewed relevant satellite control upgrades, and sustainment and modernization efforts for the Air Force, Navy and Army to determine the potential for fragmentation or duplication. We reviewed satellite control plans, requirements, budgets, and studies associated with current and future capabilities. We developed an inventory of key satellite programs to enable a comparison of satellite control operations types, attributes, and funding. 
To identify any potential for fragmentation and duplication, we assessed military service investments in satellite control operations, acquisitions, and capabilities, and reviewed prior GAO work for relevant criteria. We also used the information associated with various aspects of the modernization efforts, and actual or planned satellite control operations capabilities, to assess whether any of the efforts were potentially duplicative. We analyzed documentation and interviewed officials from various offices of the Secretary of Defense, to include the Office of the Deputy Assistant Secretary of Defense for Communications, Command and Control and Cyber, Air Force, Navy, Army, and National Reconnaissance Office (NRO). We did not, however, review NRO systems, budgets, or requirements, but did obtain perspectives on satellite operations from the NRO, which could not be incorporated due to their classification. We also determined the extent to which various aspects of the modernization efforts were duplicative by reviewing briefings on satellite control operations and obtaining status updates from Air Force, Navy, and Army officials. To assess the status of modernization efforts and the costs associated with current and planned upgrades and sustainment efforts for the Air Force Satellite Control Network (AFSCN) and other services’ satellite control efforts, we reviewed the military services’ satellite operations budget documents for fiscal years 2011 through 2017. Specifically, we obtained budget documentation from the Office of the Under Secretary of Defense (Comptroller) for all 7 years. Based on our review of budget information we collected from agencies that we contacted as well as from presidential budget estimates, we determined that the AFSCN was the largest satellite control operations network within DOD and that the Air Force was responsible for most satellite control programs. 
We interviewed AFSCN officials and asked for an explanation and description of each planned upgrade, sustainment, and modernization effort. We reviewed documentation and interviewed officials on the status, technology, and actual or planned operational characteristics. We reviewed and analyzed the budget documents and program documentation to determine how the Air Force defined and was proceeding with its modernization efforts for the AFSCN, and interviewed DOD officials. We also reviewed our prior reports on satellite control operations to gain a better understanding of the progress DOD has made on its satellite control operations. To determine which commercial practices could benefit the Air Force's satellite control operations, we conducted internet searches and attended conferences on related subject matter to identify potential companies to participate. We then selected a nongeneralizable sample of 13 commercial companies that are known in the space community to operate satellites and to have knowledge of satellite control operations, selecting them based on their satellite constellation size, orbit, and capabilities. We asked the 13 companies to provide information on how they perform or build their satellite control operations and capabilities. Of the 13 companies we contacted, we received detailed information from 7. We interviewed officials from these commercial companies and reviewed documentation on their practices associated with satellite control operations and compared and contrasted them with Air Force practices. We interviewed key personnel and reviewed company data to compile a list of potential practices that could be employed by the Air Force to improve its satellite operations. Based on interviews and reviews of commercial documentation and DOD reports on satellite control operations, we determined that specific commercial practices—such as automation and commercial off-the-shelf products—may be beneficial to DOD satellite programs.
It should be noted that the commercial practices used by the companies that we included in our review align with practices that other satellite industry companies, as well as DOD, have cited as beneficial for improving the effectiveness of satellite control operations. Our assessment of the applicability of satellite control operations practices adopted by commercial companies is focused primarily on unclassified DOD satellite programs and may not be applicable to classified NRO systems. To identify any potential barriers to the implementation of commercial practices, we reviewed reports and documentation, such as the National Defense Industrial Association (NDIA) 2007 Summer Study, AFSPC Satellite Operations Enterprise Assessment, and briefings from the Ground Systems Architectures Workshop, and interviewed officials from the organizations mentioned above. We interviewed commercial company officials with prior military service and Air Force officials, given that the Air Force Satellite Control Network is DOD's largest satellite control network. We reviewed our prior reports to compare and contrast previous DOD efforts to improve satellite control operations. In doing so, we were able to identify whether DOD had improved its operations or whether barriers persisted. We used the information to assess whether barriers had affected the funding, cost, schedule, and performance of satellite control operations. In addition to the contact named above, Arthur Gallegos, Assistant Director; Marie P. Ahearn; Maricela Cherveny; Danielle Greene; Laura Hook; Ioan Ifrim; Angela Pleasants; and Roxanna T. Sun made key contributions to this report.

DOD manages the nation's defense satellites, which are worth at least $13.7 billion, via ground stations located around the world.
These ground stations and supporting infrastructure perform three primary functions: monitoring the health of the satellite; ensuring it stays in its proper orbit (these two activities are collectively known as satellite control operations); and planning, monitoring, and controlling the execution of the overall mission of the satellite. Based on the House Armed Services Committee Report and discussions with defense committee staff, GAO (1) reviewed the Air Force's satellite control operations to assess the potential for fragmentation or duplication, (2) assessed the status of modernization efforts, (3) identified any commercial practices that could improve the Air Force's satellite control operations, and (4) identified any barriers to implementing them. GAO reviewed modernization funding documents and related studies, and interviewed DOD officials and 7 commercial satellite companies from a nongeneralizable sample selected in part because of the companies' satellite capabilities. The Department of Defense's (DOD) satellite control networks are fragmented and potentially duplicative. Over the past decade, DOD has increasingly deployed standalone satellite control operations networks, which are designed to operate a single satellite system, as opposed to shared systems that can operate multiple kinds of satellites. Dedicated networks can offer many benefits to programs, including possible lower risks and customization for a particular program's needs. However, they can also be more costly and have led to a fragmented, and potentially duplicative, approach which requires more infrastructure and personnel than shared operations. For example, one Air Force base has 10 satellite programs operated by 8 separate control centers. According to Air Force officials, DOD continues to acquire standalone networks and has not worked to move its current standalone operations towards a shared satellite control network, which could better leverage DOD investments.
The Air Force Satellite Control Network (AFSCN), DOD's primary shared satellite control network, is undergoing modernization efforts, but these will not increase the network's capabilities. The Air Force budgeted about $400 million over the next 5 years for these efforts. However, these efforts primarily focus on sustaining the network at its current level of capability and do not apply a decade of research recommending more significant improvements to the AFSCN that would increase its capabilities. Commercial practices have the potential to increase the efficiency and decrease the costs of DOD satellite control operations. These practices include: interoperability between satellite control operations networks; automation of routine satellite control operations functions; use of commercial off-the-shelf products instead of custom ones; and a "hybrid" network approach which allows a satellite operator to augment its network through another operator's complementary network. Both the Air Force and commercial officials GAO spoke to agree that there are opportunities for the Air Force to increase efficiencies and lower costs through these practices. Although numerous studies by DOD and other government groups have recommended implementing or considering these practices, the Air Force has generally not incorporated them into its satellite control operations networks. DOD faces four barriers that complicate its ability to make improvements to its satellite control networks and adopt commercial practices. First, DOD has no long-term plan for satellite control operations. Second, the agency lacks reliable data on the costs of its current control networks and is unable to isolate satellite control costs from other expenses. Third, there is no requirement for satellite programs to establish a business case for their chosen satellite control operations approach.
And fourth, even if program managers wanted to make satellite control operations improvements, they do not have the autonomy to implement changes at the program level. Until DOD begins addressing these barriers by implementing a long-term plan for future satellite control network investments, one that captures estimates of satellite control costs, identifies authorities that can be given to program managers, and incorporates commercial practices, the department's ability to achieve significant improvements in satellite control operations capabilities will be hindered. GAO recommends that the Secretary of Defense direct future DOD satellite acquisition programs to determine a business case for proceeding with either a dedicated or shared network for that program's satellite control operations, and develop a department-wide long-term plan for modernizing the AFSCN and any future shared networks and implementing commercial practices to improve DOD satellite control networks. DOD concurred with GAO's recommendations.
Agencies are generally required to use full and open competition—achieved when all responsible sources are permitted to compete—when awarding contracts. However, the Competition in Contracting Act of 1984 recognizes that full and open competition is not feasible in all circumstances and authorizes contracting without full and open competition under certain conditions. Examples of allowable exceptions to full and open competition include circumstances when the contractor is the only source capable of performing the work or when disclosure of the agency's need would compromise national security. An agency may also award a contract noncompetitively when the need for goods and services is of such an unusual and compelling urgency that the federal government faces the risk of serious financial or other injury. When using the urgency exception to competition, an agency may limit competition to the firms it reasonably believes can perform the work in the time available. However, an agency is not permitted to award a noncompetitive contract where the urgent need has been brought about by a lack of advance planning. Unlike the other exceptions to competition provided by the FAR, awards that use the urgency exception have certain time restrictions. Specifically, the total period of performance is limited to the time necessary to meet the requirement and for the agency to enter into another contract through the use of competitive procedures. Further, the period of performance may not exceed 1 year unless the head of the agency or appointed designee determines that exceptional circumstances apply. Generally, noncompetitive awards must be supported by written justifications that contain sufficient facts and rationale to justify use of the specific exception to competition that is being applied to the procurement.
At a minimum, justifications must include 12 elements specified in the FAR, including a description of the goods and services being procured, market research conducted, and efforts to solicit offers, among other things, as shown in figure 1. For noncompetitive awards using the urgency exception, justifications may be prepared and approved within a reasonable time after award when doing so prior to award would unreasonably delay the acquisition. Justifications are to be published on the Federal Business Opportunities website (FedBizOpps), generally within 30 days after contract award. Additionally, justifications must be approved at various levels within the contracting organization. These levels vary according to the estimated total dollar value of the proposed contract, including all options. As outlined in table 1, the approval levels range from the contracting officer for smaller dollar contracts up to the agency’s senior procurement executive for larger dollar contracts. The FAR has more streamlined procedures for awarding contracts under the simplified acquisition threshold—generally less than $150,000. These smaller dollar awards are exempt from the justification and documentation requirements described above for contracts over this threshold. For example, contracting officers awarding a contract under simplified acquisition procedures must only document the determination that competition was not feasible; no approval beyond the contracting officer is required. Further, agencies do not have to document the extent and nature of the harm to the government that necessitates limiting competition when using simplified acquisition procedures. When using these procedures, agencies may solicit an offer from one contractor in certain circumstances, including when the contracting officer determines that only one source is reasonably available. 
In general, once a contract is awarded, the awarding agency must enter certain information into FPDS-NG, the federal government’s database that captures information on contract awards and obligations. Agencies are responsible for the quality of the information entered into the database. Data captured include, for example, the contract value, whether the contract was awarded competitively or not, and what authority was used to award the contract noncompetitively. In FPDS-NG, there are three fields for agencies to report competition data for contracts awarded:

Extent competed: the competitive nature of the contract awarded, for example, whether the contract was awarded using full and open competition or not competed using simplified acquisition procedures.

Solicitation procedure: the procedure an agency used to solicit offers for a proposed contract opportunity, for example, soliciting an offer from only one contractor because the agency deemed only one source available to fulfill the need or soliciting offers pursuant to simplified acquisition procedures.

Other than full and open competition: the reason an award was not competed and the authority used to forego full and open competition, for example, the unusual and compelling urgency exception to competition. For the purposes of this report, we refer to this field as “the reason not competed” to be more descriptive of the content in the field.

Based on data from FPDS-NG, DOD, State, and USAID obligations for contracts and task orders reported as using the urgency exception during fiscal years 2010 through 2012 were small relative to other exceptions to full and open competition. Of the $998 billion that DOD obligated for all contracts during this period, $432 billion, or 43 percent, was awarded noncompetitively; however, only about $12.5 billion—or about 3 percent— of DOD’s noncompetitive obligations were awarded under the urgency exception. 
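The DOD percentages above follow directly from the reported totals. A quick arithmetic check (variable names are ours; the dollar figures come from the report text):

```python
# Consistency check of the DOD obligation figures quoted above.
# Dollar amounts are in billions and are taken from the report text.

dod_total_obligations = 998.0   # all DOD contract obligations, FY 2010-2012
dod_noncompetitive = 432.0      # obligations on noncompetitively awarded contracts
dod_urgency = 12.5              # obligations under the urgency exception

noncompetitive_share = dod_noncompetitive / dod_total_obligations * 100
urgency_share = dod_urgency / dod_noncompetitive * 100

print(f"noncompetitive share of all obligations: {noncompetitive_share:.0f}%")
print(f"urgency share of noncompetitive obligations: {urgency_share:.1f}%")
```

The first share rounds to 43 percent and the second to just under 3 percent, matching the figures reported above.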
Less than 1 percent of USAID’s $3.3 billion in noncompetitive obligations was obligated under the urgency exception. In comparison to DOD and USAID, State’s obligations under the urgency exception were more substantial, accounting for 12.5 percent—or $582 million—of its noncompetitive obligations, as shown in figure 2. DOD’s obligations under the urgency exception accounted for more than 85 percent of the total dollars obligated using the urgency exception across the federal government from fiscal years 2010 through 2012. Among civilian agencies, State and USAID obligations using the urgency exception accounted for about 4 percent and less than 1 percent of total urgency obligations across the federal government, respectively. DOD and USAID’s obligations under the urgency exception remained relatively constant for fiscal years 2010 through 2012; however, State’s obligations in fiscal year 2011 were $301.4 million, more than a twelvefold increase over its fiscal year 2010 obligations, which totaled $24.4 million. Most of the increase can be attributed to three contracts, which altogether totaled more than 75 percent of State’s total urgency obligations in fiscal year 2011. Our analysis of FPDS-NG data showed that use of the urgency exception varied in the types of goods and services acquired across DOD, State, and USAID for fiscal years 2010 through 2012. As figure 3 shows, State and USAID obligated funds to almost exclusively procure services while DOD obligated funds to procure an equal percentage of goods and services. During the 3-year period, DOD procured a broad range of items with more than half of funds—or over $6 billion—for research and development and to purchase communications and radar equipment. For State, about 60 percent of funds—over $335 million—were obligated to procure guard protection services at U.S. embassies and other facilities. 
For USAID, nearly half of its funds—more than $9 million—were obligated to support construction of roads and highways and to procure education and training services. We found coding errors in FPDS-NG for contracts awarded by DOD, State, and USAID. Specifically, of the 62 contracts we selected for our sample, 28 were reported in FPDS-NG as being awarded noncompetitively using the urgency exception, but were not. Of the 28 contracts that were not correctly reported in FPDS-NG, three contracts were awarded using other exceptions to competition, such as national security, five were awarded using a unique authority to award noncompetitive contracts at USAID, and 20 contracts were awarded using simplified acquisition procedures. We note that our sample is not representative of all dollars obligated for federal contracts. We found three Air Force contracts that were incorrectly coded in FPDS-NG as awarded using the urgency exception to competition, but the contract file documentation confirmed that these contracts were awarded noncompetitively on the basis of national security concerns or that only one vendor could supply the requirement. Of these three contracts, two were awarded noncompetitively to procure an unmanned aircraft system and replacement aircraft accessories on the basis that disclosure of the agency’s need would compromise national security. Officials explained that these coding errors were due to an administrative oversight. For the remaining contract, which was awarded on the basis that only one contractor could perform the work, officials identified and corrected the error during a routine inspection that occurred after we included the contract in our sample. 
We identified five USAID contracts that were noncompetitive awards using an authority unique to USAID that allows the agency to award noncompetitive contracts where competition would impair or otherwise have an adverse effect on programs conducted for the purposes of foreign aid, relief, and rehabilitation. FPDS-NG does not include an option to report noncompetitive awards using USAID’s unique exception to competition. Thus, according to USAID officials, contracting staff use their professional judgment to choose an option in FPDS-NG that most closely matches the circumstances of the award. This may include reporting the noncompeted award as being procured using the urgency exception. We did not assess the extent to which USAID consistently reported these awards using the urgency exception versus other exceptions to competition. Twenty of the contracts were awarded for less than $150,000 using simplified acquisition procedures. These contracts were awarded noncompetitively on the basis that the good or service was required immediately and only one source was deemed reasonably available. The contract file documentation showed that DOD, State, and USAID procured various goods and services—including water purification systems, furniture storage space, drapes, and holiday gifts—using streamlined procedures for simplified acquisitions. However, when recording the reason these contracts were not competed in FPDS-NG, DOD, State, and USAID incorrectly reported that these contracts were awarded using the unusual and compelling urgency exception to full and open competition. Contracting officials attributed these coding errors to an administrative oversight and some officials admitted to confusion about how to input data in FPDS-NG. Such inaccurate reporting adds to existing concerns about the reliability of some data elements in FPDS-NG, on which GAO has previously reported. 
For the 20 contracts awarded using simplified procedures, we found that the potential for confusion arises when agencies are directed in FPDS-NG to record the solicitation procedure and the extent to which a contract was competed. Based on our analysis of contracts reported as using the urgency exception to competition, when reporting which solicitation procedure was used, contracting staff frequently selected the entry labeled “only one source.” This denotes that the agency did not solicit offers from potential vendors because it determined only one source was reasonably available given the urgent need. Agencies also selected the entry labeled “not competed” when reporting the extent to which a contract was competed. As figure 4 illustrates, when contracting staff select “only one source” in the solicitation procedure field, FPDS-NG validation rules do not allow the selection of simplified acquisition procedures in the field for the “reason not competed.” For these contracts, per FPDS-NG instructions, contracting staff should have reported these awards as “simplified acquisition” under solicitation procedure and “not competed under simplified acquisition procedures” in the extent competed field. Consistent with FPDS-NG database instructions, using this approach would restrict the data entry options available when reporting the reason not competed to simplified acquisition procedures. During the course of our review, the Defense Logistics Agency (DLA) within DOD used this approach to correct the entry in FPDS-NG for two awards under $150,000 that are now recorded as sole-source awards using simplified acquisition procedures, which is consistent with the records maintained in the corresponding contract files. In 2010, DOD issued guidance to, among other things, help improve the quality of data reported in FPDS-NG for contracts awarded using simplified acquisition procedures. 
For fiscal years 2010 through 2012, data in FPDS-NG showed that collectively DOD, State, and USAID reported 13,040 noncompetitive contracts under $150,000 as being awarded under the urgency exception. The total obligation for these contracts was over $284 million. Ensuring contracts are correctly coded in FPDS-NG is critical as the data are used to inform procurement policy decisions and facilitate congressional oversight. After excluding the contracts we identified with data errors, we found that 34 contracts were awarded noncompetitively using the unusual and compelling urgency exception to competition to meet a range of urgent situations. Our sample included DOD contracts that were awarded to meet urgent operational needs for combat operations in Afghanistan and Iraq, including two contracts that highlight the risk of using the urgency exception for research and development initiatives to immediately field capabilities for combat operations. In addition, our sample consisted of contracts awarded to avoid a lapse in program support resulting from firms protesting the award of a competitive contract or from changes in program requirements. Generally, noncompetitive awards—such as those using the urgency exception—must be supported by written justifications that contain the facts and rationale to justify use of an exception to competition. While justifications in our sample generally contained the required information, some fell short of the FAR requirements and did not obtain the necessary signatures or make justifications publicly available. Other justifications were written ambiguously in terms of including other facts supporting the use of the urgency exception, such as the nature of the harm to the government. For the 34 contracts in our sample, DOD, State, and USAID cited a range of urgent situations that precluded full and open competition. 
More than half of the contracts were awarded to procure goods and services to support various missions in Afghanistan and Iraq. The two most common reasons agencies cited for awarding noncompetitive contracts on the basis of urgency—sometimes for the same contract—were to meet urgent operational needs for combat operations and to avoid a gap in program support resulting from unanticipated events. The remaining contracts supported unique circumstances such as fulfilling increased demand for fuel, providing telecommunications support for a foreign delegation visit, obtaining equipment for an unscheduled naval mission, and addressing emergency vehicle repairs. See appendix II for a summary of the 34 contracts in our sample. DOD awarded 16 contracts, valued at $1.2 billion, to rapidly acquire and provide capabilities to meet urgent needs that, DOD maintained, if not addressed immediately, would seriously endanger personnel or pose a threat to ongoing combat operations. These needs involved two primary capabilities: intelligence, surveillance, and reconnaissance (ISR)—such as airships—and systems to protect against attacks from improvised explosive devices. DOD’s acquisition policy states that urgent operational needs are among the highest priority acquisitions and identifies the urgency exception as one of the tools available to provide urgently needed capabilities to the warfighter more quickly by reducing the amount of time needed to award a contract. In an April 2012 report, GAO found that some DOD programs to meet urgent operational needs were able to reach contract award sooner by relying on urgent noncompetitive awards; however, this reliance could affect the price the government pays. Recognizing the need to quickly deliver capabilities in urgent conditions—sometimes within days or months—DOD policy calls for delivering solutions for urgent needs within 24 months of when the urgent operational need is identified and validated. 
Within the sample of urgent DOD awards we reviewed, many of the capabilities acquired in response to urgent operational needs were fielded within 3 to 20 months of when the requirement was identified. For example, by relying on an existing technology, the Army was able to quickly field 29 surveillance aerostats within 10 months of validating the urgent operational need. DOD contracting officials we spoke with told us they expect decreases in the number of noncompetitive awards on the basis of urgency to meet urgent operational needs because of the drawdown in military operations in Iraq and Afghanistan. Two DOD awards within our sample highlighted the risks of using the urgency exception to competition to award contracts for research and development initiatives to meet immediate combat operation needs. DOD policy identifies efforts best suited for rapid fielding of urgently needed capabilities as those that do not require substantial development effort, are based on proven and available technologies, and can be acquired under a fixed price contract. In the first example, the Air Force’s Blue Devil Block 2 program used the urgency exception to purchase previously unproven technology for improved ISR capability. GAO and the DOD Inspector General have previously reported that the program faced various challenges. In 2009, the Air Force first identified the need for the program following a presentation from the contractor for an airship concept that could take off and land vertically while requiring fewer personnel to assist with landing than traditional airships. In March 2010, the contractor submitted an unsolicited proposal to the Air Force for the development of the Blue Devil Block 2 airship. The Air Force Research Laboratory (AFRL) conducted an assessment of the program and determined that the proposed 24-month schedule was aggressive and likely unachievable. 
As an alternative, AFRL proposed a strategy to develop the airship in 34 months using competitive procedures. In September 2010, the Secretary of Defense designated the procurement of the airship as an urgent need to be rapidly acquired, and set the expectation that it be deployed within 13 months. The following month, AFRL conducted a second assessment of the program and again determined that it was not suitable for rapid fielding within 24 months due to concerns about the technical capability of the contractor and poorly defined requirements. Despite AFRL’s assessment, the Air Force awarded an $86.2 million contract in March 2011 based on urgency for delivery of the airship in January 2012. In an October 2012 report, GAO found that this program experienced significant technical problems resulting in cost overruns and schedule delays that led to termination of the program in June 2012, which was 5 months after the planned fielding date of January 2012. Ultimately, the program was terminated after spending more than $149 million and without fulfilling the urgent requirement for a deployable ISR airship. In the second example, the Air Force awarded a noncompetitive contract in 2011 on the basis of urgency for the development of the Orion unmanned aerial system in response to an urgent operational need for ISR capabilities to support multiple services in Afghanistan. The new system was expected to provide greater uninterrupted flight times than other available systems. The contract was awarded with a period of performance of 13 months; however, the justification did not specify a fielding date. After award, the cost of the contract nearly tripled from the initial estimate of $5 million to a total of about $15 million and the period of performance doubled from 13 months to 26 months. Contracting officials said that cost increases and schedule delays were due to technical problems experienced by the contractor in developing the proposed technology. 
Nearly 3 years since the requirement was validated as an urgent need, this system has not been fielded in Afghanistan and is still under development through a follow-on contract. GAO has previously found that initiatives for urgent operational needs that required technology development often take longer to field. Contracting officials we spoke with said that both programs relied heavily on unproven technologies that required extensive research and development, which contributed, in part, to the cancellation and delay of the Blue Devil and Orion systems, respectively. Further, officials noted that initiatives to respond to urgent operational needs tend to have more successful outcomes when the solutions are based on proven, mature technologies. DOD, State, and USAID awarded 12 noncompetitive contracts when unexpected events threatened the agencies’ ability to continue program support. Referred to as bridge contracts, such awards are typically short term to avoid a lapse in program support while the award of a follow-on contract is being planned. The contract period for the 12 bridge awards in our sample averaged 11 months and collectively they were valued at a total of over $466 million. For the bridge contracts that we reviewed, the delay in awarding a competitive contract was due to unforeseen personnel changes, competitors filing bid protests, and changes in program requirements, among other things. For 10 of the 12 bridge contracts in our sample, agencies awarded the contract to a vendor that had previously performed the work. In one instance, the Air Force’s plans to competitively award a follow-on contract for engineering support services was disrupted by the unexpected loss of the program manager who had specialized expertise and had been working independently on the acquisition strategy for about 1 year. A new program manager was assigned but had difficulty accessing his predecessor’s files which further delayed the acquisition effort. 
The justification cited that a gap in program support would jeopardize ongoing research and expose the Air Force to certain fines and penalties. Faced with the possibility of costly delays estimated to be $1.5 million per month, the Air Force opted to award a noncompetitive bridge contract to the incumbent vendor who was deemed most capable of meeting the program’s requirements within the timeframes needed. Officials determined that the 12-month bridge contract valued at $1.4 million would provide sufficient time to complete market research and other acquisition planning for the subsequent award. In another instance, State awarded a competitive follow-on contract to provide operational and maintenance support services at the U.S. Embassy compound in Baghdad, Iraq, 3 months before the existing contract expired. However, the new contract, which was awarded to the incumbent vendor, was protested, thus preventing the contractor from starting performance on the contract. Citing concerns about the health and safety of 4,200 U.S. government personnel, State awarded a 6-month bridge contract valued at $38 million to the incumbent vendor to ensure continuity of services. The justification cited that transitioning to a contractor other than the incumbent vendor would take at least 90 days because of visa processing, among other things. A USAID program for oversight, outreach, and legislative assistance to the National Assembly of Afghanistan was extended to ensure continued support to the Afghan parliament during the upcoming budget cycle. The contract USAID planned to use to obtain these services was not awarded in time. As a result, USAID awarded a 7-month bridge contract to the incumbent vendor with a value of $5 million to avoid a break in service, with the goal of awarding a competitive follow-on contract during the parliament’s upcoming summer recess. 
USAID determined that for the bridge contract, a new vendor could not transition within the time available due to security clearances and other requirements. Justifications in our sample generally contained the required information; however, some fell short of the FAR requirements and did not obtain the necessary signatures, among other things. Other justifications were written ambiguously in terms of including other facts supporting the use of the urgency exception, such as the nature of the harm to the government. Consistent with acquisition regulations, DOD, State, and USAID prepared written justifications for all of the 34 urgent contracts in our sample that included the required elements specified in the FAR—such as the contracting officer’s determination that the cost will be fair and reasonable and the extent of the agencies’ market research efforts. In addition, nearly all of the justifications that we reviewed were prepared and approved prior to award. The FAR permits agencies to prepare and approve justifications within a reasonable time after contract award when doing so prior to award would unreasonably delay the acquisition. This provision is unique to use of the urgency exception, as preparation and approval of the justification prior to award can delay the quick response time needed to meet urgent needs. Contracting officials told us that they work together with the program office to prepare justifications and, at a minimum, obtain verbal approvals when it is unlikely the justification can be routed to approving officials prior to award. In three instances, however, we found justifications were not signed by the appropriate approving authority as required by the FAR. For example, a USAID justification to award a $5 million contract to support missions in Afghanistan was not signed by the Competition Advocate due to an administrative oversight. 
In a second example, officials at the Defense Logistics Agency did not obtain the necessary written approvals justifying a $32 million award for an emergency fuel purchase, also due to an administrative oversight; however, according to officials, verbal approvals had been obtained from all requisite approving officials, including the head of the contracting activity, prior to award. After we brought this to their attention, DLA officials obtained the necessary signatures on the justification for this contract, even though the contract was complete. The office that awarded this contract has since put a tool in place to route justifications and ensure appropriate signatures are obtained in a timely manner. Lastly, an Air Force justification for a noncompetitive award valued at $130 million was not signed by the Senior Procurement Executive, as required by the FAR. Officials we spoke with could not confirm the reason the procurement executive did not sign the justification, and this individual is no longer employed by the Air Force. We also found four DOD justifications that were not signed in time to meet the FAR requirement to make them publicly available within 30 days of award. While the FAR permits agencies to prepare and approve justifications for contracts awarded using the urgency exception within a reasonable time after award, it does not identify a specific timeframe to do so. However, agencies are required to make justifications for urgent noncompetitive contracts available on the FedBizOpps website within 30 days of award. Thus, agencies would need to prepare and approve justifications within 30 days in order to meet FAR requirements to make them publicly available. DOD contracting officials who administered the awards included in our sample told us it is customary to obtain verbal approvals on justifications for noncompetitive procurements prior to award; but such justifications are not made publicly available without signatures from the appropriate approving official. 
The justifications for these contracts were not signed until 58 to 314 days after award, meaning that the fully approved justifications were not made publicly available within 30 days of award as required by the FAR. In one of these cases, an Army justification for a 3-month noncompetitive award for satellite equipment was not prepared and approved, in writing, until 138 days after award. Citing concerns about compliance, the Army attorney who provided legal review of the justification noted the absence of regulatory procedure or agency policy when approval of the justification has far exceeded the timing requirements for making the justification publicly available. Officials subsequently posted the justification, more than 6 months after the contract award date and 90 days after the contract expired. Some DOD, State, and USAID contracting officials we spoke with emphasized the importance of complying with the posting requirement to provide transparency into agencies’ contracting activity, but others were unsure of the appropriate course of action when approval of the justification does not occur within 30 days. For 15 of the justifications we reviewed, we could not confirm whether justifications were posted to the FedBizOpps website within the required timeframes, or at all, as no documentation was available. The FAR does not require agencies to maintain documentation that the justification was made publicly available; however, we found that in some instances, officials printed the confirmation page from the FedBizOpps website to document compliance with the requirement. After we brought this issue to their attention, officials subsequently posted five justifications. Some officials told us that justifications were not posted due to an administrative oversight. 
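The 30-day posting window lends itself to a mechanical compliance check. A minimal sketch, assuming the award and posting dates are recorded; the dates used in the example are hypothetical:

```python
# Sketch of a check against the FAR requirement to make justifications
# publicly available within 30 days of contract award. The dates below
# are hypothetical, chosen only to illustrate the rule.
from datetime import date

POSTING_WINDOW_DAYS = 30

def posted_on_time(award_date: date, posting_date: date) -> bool:
    """Return True if the justification was made publicly available
    within 30 days of contract award."""
    return (posting_date - award_date).days <= POSTING_WINDOW_DAYS

# A justification posted more than 6 months after award is noncompliant;
# one posted within the month is compliant.
assert not posted_on_time(date(2011, 1, 1), date(2011, 7, 10))
assert posted_on_time(date(2011, 1, 1), date(2011, 1, 28))
```

A check like this would only be as good as the underlying records, which is why documenting the posting date, as some officials did by printing the FedBizOpps confirmation page, matters.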
DOD, State, and USAID have developed guidance to implement FAR requirements for justifications for noncompetitive contracts; however, none of the agencies have included instructions addressing how staff should document compliance with the FAR requirement to make justifications publicly available or what to do when the justification is not approved and ready for posting within 30 days. USAID officials acknowledged the benefit of documenting that the justification was posted to demonstrate the posting requirement was met and told us they would include a process to do so in their guidance to contracting officers. Standards for Internal Control in the Federal Government states that internal controls—such as agency policies and procedures—are an integral part of an organization’s management function that provides reasonable assurance of compliance with laws and regulations. Control activities include the creation and maintenance of records which provide evidence of implementation, such as documentation of transactions that is readily available for inspection. Contracting officials told us that documenting their efforts to make the justification publicly available provides transparency into the steps they took to comply with applicable regulations. Generally, justifications for the 34 contracts in our sample described the nature of the harm—serious financial or other risk—facing the government if the award were to be delayed using traditional competitive procedures. Some justifications were specific about the potential harm to the government while others were ambiguous. The FAR states that agencies may use the urgency exception when delay of the contract award would pose a serious injury—financial or other—to the government. 
The FAR further provides that agencies are to include other facts, such as data, estimated cost, or other rationale as to the extent and nature of the harm to the government—which provides some flexibility about how such risks should be described in the justification. Most of the justifications in our sample—23 of 34—quantified the potential harm to the government if the acquisition was delayed by using competitive procedures in terms of potential dollars lost or schedule delays. The Air Force and the Army have guidance for preparing justifications which states that the most critical aspect of justifications that cite use of the urgency exception is quantifying the nature of the serious injury to the government if the urgent requirement is competed. Accordingly, nearly all of the Air Force and Army justifications we reviewed estimated the potential costs to the government if the urgent award were to be delayed by using competitive procedures. For instance, one Air Force justification for an unmanned aerial system explained that no other vendors had mature technology and estimated that the effort of bringing at least one other vendor to an equivalent capability would cost $5.7 million and result in at least a 3-month delay. Further, the justification stated that it is unlikely these costs would be recovered through competition. Similarly, the justification for an Army contract to obtain new parts for tactical wheeled vehicles to help protect against attacks from improvised explosive devices estimated that DOD would incur $2.5 million in transportation costs if the parts were not acquired by a certain date to meet the time frame of a scheduled ship deployment. By contrast, 11 of the justifications we reviewed were more ambiguous in describing the nature of the harm if a noncompetitive contract was not awarded. 
Some of the ambiguous justifications described the following:

- At Navy, justifications for four procurements to provide goods or services related to persistent ground surveillance systems cited that only one source was available, as other sources would duplicate costs or cause delays. The justifications did not provide additional information about the potential time or dollars to be saved by using the urgency exception.

- At State, one justification cited the need to award a noncompetitive contract using the urgency exception to provide telecommunications support for a meeting of foreign leaders. The justification emphasized the vendor's prior experience in providing similar services, thereby facilitating the agency's mission in the most cost-effective manner. But it did not provide additional information regarding the extent and nature of the harm posed to the government. For such events, according to agency officials, security concerns often hinder the agency's ability to conduct advance planning, thus necessitating the use of the urgency exception.

- Two USAID justifications did not fully describe the harm to the government. The justifications cited the need to comply with a requirement to conduct annual financial audits of local expenses incurred in Afghanistan; however, they did not state the extent and nature of the serious risk of financial or other injury to the government. Officials justified the noncompetitive award by citing a shortage of qualified audit firms in Afghanistan, which could affect USAID's ability to meet auditing requirements. In their view, awarding a noncompetitive contract would provide an opportunity to assess the audit capabilities of the selected contractor and develop a pool of acceptable audit firms to choose from, thereby increasing competition for future procurements. 
The FAR limits the total period of performance of contracts awarded using the urgency exception to 1 year—unless the head of the agency determines that exceptional circumstances apply. This limit provides the time necessary to enter into another contract using competitive procedures, which reduces the risk of overspending. Within our sample of 34 contracts, nearly a third of the contracts—10—had a period of performance of more than 1 year, either established at the time of award or extended during performance of the contract. Two of these contracts exceeded 1 year at award, and the contracting officers did not obtain a determination from the head of the agency, as required by the FAR. The remaining eight contracts were extended beyond 1 year through subsequent modifications, which contracting officials considered separate contract actions that, in their view, would not require a determination by the head of the agency. Treating modifications to contracts awarded on the basis of urgency as separate rather than cumulative contract actions makes it harder for senior department officials to provide oversight over significant increases in contract cost. DOD, State, and USAID conducted limited competitions for 4 of the 34 contracts in our sample by using knowledge from recent competitions to solicit multiple bids. Lack of technical data rights and reliance on the expertise of the contractor limited the agencies' abilities to seek competition. We found that 10 of the 34 contracts in our sample had a period of performance of more than 1 year, either at the time of award or after being extended during performance of the contract. The FAR provides that the contract period should be limited to the time required to meet the urgent need and for the agency to compete and award another contract for the required goods and services. 
In addition, contracts may not exceed 1 year unless the head of the agency, or their designee, determines that exceptional circumstances apply. Limiting the length of noncompetitive awards under the urgency exception helps minimize the risk of overspending while providing sufficient time for the agency to enter into another contract using competitive procedures, according to officials from the OFPP—the office within OMB that provides governmentwide guidance on federal contracting. In 2 of the 10 cases, we found that DOD officials did not seek a determination from the head of the agency or appropriate designee, as required, for awards where it was known at the time of award that the period of performance would be more than 1 year. In one of these instances, at the Navy, the justification estimated a 9-month contract period; however, the award documents indicate a total period of performance of 17 months that included 5 months for delivery of upgraded radio kits to protect against attacks from improvised explosive devices followed by 12 months of engineering support services. Navy officials acknowledged that the period of performance at time of award was ambiguous and said they did not seek a determination because the terms and conditions of the contract had not yet been finalized at the time work began. In the other case, at the Air Force, work on the contract began under a pre-contract cost agreement prior to the contract award date, and the contract identifies the period of performance as beginning on the date the pre-contract cost agreement started. As a result, the period of performance at award was 13 months. Air Force officials told us they did not get the determination because the contract was expected to end within 12 months of the award date and they did not consider the pre-contract cost agreement as part of the period of performance for the contract, contrary to the language in the contract. 
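The duration check at issue in the Air Force case lends itself to a short illustration. The sketch below is hypothetical (the function names and dates are invented, not drawn from any agency system), but it captures the pattern described above: when the contract identifies an earlier effective start, such as a pre-contract cost agreement date, the period of performance can exceed 12 months at award, triggering the FAR requirement for a head-of-agency determination.

```python
from datetime import date

def months_between(start: date, end: date) -> int:
    """Whole months from start to end, counting a partial month as a full one."""
    months = (end.year - start.year) * 12 + (end.month - start.month)
    if end.day > start.day:
        months += 1
    return months

def needs_determination(effective_start: date, end: date) -> bool:
    """The FAR limits urgency awards to 1 year unless the head of the
    agency (or designee) determines exceptional circumstances apply."""
    return months_between(effective_start, end) > 12

# Pattern from the Air Force case: work began under a pre-contract cost
# agreement a month before award, so the effective period is 13 months.
# Dates are illustrative only.
pre_contract_start = date(2011, 1, 1)
period_end = date(2012, 2, 1)
print(needs_determination(pre_contract_start, period_end))  # True
```

Measured from the award date instead of the pre-contract start, the same contract would come in at 12 months and no determination would be flagged, which is the reading Air Force officials applied.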
In the remaining eight instances, we found that DOD, State, and USAID used different approaches to modify the contracts to extend the period of performance beyond 1 year after the initial contract award. While the FAR clearly limits the period of performance to 1 year, it does not address whether the duration requirement applies only at the time of award or also to cases where the contract is modified to extend the period of performance after award. Contracting officials did not seek a determination of exceptional circumstances from the head of the agency or appropriate designee before extending the period of performance beyond 1 year. In four of the eight instances, officials extended the period of performance through contract modifications and no additional action was taken. For the remaining four instances, officials exercised a contract clause to extend the contract or prepared an additional justification to extend the contract beyond 1 year. In two of the eight cases, contracting officials extended the contracts beyond 1 year by using a contract clause that allows agencies to extend services up to 6 months within the limits and at the rates specified in the contract. For urgent, noncompetitive contracts where the 1-year duration requirement applies, the FAR is not clear about whether a determination of exceptional circumstances is necessary when using this clause results in the contract's period of performance exceeding a year. In one of the two cases, for example, State awarded a 1-year bridge contract (a 4-month base contract with two 4-month options) to the incumbent vendor—with an estimated value of $100 million—using the urgency exception to procure local guard services in Afghanistan. The bridge contract was intended to allow time for a new contractor—the awardee of a competitive award—to set up its operations to take over the performance of these services. 
However, during the bridge contract period of performance, State realized that the new awardee could not establish its operations to meet the timeframes required in the contract and therefore defaulted, leading to termination of the competitive award. To allow sufficient time to re-compete the requirement and transition to a new vendor, State used the contract clause to extend the existing bridge contract for an additional 6 months but did not seek a determination of exceptional circumstances. The additional time increased the total period of performance to 18 months and added $78 million to the contract value—which included retention bonuses to provide an incentive for guards to continue working in the event the contract was extended beyond 1 year. The contracting officer did not take into consideration whether using this clause would have any bearing on the requirement to seek a determination of exceptional circumstances for an urgency award exceeding 1 year. Contracting officials we spoke with maintained that an additional determination was not required because the terms of the contract included the clause to extend services. As a result, no additional oversight occurred when the contract’s period of performance exceeded 1 year. In two of the eight cases, we found that contracting officials extended the period of performance beyond a year after award through the preparation of additional justifications for other than full and open competition. Contracting officials at DOD, State, and USAID stated their belief that urgency contracts could be extended without a determination of exceptional circumstances by the head of agency when using this approach. In their view, each additional justification represents an individual contract action; thus, there is no need to consider the cumulative effect of a modification resulting in a period of performance beyond 1 year. 
In one of these instances, the Army TACOM Life Cycle Management Command identified an urgent operational need for 513 medium armored security vehicles and related support, valued at more than $757 million, to support the Afghanistan National Army. To meet the timeframes for delivery, officials determined that the circumstances warranted use of a noncompetitive award on the basis of urgency to one contractor with prior experience on similar acquisitions. After identifying a multi-year requirement to provide vehicles to the Afghanistan National Army, contracting officials planned to fulfill the requirement in multiple phases. For the first phase, the Army decided to purchase 73 vehicles and prepared a noncompetitive justification citing the urgency exception for an award valued at $85.1 million—which is $0.4 million below the level that requires approval from the Senior Procurement Executive. The first phase was awarded in January 2011 as an undefinitized contract action, which allows the contractor to start the work quickly without negotiating all the terms and conditions for a contract. However, 3 months into the performance of the contract, another justification for a modification to the undefinitized contract action was approved by the Senior Procurement Executive using a different exception, citing that only one responsible source could fulfill the procurement of the remaining 440 vehicles needed to meet the requirement. The additional justification for the second phase, valued at more than $576 million, added 3 years (a base year and two 1-year options), extending the procurement to 2014. According to the contracting officer, a determination from the head of the agency was not obtained because the second phase is considered a separate action rather than cumulative; thus, the period of performance under the urgency justification would not exceed 1 year. 
Splitting the requirement between two justifications allowed the initial contract award to remain within the required 1-year period of performance. However, when the subsequent modification, justified on the grounds that only one source was capable, extended the contract beyond a year, the head of the agency performed no additional oversight to determine whether exceptional circumstances applied. See appendix III for additional information about the 10 contracts in our sample where the period of performance was more than 1 year. In addition to extending the period of performance through the preparation of additional justifications, we found instances where agencies also increased funding on urgency contracts after award to well beyond the original contract value. For seven contracts in our sample, total obligations increased by more than 30 percent from the original estimate in the justification that was approved to award a noncompetitive contract using the urgency exception. In these cases where total obligations grew considerably, contracting officials did not alert senior procurement officials when these increases occurred. In one example, Navy officials modified a contract to provide training and other support services for a surveillance system several times, and its value grew to more than three times the original estimate. The contract value at time of award was $30 million, and the justification was signed by the head of the procuring activity, as required by the FAR for awards greater than $12.5 million but not exceeding $85.5 million. However, after award, three modifications added a total of $31 million to the contract, primarily due to cost overruns. Additionally, four modifications with accompanying justifications—because the work being added was determined to be outside the original scope of the contract—added $12 million each time, bringing the total contract value to more than $109 million, as illustrated in figure 5. 
The four supplemental justifications were each approved by the competition advocate, as required. However, because each justification is considered individually without regard to the cumulative value of the contract, the senior procurement executive did not have an oversight mechanism to be made aware of the cost growth on this contract. If this contract had been estimated to be $109 million at the time of award, the senior procurement executive would have had oversight through approval of the justification. The FAR does not provide guidance to contracting officials on the degree of oversight senior agency officials should exercise when contract modifications significantly raise the cumulative dollar value of a contract awarded using the urgency exception. Using multiple modifications to increase funding or extend the period of performance and treating these as separate rather than cumulative contract actions make it harder for senior department officials to provide oversight. Competition advocates at DOD, State, and USAID had differing views about whether contract modifications to extend the period of performance should be considered cumulatively. Some thought the actions should be viewed cumulatively because doing so provides greater oversight, while others thought the actions should be considered independent of each other. OFPP officials told us that, generally, the total period of performance of urgent contracts should be considered cumulatively, either at time of award or when the contract is modified to extend the period of performance to more than 1 year. While OFPP officials stated that the period of performance should be considered cumulatively, the FAR does not specify what to do when these contracts are modified to extend beyond 1 year, and no other guidance addresses these situations. 
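The oversight gap in the Navy example can be made concrete with a minimal sketch. The FAR ties the approval level to the dollar value of the individual justification, not to the running total of the contract; the code below is a hypothetical illustration (the function and tier labels are invented for this sketch) showing how the cumulative value can cross a higher approval tier even though no single justification does.

```python
# Approval tiers as cited in the Navy example: justifications greater
# than $12.5M but not exceeding $85.5M go to the head of the procuring
# activity; above $85.5M, to the senior procurement executive.
TIERS = [
    (85_500_000, "senior procurement executive"),
    (12_500_000, "head of the procuring activity"),
    (0, "competition advocate"),
]

def approver_for(value: int) -> str:
    """Approval level for a single justification of the given value."""
    for floor, approver in TIERS:
        if value > floor:
            return approver
    return TIERS[-1][1]

# Navy surveillance-support contract: $30M at award, $31M in cost-overrun
# modifications without new justifications, then four out-of-scope
# modifications of $12M each, each with its own justification.
justified = [30_000_000] + [12_000_000] * 4
cost_overruns = 31_000_000
cumulative = sum(justified) + cost_overruns    # $109M total

print(approver_for(30_000_000))   # head of the procuring activity
print(approver_for(12_000_000))   # competition advocate
print(approver_for(cumulative))   # senior procurement executive
```

Evaluated per action, nothing rises above the head of the procuring activity; evaluated cumulatively, the $109 million total would have required senior procurement executive approval, which is the visibility the report finds missing.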
As the entity responsible for federal procurement policy, OFPP is best suited to clarify when determinations of exceptional circumstances are needed when extending the period of performance on an urgency contract beyond 1 year. Additionally, in the absence of an oversight mechanism for noncompetitive contracts awarded under the urgency exception with significant increases in value over time, senior procurement officials are not assured of the transparency necessary to help strengthen accountability in these situations. Standards for Internal Control in the Federal Government calls for organizations to maintain proper controls that ensure transparency and accountability for stewardship of government resources. Even in urgent situations, agencies are required to seek offers from as many potential sources as practicable given the circumstances, and some programs in our sample were able to use prior expertise or market knowledge to hold limited competitions. For 4 of the 34 contracts in our sample, DOD and State sought competition by seeking offers from firms with which the agency had prior experience through recent procurements and reasonably believed could perform the work in the time available. For example, to avoid an unanticipated disruption in supplying fuel to the government of Israel, DLA solicited four firms the agency had worked with on similar acquisitions in the past. In the justification, DLA cited that having the benefit of market information from prior competitive awards helped the agency reduce the time it would normally take to compete the new procurement. In another instance, State had an existing indefinite delivery indefinite quantity contract in place for the purchase of ballistic resistant doors for embassy security. During embassy renovations it was determined that the required specification could not be met through the negotiated terms of the existing contract. 
State officials conducted a limited competition among three vendors approved under the existing contract. Further, State justified this limited competition as the preferred cost-effective method because these three vendors had been the lowest bidders on similar procurements, thus limiting the risk of overspending. For 10 DOD contracts in our sample, the government was unable to compete requirements because of a lack of access to technical data packages or proprietary data, coupled with the urgency of the requirement. In some instances, program officials explored the possibility of acquiring the data only to learn that the package was not available for sale or would be cost-prohibitive. In our prior work, DOD described the acquisition of technical data for weapon systems, such as design drawings and specifications, as critical to enabling competition throughout a system's life cycle. Within our sample, we found one instance in which DLA was in the process of negotiating the purchase of the technical data, but the purchase could not be completed in a timeframe that would have allowed competition for the urgent requirement. While DLA was not able to benefit from the purchase of the technical data package for the current award, the agency would be better positioned to compete future procurements. For more than a decade we have reported on the limitations to competition when DOD does not purchase technical data rights and the increased costs that result. In May 2013, DOD implemented guidance for program managers to consider acquiring technical data rights as part of acquisition planning. In examining the procurement history for two contracts in our sample, we found one DOD program involving an aerostat, the Persistent Threat Detection System (PTDS), that spanned 10 years without achieving competition for the acquisition of the aerostat. 
The Army has awarded six noncompetitive contracts for the PTDS program since 2004 on the basis of urgency to the same contractor, as illustrated in figure 6. While awarding the fourth PTDS contract in February 2011, the Army identified six capable sources and determined that competition was viable for long-term non-urgent requirements. However, the Army determined that only one contractor with prior experience could satisfy the urgent requirement at that time. In May 2011, the Army awarded a noncompetitive urgency contract to that contractor for 29 aerostats and cited the future need for spares in fiscal year 2012 when additional funding would be available. The senior procurement executive approved the May 2011 urgency justification nearly a year after award—due to confusion about changes in the review process—on the condition that all future related procurements be competed. However, by the time the senior procurement executive approved the justification for the May 2011 urgent award, the Army had already made the follow-on urgent award for spares in December 2011. While the Army identified six sources that were capable of competing for and providing the PTDS aerostat in 2011, it ultimately awarded a 3-year noncompetitive contract to the same contractor valued at $306 million on the basis of "only one responsible source" because the government did not own the technical data packages. Despite the recurring nature of the requirement, Army officials reported that it was difficult to plan for competition because each requirement was short term in nature. In 2013, DOD concurred with our recommendation to develop guidance to enable DOD components to apply lessons learned from past procurements to increase competition for the same goods and services. This recommendation was, in part, intended to help senior department officials capture the benefit of information on past procurements when approving individual justifications for subsequent noncompetitive awards. 
To address concerns about missed opportunities to learn why past acquisitions were not competed and to help remove barriers to competition in future procurements, in April 2013 the Air Force implemented a new policy to include justifications from predecessor acquisitions as a reference document for justification approving officials. Air Force officials observed that contracting officers were splitting requirements across multiple justifications at lower approval thresholds, which reduced oversight by higher level approving officials. The Navy has a similar policy in place, and DLA officials told us they are planning to implement a similar process. The Army, which has added increased scrutiny of justifications, particularly for urgency awards, as one of its goals for improving competition, is in the process of revising its guidance for preparing justifications to include a process similar to that of the Air Force. The benefits of competition—such as cost savings and improved contractor performance—in acquiring goods and services from the private sector are well documented. Awarding a noncompetitive contract on the basis of unusual and compelling urgency is necessary in select circumstances. However, these contracts should be limited in duration to minimize the amount of time that the government is exposed to the risks of contracts that are awarded quickly without the benefits of competition. Mechanisms for transparency and oversight of these contracts—such as posting justifications publicly and a determination that exceptional circumstances apply to extend the contract period of performance beyond 1 year—are necessary to ensure that they are used only when no other option is available and to promote competition in the future. Transparency and oversight during performance of the contract, particularly when adding significant time or money, help ensure that the government is making sound decisions in the best interest of taxpayers. 
In light of this, having OFPP provide guidance to clarify when determinations are needed when extending the period of performance on an urgency contract could help achieve consistent implementation of the duration requirement across the government. Additionally, having agencies develop mechanisms to ensure that senior procurement officials are made aware of noncompetitive contracts with significant increases in value, particularly those that were not initially approved at the senior procurement executive level, could help to increase transparency in noncompetitive awards and strengthen oversight. And finally, although the data show that DOD, State, and USAID buy a relatively small amount of goods and services noncompetitively on an urgent basis, maintaining reliable data is critical to ensuring that agencies can effectively manage the use of this exception. To help improve reporting of federal procurement data and strengthen oversight of contracts awarded on the basis of an unusual and compelling urgency, we recommend the Secretaries of Defense and State and the Administrator of the U.S. Agency for International Development take the following four actions:

- Provide guidance to contracting staff on the correct procedures for accurately reporting competition data for contracts using simplified acquisition procedures that are awarded on an urgent basis, and DOD should re-emphasize existing guidance.

- Establish a process for documenting that justifications were posted in compliance with the requirements in the FAR.

- Provide guidance to contracting staff on what actions to take when required signatures are not obtained in order to post the justifications within 30 days.

- Develop an oversight mechanism for when the cumulative value of noncompetitive contracts awarded on the basis of unusual and compelling urgency increases considerably beyond the initial contract award value. 
To help ensure consistent implementation of the FAR requirement to limit the period of performance for noncompetitive contracts using the unusual and compelling urgency exception, we recommend that the Director of the Office of Management and Budget, through the Office of Federal Procurement Policy, take the following action: Provide guidance to clarify when determinations of exceptional circumstances are needed when a noncompetitive contract awarded on the basis of unusual and compelling urgency exceeds 1 year, either at time of award or modified after contract award. We provided a draft of this report to DOD, State, USAID, and OMB for their review and comment. We received written comments from DOD, State and USAID, which are reproduced in appendices IV through VI. OMB provided comments via email. The agencies generally agreed with the recommendations and in most cases described planned actions in response. We also received technical comments from DOD, which we incorporated as appropriate. DOD concurred with three recommendations and partially concurred with one recommendation. In written comments, DOD stated that the Defense Procurement and Acquisition Policy (DPAP) office will issue guidance to the contracting activities to remind them of instructions on completing data fields in FPDS-NG for simplified acquisition procedures, and to clarify documentation related to posting of justifications and actions to take when approval signatures are not obtained within the 30-day posting requirement. These actions are responsive to three of our four recommendations. For the fourth recommendation, DOD’s proposed action to issue guidance emphasizing that the cumulative value of a contract should be considered when obtaining approval to increase the value of a contract awarded under the urgency exception does not fully address our recommendation. 
Although this guidance could be helpful, the recommendation was to develop an oversight mechanism for when contracts awarded under the urgency exception increase in value considerably over the course of time. The oversight mechanism would provide higher level contracting officials with visibility into awards that grow in small dollar increments that do not meet the thresholds for a new justification. One example of how this mechanism could be implemented would be to require approval at a higher level when the cumulative contract value increases by a certain percentage. In written comments, State agreed with the recommendations, stating it will seek to implement them by August 30, 2014. State did not provide any further details on implementation plans. In its written response, USAID concurred with the recommendations, and outlined steps it will take in response to them, including assessing and updating current guidance, policy and training components. In an email response, OMB agreed that there is a need for clarification regarding the use of exceptional circumstance determinations when contracts awarded using the urgency exception exceed 1 year. These officials further noted that they intend to work with the FAR Council, which updates the FAR, to discuss the issues raised in the report about the current FAR language and the best way to address those issues. We are sending copies of this report to the Director of OMB, the Secretaries of Defense and State, and the Administrator of the U.S. Agency for International Development, and interested congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or martinb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made key contributions to this report are listed in appendix VII. The objectives for this review were to examine (1) the pattern of use of the unusual and compelling urgency exception, including the range of goods and services acquired by the Departments of Defense (DOD) and State and the U.S. Agency for International Development (USAID); (2) the reasons that agencies awarded noncompetitive contracts on the basis of urgency and the extent to which justifications met requirements in the Federal Acquisition Regulation (FAR); and (3) the extent to which agencies limited the duration of the contract and achieved competition. To address these objectives, we used data in the Federal Procurement Data System-Next Generation (FPDS-NG), the government's procurement database, to identify DOD, State, and USAID obligations for noncompetitive contracts awarded using the unusual and compelling urgency exception. We selected a non-generalizable, random sample of 62 contracts by using data from FPDS-NG to analyze which components within DOD, State, and USAID had the most obligations using the urgency exception for fiscal years 2011 and 2012, which, at the time, were the most recent fiscal years for which data were available. To narrow our focus on which contracts to include in our review, we identified the contracting offices within DOD, State, and USAID that had the largest total obligations for contracts reported as being awarded using the urgency exception under the "reason not competed" field in the FPDS-NG database. We then selected contracts that represented a mix of large and smaller dollar awards and types of products and services procured. Table 2 below shows the contracting offices and the number of contracts included in our review. To assess patterns in DOD's, State's, and USAID's use of the unusual and compelling urgency exception to competition, we analyzed data from FPDS-NG. 
We included contracts and task orders coded as using the urgency exception under the field "reason not competed" from fiscal years 2010 through 2012, which represented the time period after the requirement to limit the duration of urgent contracts went into effect and reflected the most current reliable data to show trends over time. We determined that a contract was miscoded if it was coded in FPDS-NG as being awarded under the urgency exception, but our analysis of the contract file documentation showed that the contract was awarded using other procedures—such as the streamlined procedures under the simplified acquisition threshold—generally $150,000—or other exceptions to full and open competition, such as national security. We analyzed DOD, State, and USAID obligation data and the types of goods and services based on product code fields. We assessed the reliability of FPDS-NG data by (1) performing electronic testing of required data elements, (2) reviewing existing information about the data and the system that produced them, and (3) comparing reported data to information from contract files that we sampled. In our analysis we excluded awards that were below $150,000 because of the high likelihood that these procurements followed simplified acquisition procedures, which are separate from the procedures that apply to the urgency exception. Taking this approach allowed us to account for known data limitations. Thus, we determined that the federal procurement data were sufficiently reliable to examine patterns in DOD's, State's, and USAID's use of the urgency exception. To compare use of the urgency exception with other exceptions to full and open competition, we conducted an analysis of the other values listed under the "reason not competed" field. 
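The screening step described above can be sketched in a few lines. This is a hypothetical illustration, not the actual analysis code; the record layout and coded values are invented stand-ins for the FPDS-NG "reason not competed" field and obligation amounts.

```python
# Hypothetical records standing in for FPDS-NG rows; the real database
# uses different field names and coded values.
awards = [
    {"reason_not_competed": "URGENCY", "obligation": 2_400_000},
    {"reason_not_competed": "URGENCY", "obligation": 90_000},
    {"reason_not_competed": "ONLY_ONE_SOURCE", "obligation": 5_000_000},
]

# Awards below the simplified acquisition threshold (generally $150,000)
# were excluded because such procurements likely followed simplified
# acquisition procedures rather than the urgency exception.
SIMPLIFIED_ACQUISITION_THRESHOLD = 150_000

def urgency_awards(records):
    """Keep urgency-coded awards at or above the threshold."""
    return [
        r for r in records
        if r["reason_not_competed"] == "URGENCY"
        and r["obligation"] >= SIMPLIFIED_ACQUISITION_THRESHOLD
    ]

print(len(urgency_awards(awards)))  # 1
```

In this toy data, one award is dropped for being below the threshold and one for citing a different exception, mirroring how the analysis both excluded small awards and flagged miscoded records for file review.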
In addition, we reviewed FPDS-NG instructions to identify protocols for entering contract data in the database and interviewed DOD, State, and USAID contracting officials and FPDS-NG subject matter experts about their procedures and processes for entering data in the procurement database. To assess the reasons that agencies awarded noncompetitive contracts on the basis of urgency and the extent to which justifications met FAR requirements, we performed an in-depth review of contract files for 34 selected contracts that we determined—based on our review of contract file documentation and interviews with contracting officials—were awarded using the unusual and compelling urgency exception. Of the 62 contracts that we initially selected for our sample, we narrowed our analysis to 34 because 28 of the contracts were miscoded in FPDS-NG. For these 34 contracts, we reviewed contract file documentation, such as acquisition plans, justifications, and other documents agencies used to seek approval to limit competition on the selected contracts, to determine agencies' rationale for using the urgency exception. We also reviewed DOD, State, and USAID guidance regarding the preparation and approval of justifications to use the urgency exception. Additionally, we reviewed the FAR, along with agency policies and guidance, to inform our analysis of the extent to which justifications met FAR requirements, such as ensuring that the justifications were signed by the appropriate individuals and made publicly available within the required time frames. We compared agencies' policies and procedures with the Standards for Internal Control in the Federal Government, which calls for documentation of transactions that is readily available for inspection, thus providing evidence of implementation.
We interviewed contracting and acquisition policy officials, procurement attorneys, program officials, and competition advocates at DOD, State, and USAID to discuss the facts and circumstances regarding use of the urgency exception and agency policies and procedures to implement FAR requirements for publicly posting justifications. To determine the extent to which agencies complied with the FAR requirement to limit the total period of performance of a contract awarded using the urgency exception to no more than 1 year unless the head of the agency makes a determination of exceptional circumstances, we reviewed contract file documents for the 34 selected contracts that we identified as awarded using the urgency exception. We also conducted legal research and interviewed contracting and acquisition policy officials at DOD, State, and USAID on the implementation of the duration requirement. We reviewed contract file documentation, such as contract awards, to determine the estimated period of performance. Further, we reviewed contract modifications and additional justifications prepared after award to determine the actual period of performance and the actions taken by the agencies when extending the period of performance for urgency contracts, such as preparing additional justifications for other than full and open competition or exercising the option to extend services. We also interviewed officials from the Office of Federal Procurement Policy within OMB to obtain their perspectives on the approaches that DOD, State, and USAID used to address the FAR requirement to limit the period of performance or obtain a determination from the head of the agency that exceptional circumstances necessitate a period of performance greater than 1 year. We also analyzed contract documents to identify instances where DOD, State, and USAID increased funding for the 34 contracts in our sample.
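The estimated-versus-actual duration comparison described above amounts to checking whether a contract's period of performance, after any modifications, runs past 1 year from its start. The sketch below illustrates that check with hypothetical dates; it is not GAO's method, and it ignores calendar edge cases (e.g., a February 29 start date).

```python
from datetime import date

def exceeds_one_year(start, end):
    """True if the period of performance runs past 1 year from start.
    Simplification: assumes start is not Feb 29 of a leap year."""
    one_year_later = start.replace(year=start.year + 1)
    return end > one_year_later

# Hypothetical contract: a 9-month estimated period of performance,
# later extended by modification to 15 months.
start = date(2011, 3, 1)
estimated_end = date(2011, 12, 1)
modified_end = date(2012, 6, 1)

print(exceeds_one_year(start, estimated_end))  # False: within 1 year as awarded
print(exceeds_one_year(start, modified_end))   # True: exceeds 1 year after modification
```

The second case mirrors the pattern the report describes: contracts awarded within the 1-year limit that crossed it only through post-award modifications.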
We assessed the implications of inconsistent implementation of the FAR requirement to limit the period of performance of urgent contracts and the absence of an oversight mechanism to monitor increases in contract value against criteria in the Standards for Internal Control in the Federal Government. These criteria call for organizations to maintain proper controls that ensure transparency and accountability for stewardship of resources. To determine the extent to which DOD, State, and USAID achieved competition with the use of the urgency exception, we interviewed contracting officials and reviewed contract documents, such as acquisition plans and price negotiation memorandums, to identify the barriers to competition that agencies cited and to assess the extent to which the agencies solicited offers from multiple vendors. We conducted this performance audit from March 2013 through March 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Of the 62 contracts included in our sample, we determined that 34 were awarded using the unusual and compelling urgency exception to competition. Table 3 below provides an overview of the contracts in our sample, including the awarding agency, a description of the item procured, and the total dollars obligated. In addition, the table highlights the circumstances that, according to the Departments of Defense (DOD) and State and the U.S. Agency for International Development (USAID), led to use of the urgency exception, such as the need to avoid a gap in program support or to meet an urgent operational need.
Figure 7 highlights the items procured and the estimated versus actual duration of the 10 contracts we found with a period of performance of more than 1 year. In addition to the contact named above, Tatiana Winger, Assistant Director, and Candice Wright, Analyst-in-Charge, managed this review. MacKenzie Cooper, Beth Reed Fritts, and Erin Stockdale made significant contributions to the work. Julia Kennon and Alyssa Weir provided FPDS-NG data analysis expertise and legal support, respectively. Roxanna Sun provided graphics support.

Competition is a critical tool for achieving the best return on the government's investment. Federal agencies are generally required to award contracts competitively but are permitted to award noncompetitive contracts under certain circumstances, such as when requirements are of such an unusual and compelling urgency that the government would suffer serious financial or other injury. Contracts that use the urgency exception to competition must generally be no longer than 1 year in duration. The conference report for the National Defense Authorization Act for Fiscal Year 2013 mandated that GAO examine DOD's, State's, and USAID's use of this exception. For the three agencies, GAO assessed (1) the pattern of use; (2) the reasons agencies awarded urgent noncompetitive contracts and the extent to which justifications met FAR requirements; and (3) the extent to which agencies limited the duration of these contracts. GAO analyzed federal procurement data, interviewed contracting officials, and analyzed a non-generalizable sample of 62 contracts with a mix of obligation levels and types of goods and services procured across the three agencies. The Departments of Defense (DOD) and State and the U.S. Agency for International Development (USAID) used the urgency exception to a limited extent, but the reliability of some federal procurement data elements is questionable.
For fiscal years 2010 through 2012, obligations reported under urgent noncompetitive contracts ranged from less than 1 percent to about 12 percent of all noncompetitive contract obligations. During that time, DOD obligated $12.5 billion noncompetitively to procure goods and services using the urgency exception, while State and USAID obligated $582 million and about $20 million, respectively, almost exclusively to procure services. Among the items procured were personal armor, guard services, and communications equipment to support missions in Afghanistan and Iraq. GAO found coding errors that raise concerns about the reliability of federal procurement data on the use of the urgency exception. Nearly half—28 of the 62 contracts in GAO's sample—were incorrectly coded as having used the urgency exception when they did not. GAO found that 20 of the 28 miscoded contracts were awarded using simpler procedures that are separate from the requirements related to the use of the urgency exception. Ensuring the reliability of procurement data is critical because these data are used to inform procurement policy decisions and facilitate oversight. For the 34 contracts in GAO's sample that were properly coded as having used the urgency exception, agencies cited a range of urgent circumstances, primarily the need to meet urgent requirements for combat operations or to avoid unanticipated gaps in program support. The justifications and approvals—which are required by the Federal Acquisition Regulation (FAR) to contain certain facts and rationale to justify use of the urgency exception to competition—generally contained the required elements; however, some were ambiguous about the specific risks to the government if the acquisition were delayed. Ten of the 34 contracts in GAO's sample had a period of performance of more than 1 year—8 of which were modified after award to extend the period of performance beyond 1 year.
The FAR limits contracts using the urgency exception to 1 year in duration unless the head of the agency or a designee determines that exceptional circumstances apply. Agencies did not make this determination for the 10 contracts. The FAR is not clear about what steps agencies should take when a contract is modified after award to extend the period of performance beyond 1 year. Some contracting officials noted that these modifications are treated as separate contract actions and would not require the determination by the head of the agency or designee. Others considered them cumulative actions requiring the determination. Standards for Internal Control in the Federal Government calls for organizations to maintain proper controls that ensure transparency and accountability for stewardship of government resources. The Office of Federal Procurement Policy (OFPP)—which provides governmentwide policy on federal contracting procedures—is in a position to clarify when the determination of exceptional circumstances is needed to help achieve consistent implementation of this requirement across the federal government. Further, under the urgency exception, the FAR requires agencies to seek offers from as many vendors as practicable given the circumstances. For some contracts in GAO's sample, lack of access to technical data rights and reliance on contractor expertise prevented agencies from obtaining competition. GAO recommends that DOD, State, and USAID provide guidance to improve data reliability and oversight for contracts awarded using the urgency exception. GAO also recommends that OFPP provide clarifying guidance to ensure consistent implementation of regulations. The agencies generally agreed with the recommendations.
FFMIA is part of a series of financial management reform legislation enacted since the early 1980s. This series of legislation started with the Federal Managers' Financial Integrity Act of 1982 (FMFIA), which the Congress passed to strengthen internal controls and accounting systems throughout the federal government, among other purposes. Issued pursuant to FMFIA, the Comptroller General's Standards for Internal Control in the Federal Government provides the standards directed at helping agency managers implement effective internal control, an integral part of improving financial management systems. Internal control plays a major role in managing an organization and comprises the plans, methods, and procedures used to meet missions, goals, and objectives. In short, internal control helps government program managers achieve desired results through effective management of public resources. Effective internal control also helps in managing change to cope with shifting environments and evolving demands and priorities. As programs change and agencies strive to enhance operational processes and implement new technological developments, management must continually assess and evaluate its internal control to ensure that the control activities being used are effective and updated when necessary. While agencies had achieved some early success in identifying and correcting material internal control and accounting system weaknesses, their efforts to implement FMFIA had not produced the results intended by the Congress or sufficiently improved general and financial management in the federal government. Therefore, beginning in the 1990s, the Congress passed additional management reform legislation to achieve these management improvements in the federal government.
This legislation includes the (1) CFO Act of 1990, (2) Government Performance and Results Act of 1993, (3) Government Management Reform Act of 1994, (4) FFMIA, (5) Clinger-Cohen Act of 1996, (6) Accountability of Tax Dollars Act of 2002, (7) Improper Payments Information Act of 2002, (8) Federal Information Security Management Act of 2002 (FISMA), and (9) Department of Homeland Security Financial Accountability Act of 2004. The combination of reforms ushered in by these laws, if successfully implemented, provides a solid foundation to improve the accountability of government programs and operations as well as to routinely produce valuable cost and operating performance information. These financial management reform laws reflect the importance of improving financial management of the federal government. Appendixes II through VI include details on the various requirements, guidance, standards, and checklists that support federal financial management. OMB sets governmentwide financial management policies and requirements, as well as guidance related to FFMIA. OMB Circular No. A-127, Financial Management Systems, defines the policies and standards prescribed for executive agencies to follow in developing, operating, evaluating, and reporting on financial management systems. OMB Circular No. A-127 references the series of publications entitled federal financial management systems requirements, issued by the Joint Financial Management Improvement Program (JFMIP) as the primary source of governmentwide requirements for financial management systems. Federal financial management systems requirements, among other things, provide a framework for establishing integrated financial management systems to support program and financial managers. Appendix III lists the series of federal financial management systems requirements published to date. 
In a January 4, 2001, memorandum, Revised Implementation Guidance for the Federal Financial Management Improvement Act, OMB provided guidance for agencies and auditors to use in assessing substantial compliance. The guidance describes the factors that should be considered in determining whether an agency's systems substantially comply with FFMIA's three requirements and provides guidance to agency heads to assist in developing corrective action plans for bringing their systems into compliance with FFMIA. The guidance also includes examples of the types of indicators that should be used as a basis for assessing whether an agency's systems are in substantial compliance with FFMIA requirements. In addition, we have worked in partnership with the President's Council on Integrity and Efficiency (PCIE) to develop and maintain the joint GAO/PCIE Financial Audit Manual (FAM). The FAM presents a methodology that auditors may, but are not required to, use to perform financial statement audits of federal entities in accordance with professional standards and includes sections that provide specific procedures auditors should perform when assessing FFMIA compliance. These sections include guidance and detailed audit steps for testing agency financial management systems' substantial compliance with the requirements of FFMIA. The FAM guidance on FFMIA assessments recognizes that while financial statement audits offer some assurance on FFMIA compliance, auditors should design and implement additional testing to satisfy FFMIA criteria. For the most part, testing for compliance with FFMIA can be accomplished efficiently as part of the work done to understand agency systems during the internal control phase of the audit, and the FAM provides specific guidance on what additional testing should be performed, such as tests of information system controls and nonsampling control tests.
The purpose of financial management systems goes beyond providing the data necessary to comply with various financial reporting requirements; it extends to routinely producing reliable, useful, and timely financial information that federal managers can use for day-to-day decision-making purposes. Recognizing that decision makers can benefit from a better understanding of the challenges and opportunities associated with federal financial management systems, the Comptroller General convened a forum on improving federal financial management systems in December 2007. According to several participants, producing accurate financial statements should be viewed as a by-product of effective business processes and financial management systems. We have consistently provided this perspective for a number of years in our prior reports. As in prior years, the auditors' fiscal year 2007 FFMIA assessments for the 24 CFO Act agencies illustrate that many agencies still do not have effective financial management systems, including processes, procedures, and controls in place that can produce reliable, useful, and timely financial information with which to make informed decisions on an ongoing basis. The primary goal of FFMIA is for agencies to improve their financial management systems so that financial information from these systems can be used to help manage agency programs more effectively and enhance the ability to prepare auditable financial statements. Auditors reported a change in their FFMIA assessment for five agencies, whose systems they determined were no longer in substantial noncompliance for fiscal year 2007. The auditors noted that these agencies took corrective action in the areas of compliance with federal accounting standards and federal financial management systems requirements.
However, in light of the significant deficiencies and problems that auditors are still reporting, we remain concerned that the criteria for substantial compliance with FFMIA requirements are not well defined or consistently implemented across the 24 CFO Act agencies. Recognizing that decision makers can benefit from a better understanding of the challenges and opportunities associated with federal financial management systems, the Comptroller General convened a forum on December 11, 2007. The forum brought together knowledgeable and recognized financial management leaders from the federal government, including the CFO, Chief Information Officer, and IG communities, and selected other officials with extensive experience in financial management from both the public and private sectors. One of the themes from the forum was that federal financial management leaders should refocus their efforts on understanding and meeting program managers' financial information requirements, not simply on meeting financial reporting compliance requirements. Despite agencies' emphasis on meeting financial reporting compliance requirements, about two-thirds of forum participants (21 of 33 respondents) agreed that financial management systems provide little or no information that is reliable, useful, and timely enough to assist managers in their day-to-day decision making, which is the ultimate goal of FFMIA. Figure 1 shows that 19 of the 24 CFO Act agencies received an unqualified opinion on their financial statements in fiscal year 2007. However, for 8 of these 19 agencies, auditors reported that systems did not substantially comply with one or more of the three FFMIA requirements and that significant problems exist, as discussed later.
According to auditors, some of these 8 agencies have been able to obtain unqualified audit opinions only through extensive labor-intensive efforts, which include using ad hoc procedures, expending significant resources, and making billions of dollars in adjustments to derive financial statements. This is usually the case when agencies have inadequate systems that are neither integrated nor routinely reconciled. These time-consuming procedures must be combined with sustained efforts to improve agencies' underlying financial management systems and controls. Forum participants said that producing accurate financial statements should be viewed as a by-product of effective business processes and financial management systems. We have expressed a similar viewpoint in the past. If agencies continue year after year to rely on costly and time-intensive manual efforts to achieve or maintain unqualified opinions, the Congress and others may be misled as to the true status of agencies' financial management systems capabilities. While work performed in auditing financial statements naturally offers some perspective regarding FFMIA compliance, the work necessary to assess systems' substantial compliance with the FFMIA requirements has a complementary but broader focus than that performed for purposes of rendering an opinion on the financial statements. In performing financial statement audits, auditors generally focus on the capability of the financial management systems to process and summarize financial information that flows into the financial statements. For purposes of FFMIA, however, financial management systems include the systems, processes, procedures, and controls that produce the information management uses day to day, not just the systems that produce annual financial statements.
Thus, according to the FAM, to report on system compliance with FFMIA, the auditor should understand the design of, and test as needed, the financial management systems (including the financial portion of any mixed systems) used for managing financial operations; supporting financial planning, management reporting, and budgeting activities; and accumulating and reporting cost information. Several forum participants expressed concern that because of the efforts devoted to preparing financial reports and meeting financial reporting compliance requirements, finance organizations have not focused sufficient attention on understanding and meeting the financial management needs of their own program managers. FFMIA was designed to lead to system improvements that would result in agency managers routinely having access to reliable, useful, and timely financial-related information to measure performance and increase accountability throughout the year. If significant adjustments are made at year end for financial statement reporting purposes, then management has more than likely been operating with inaccurate data throughout the year. According to auditors, the majority of federal agencies' financial management systems are still not in substantial compliance with FFMIA requirements. As shown in figure 2, auditors reported that 13 of the 24 CFO Act agencies' systems did not substantially comply with one or more of the three FFMIA requirements for fiscal year 2007. This compares with 17 agencies whose systems were reported as not substantially compliant with FFMIA requirements for fiscal year 2006. Based on our review of the fiscal year 2007 audit reports for the 13 agencies reported to have systems not in substantial compliance with one or more of FFMIA's three requirements, noncompliance with federal financial management systems requirements was the deficiency auditors cited most frequently among the three FFMIA requirements.
Auditors for six CFO Act agencies reported a change in the FFMIA assessment for fiscal year 2007. The auditor for one agency, EPA, changed its FFMIA assessment from no instances of substantial noncompliance in fiscal year 2006 to substantial noncompliance with FFMIA requirements for fiscal year 2007 because of problems related to security over certain information systems. The auditors for five agencies reported that the agencies' financial management systems were no longer in substantial noncompliance. As discussed later, auditors for these five agencies noted that the agencies took corrective action in the areas of federal accounting standards and federal financial management systems requirements. However, as we have previously reported, we remain concerned that the criteria for substantial compliance are not well defined or consistently implemented across the 24 CFO Act agencies. In light of the significant deficiencies and problems that auditors are still reporting, it appears that agencies and auditors may be interpreting OMB's January 4, 2001, FFMIA guidance to mean that if an agency has no material weaknesses, it is in substantial compliance with the three FFMIA requirements. Further, we caution that the number of agencies reported as noncompliant may be even greater because all but one auditor provided negative assurance. As we have previously reported, when auditors express negative assurance, they are not saying that they determined the systems to be substantially compliant, but rather that the work performed did not identify instances of noncompliance. Therefore, the auditors may not have identified all instances of noncompliance with FFMIA requirements and included all problems in their reports.
Based on our review of the fiscal year 2007 audit reports for the 13 agencies reported to have systems not in substantial compliance with one or more of FFMIA's three requirements, noncompliance with federal financial management systems requirements was the deficiency auditors cited most frequently among the three FFMIA requirements. To better understand the underlying issues regarding agencies' noncompliance with federal financial management systems requirements, we divided this requirement into four problem areas: nonintegrated systems, inadequate reconciliation procedures, lack of accurate and timely recording, and weak security over information systems. These four problem areas related to federal financial management systems requirements, plus the two areas related to noncompliance with the SGL and lack of adherence to federal accounting standards, result in six problem areas. Figure 3 shows the number of agencies with problems reported in each of the six areas for fiscal year 2007. The weaknesses reported by the auditors ranged from serious, pervasive systems problems to less serious problems that may affect only one aspect of an agency's accounting operation. While at some agencies the problems were so serious that they affected the auditor's opinion on the agency's financial statements, at other agencies the auditors cited problems that represented significant deficiencies in the design or operation of internal control but were determined not to be material to the financial statements taken as a whole. Table 1 illustrates the potential effect these six types of problems can have on an agency's financial management. For example, the auditor for the Department of the Treasury reported that IRS personnel rely on resource-intensive compensating procedures to prepare its financial statements in a timely manner because of serious internal control and financial management systems deficiencies.
These challenges affect IRS's ability to fulfill its responsibilities as the nation's tax collector because its managers lack the accurate, useful, and timely financial information and sound controls needed to make fully informed decisions day to day and to ensure ongoing accountability. For fiscal year 2007, auditors for five agencies no longer reported a lack of substantial compliance with FFMIA requirements. While auditors reported that improvements at those five agencies were because of agency-implemented corrective actions, in some cases it appears that varying interpretations of OMB's FFMIA guidance on the definition of "substantial compliance" may have played a role. The implementation guidance provides indicators of substantial compliance, such as whether an agency's "audit disclosed no material weaknesses in internal control that affect the agency's ability to prepare financial statements and related disclosures." However, this indicator addresses only the federal accounting standards requirement of FFMIA, not the federal financial management systems or SGL requirements. We are concerned that auditors are interpreting OMB's FFMIA implementation guidance to mean that if an agency has no material weaknesses in controls over financial reporting, it is compliant with FFMIA. For example, the auditor for Justice told us that the reduction of certain material weaknesses to significant deficiencies was a factor in the change in its FFMIA assessment to substantial compliance. Based primarily on the information contained in the agencies' performance and accountability reports, the following summarizes how auditors determined that five agencies were no longer substantially noncompliant in fiscal year 2007. Department of Energy—The Department of Energy's auditor reported one material weakness related to FFMIA noncompliance in the area of federal accounting standards at the end of fiscal year 2006.
For fiscal year 2006, the auditor reported that the department did not properly account for obligations and undelivered orders, which affected the accuracy, validity, and completeness of these account balances. During fiscal year 2007, the department reported that it took corrective actions, including, but not limited to, improving several reports and related reconciliation processes. Because of these efforts, the auditor reported that the corrective actions related to the material weakness on obligations and undelivered orders were fully implemented and considered the finding closed in fiscal year 2007. In fiscal year 2007, Energy had a significant deficiency related to network vulnerabilities and weaknesses in access and other security controls in the department's unclassified computer information systems. The auditor concluded that its tests disclosed no instances in which the department's financial management systems did not substantially comply with the three FFMIA requirements for fiscal year 2007. Department of the Interior—At the end of fiscal year 2006, the Department of the Interior's auditor reported FFMIA-related findings in the area of federal accounting standards that resulted in the department not being in substantial compliance with FFMIA requirements. The findings included a material weakness related to controls over Indian Trust Funds and a reportable condition on the improper disclosure of the condition of museum collections. According to management, Interior implemented corrective actions during fiscal year 2007, including closing 9,400 probate cases and deploying the Trust Asset and Accounting Management System. As a result, the auditor reduced the Indian Trust Fund finding to a significant deficiency and closed the museum collection finding entirely in fiscal year 2007.
While the department invested a significant amount of resources to improve its controls over Indian Trust Funds, the auditor noted that Interior needs to continue its efforts to resolve historical differences and to improve procedures and internal controls for entering and maintaining Trust Fund information. In addition, the auditor reported a repeat significant deficiency related to general and application controls over financial management systems. The auditor stated that the previously mentioned findings were not significant enough to warrant concluding that the department was substantially noncompliant with FFMIA requirements for fiscal year 2007. Department of Justice—Justice's auditor reported FFMIA-related findings in the area of federal financial management systems requirements that resulted in the department not being in substantial compliance with FFMIA requirements at the end of fiscal year 2006. One of these findings was a repeat material weakness related to the department's financial management systems' general and application controls. According to the auditor, during fiscal year 2007, three of the four components that had long-standing material weaknesses in this area—the United States Marshals Service, the Office of Justice Programs, and the Bureau of Alcohol, Tobacco, Firearms and Explosives—made enough improvements to internal controls over their information system environments to reduce the finding from a material weakness to a significant deficiency. Some of the reported corrective actions included increasing security awareness and training and implementing a stronger password-setting policy.
In its commentary and summary of Justice’s annual financial statement for fiscal year 2007, the Justice IG made the following comment about Justice’s financial system environment: “Inadequate, outdated, and in some cases non-integrated financial management systems do not provide certain automated financial transaction processing activities that are necessary to support management’s need for timely and accurate financial information throughout the year.” In the IG’s 2007 list of top management and performance challenges facing Justice, the IG also reported that “the Department’s efforts over the past few years to implement the Unified Financial Management System (UFMS) to replace the seven major accounting systems currently used throughout the Department have been subject to fits and starts. Three years after the Department selected a vendor for the unified system it has made little progress in deploying the UFMS. The Department notes that problems with funding, staff turnover, and other competing priorities have caused the delays in implementing the UFMS. Until that time, Department-wide accounting information will have to continue to be produced manually, a costly process that undermines the Department’s ability to prepare financial statements that are timely and in accordance with generally accepted accounting principles. 
Furthermore, the Federal Bureau of Investigation and United States Marshals Service will not be able to achieve compliance with the FFMIA requirement to record all activity at the United States Standard General Ledger transaction level until the UFMS has been fully implemented.” We also noted that all nine of Justice’s components still had at least one significant deficiency or material weakness related to general and application controls, and five out of the nine components had findings related to a second significant deficiency on improving internal controls to ensure that transactions are properly recorded, processed, and summarized to permit the preparation of financial statements in accordance with generally accepted accounting principles. Justice’s auditor concluded that its tests disclosed no instances in which the department’s financial management systems did not substantially comply with the three requirements of FFMIA for fiscal year 2007.

Department of Labor—Labor’s auditor reported FFMIA-related reportable conditions in the area of federal financial management systems requirements, which resulted in the department not being in substantial compliance with FFMIA requirements at the end of fiscal year 2006. These reportable conditions related to lack of strong application controls over access to and protection of financial information, lack of strong logical security controls to secure Labor’s networks and information, and weaknesses noted in the change control process for a benefits system. In addition, Labor’s fiscal year 2006 FISMA report identified a significant deficiency related to a mixed system. According to management, Labor pursued an aggressive remediation process during fiscal year 2007 by revising computer security guidance and performing access controls testing and evaluation for all major information systems.
Labor’s auditors noted improvements in the areas of general computer controls related to a Labor benefits system, controls over the mixed system cited in the fiscal year 2006 FISMA report, and updated policies and procedures. For fiscal year 2007, Labor’s auditors reported two prior-year reportable conditions as one significant deficiency related to the lack of adequate controls over access to key financial and support systems. Specifically, the auditor noted issues that were present in multiple financial systems across the department: inactive accounts were not disabled or deleted in a timely manner; generic accounts existed on systems; and access to sensitive files, directories, or software was not restricted. According to the auditor, each access control issue mentioned presented a reasonable possibility of adversely affecting Labor’s ability to initiate, authorize, record, process, or report financial data. The auditor also reported that these access control weaknesses could lead to users with inappropriate access to financial systems; inefficient processes; lack of completeness, accuracy, or integrity of financial data; and/or the lack of detection of unusual activity within financial systems. As a result of the access control weaknesses identified, the IG reported an access control significant deficiency in conjunction with its testing of compliance with FISMA for fiscal year 2007. However, the IG’s fiscal year 2007 FISMA report did not communicate significant deficiencies for specific systems; instead, it reported significant deficiencies by control type, grouping all affected systems together in each deficiency. As a result, Labor’s auditors stated that they could not determine the severity of deficiencies for any individual financial or mixed system. Labor’s auditor concluded that the department complied, in all material respects, with the requirements of FFMIA as of September 30, 2007.
Small Business Administration (SBA)—At the end of fiscal year 2006, auditors reported that SBA was not substantially compliant with FFMIA requirements in the area of federal financial management systems requirements. The related finding involving weak information technology security controls was characterized as a reportable condition. During fiscal year 2007, the auditor noted improvements in formalizing policies and procedures over granting users emergency access, documenting reviews of remote users, and formalizing continuity of operations plans. Despite the identified improvements, the auditors continued to report issues in the areas of security access controls, software program changes, and end-user computing, and reported the condition as a significant deficiency. For fiscal year 2007, the auditor reported no instances in which the agency’s financial management systems did not substantially comply with the three requirements of FFMIA.

We have previously reported that auditors have expressed a need for clarification on the definition of “substantial compliance” with FFMIA. Further, when asked to what extent agreement exists on the definition of substantial compliance with FFMIA, 20 of 35 participants at the December 2007 Comptroller General forum stated that agreement exists to little or no extent, while the other 15 believed agreement exists to a moderate extent. We initially recommended that OMB clarify its guidance on the meaning of substantial compliance in our report covering FFMIA fiscal year 2000 results and have reiterated this recommendation since then; OMB began taking action on it this year. In prior years, OMB responded that it had been focusing on various initiatives, and it agreed to consider clarifying the definition of “substantial compliance” in future policy and guidance updates. Last year OMB stated that in its update to Circular No.
A-127, Financial Management Systems, its goal would be to simplify FFMIA compliance requirements as well as to better balance the FFMIA objectives of generating audited financial statements and providing meaningful information for decision makers. Accordingly, OMB agreed to take this recommendation under advisement and is currently revising its guidance. As we have previously reported, without a consistent agreement and application of a common definition of substantial compliance, the status of agency financial management systems’ compliance remains uncertain. Although OMB’s January 4, 2001, FFMIA implementation guidance includes examples of compliance indicators, we found in the past that several agency auditors used the indicators as a checklist for determining an agency’s systems compliance. In our view, a checklist approach is inappropriate for assessing the substantial compliance of agency systems, including processes, procedures, and controls with FFMIA. This approach also does not meet the expectations of the Congress in requiring the auditor to insist on rigorous adherence to the accounting standards in reporting whether the agency’s financial management systems substantially comply with the three requirements of FFMIA. Congress also expected that the audit community would discharge this compliance function consistent with established practices of the profession and the exercise of sound professional judgment. A comprehensive approach that considers key systems’ functionalities, such as tests of information system controls and nonsampling control tests, is essential for auditors to obtain assurance that the agencies’ systems provide reliable, useful, and timely information for decision makers on an ongoing basis throughout the year and not just for preparing year-end financial statements. OMB’s guidance lays out the key factors that auditors should consider when assessing whether an agency’s systems are substantially compliant. 
OMB’s guidance calls for auditors to use their professional judgment when considering factors such as whether the system provides reliable and timely information for managing current operations; whether assets are accounted for so they can be properly protected from loss, misappropriation, or destruction; and whether a system’s performance prevents an agency from meeting specific FFMIA requirements. Nonetheless, because auditors have focused their consideration of FFMIA substantial compliance on issues related to the financial statement audit, it is important that the meaning of substantial compliance be clarified and refocused to include other aspects in addition to financial statement audit results. While we agree that the use of professional judgment is critical, we continue to believe that a consensus is needed on what constitutes substantial compliance. In regard to our previous recommendation that OMB explore further clarification of the definition of “substantial compliance,” OMB is in the process of revising OMB Circular No. A-127 and its FFMIA implementation guidance. In May 2008, OMB issued a draft Circular No. A-127 for CFO Council review and comment. In our comments on the draft Circular No. A-127, we reemphasized our concerns with, among other things, the need for an appropriate definition of substantial compliance that focuses on financial management systems’ capabilities beyond financial statement preparation. OMB is considering the comments received; as of August 2008, it had not issued a public draft but plans to finalize the guidance in October 2008.

Auditors’ FFMIA assessments pointed out that many of the CFO Act agencies have significant problems with the financial management systems in use today. For agencies to achieve FFMIA compliance, they need to implement systems that give them reliable, useful, and timely information and do so using disciplined processes.
The modernization of federal financial management systems has been a long-standing challenge at many federal agencies. Past systems implementation attempts have failed to deliver the promised capability on time and within budget. For example, we reported in September 2004 that HHS did not effectively implement the best practices needed to reduce the risks associated with the implementation of a new system. Three years later, auditors report HHS continues to experience problems converting to the system. In part to combat the past failures of individual agencies’ efforts, OMB developed the FMLOB initiative, which involves standardizing business processes and data elements governmentwide and leveraging common solutions through a competitive environment of shared service providers to which agency financial management systems can be migrated. The initiative is intended to enable seamless data integration across agencies and avoid costly and redundant investment in “in-house” financial management system solutions. Although OMB continues to make progress on this initiative and priorities have been developed to focus efforts through December 2009, some aspects of the initiative have taken longer than OMB expected. We have previously reported concerns with this initiative, such as the lack of a concept of operations and the need for a clear plan for migrating agencies to shared service providers. Similarly, participants at the forum expressed uncertainties about the previous goal of migrating the majority of agencies to a shared service provider by 2011.

One of our ongoing and consistent reporting themes has been that the modernization of federal financial management systems has been a long-standing challenge at many federal agencies across the government.
While the development of a financial management system can never be risk free, effective implementation of best practices in systems development and implementation efforts (commonly referred to as disciplined processes) can reduce those risks to acceptable levels. Nevertheless, agency efforts far too often result in systems that do not meet their cost, schedule, and performance goals. While agencies anticipate that the new systems will provide reliable, useful, and timely data to support managerial decision making, our work and that of others have shown that this has often not been the case. For example, modernization efforts at DOD, HHS, and DHS have been hampered by agencies not following disciplined processes. As we reported in July 2007, the Army’s approach for investing about $5 billion over the next several years in its General Fund Enterprise Business System, Global Combat Support System-Army Field/Tactical, and Logistics Modernization Program did not include alignment with the Army enterprise architecture or use of a portfolio-based business system investment review process. Moreover, we reported that the Army’s lack of a concept of operations has contributed to its failure to take full advantage of business process reengineering opportunities that are available when using an enterprise resource planning solution. Further, the Army did not have reliable processes, such as an independent verification and validation function, or analyses, such as economic analyses, to support its management of these programs. We concluded that until the Army adopts a business system investment management approach that provides for reviewing groups of systems and making enterprise decisions on how these groups will collectively interoperate to provide a desired capability, it runs the risk of investing significant resources in business systems that do not provide the desired functionality and efficiency.
As we previously reported in September 2004, HHS did not follow key disciplined processes necessary to reduce the risks associated with implementing the Unified Financial Management System (UFMS) to acceptable levels. We identified problems in such key areas as requirements management, including developing a concept of operations, data conversion, and risk management. Three years later, in fiscal year 2007, HHS’s auditors reported that serious financial system issues continue as a result of conversion problems. For example, over 800 entries, exceeding $170 billion, had to be manually recorded into UFMS; more than $1 billion in transactions were inappropriately posted; and a cumbersome, manual process is used to compile the department’s financial statements. Sustained efforts will be necessary to overcome these continuing serious weaknesses.

In June 2007, we reported that DHS lacked a financial management strategy that included a formal strategic financial management plan to implement or migrate to an integrated system. In addition, DHS’s concept of operations did not contain an adequate description of the legacy systems and a clear articulation of the vision that should guide the department’s improvement efforts, and key requirements developed for the project were unclear and incomplete. Since then, DHS has developed a strategy to consolidate the department’s financial systems down to two platforms—SAP and Oracle. However, according to a recent DHS IG report, DHS did not perform a complete analysis of all potential systems and service providers as part of its process to select a financial systems solution. As a result of a bid protest, in a March 17, 2008, ruling, the United States Court of Federal Claims held that DHS’s sole-source procurement for financial systems application software had violated a provision in the Competition in Contracting Act requiring full and open competition using competitive procedures and required DHS to conduct a competitive procurement.
In response to this decision, DHS is revisiting its financial systems consolidation strategy. As illustrated by these examples, more discipline is needed in implementation efforts to avoid the problems that can occur when best practices are not followed. Similarly, participants at the forum stated that it is time to start putting into practice the lessons learned from previous implementation efforts. As part of an effort to begin confronting these challenges, forum participants offered a range of perspectives, insights, and examples. Experience related to financial management, human capital management, systems ownership, customization of commercial off-the-shelf software, and the purchase of shared services has provided useful insights that should help financial managers avoid some of the obstacles that impeded past projects. Consistent with our long-held views, financial managers at the forum also identified various useful system implementation practices, including conducting independent verification and validation and periodically reevaluating system implementation projects.

In March 2004, OMB launched the FMLOB initiative, in part, to improve the outcome of financial management system implementations so that agencies have systems that ensure ongoing accountability and generate reliable, useful, and timely information for the decision-making purposes emphasized by FFMIA. Since then, OMB and FSIO have continued to make gradual progress toward achieving FMLOB goals by issuing a common governmentwide accounting classification structure, a financial services assessment guide, and standard business processes for funds and payment management, as well as developing other planning tools designed to leverage these standards and shared solutions.
However, additional efforts are needed to address recommendations included in our March 2006 report regarding key aspects of this initiative, such as developing an overall concept of operations, identifying and defining additional standard business processes, and ensuring that agencies do not continue to develop and implement their own stove-piped systems. We are currently working with OMB to gain a more in-depth understanding of FMLOB-related efforts and progress toward addressing these recommendations. The following provides an overview of the status of OMB’s efforts and concerns identified in these areas.

Developing a concept of operations. A concept of operations would provide a useful tool to explain how financial management systems can operate cohesively in conjunction with other related systems and to help minimize an agency’s individualized, stove-piped efforts. Participants attending the forum confirmed our concerns regarding the need for this important tool, pointing out that OMB’s various lines of business initiatives are serving to preserve existing stovepipes. For example, participants said it is unclear why separate lines of business are needed for budget and financial management. Although OMB officials told us that a draft concept of operations document is currently under review, the extent to which these concerns will be addressed is unclear.

Identifying and implementing standard business processes. In a January 2008 memo summarizing FMLOB efforts and priorities, OMB recognized the risk associated with implementing business processes that are not standardized across government as well as the need to develop additional guidance and other tools. Specifically, the memo states that once business standards have been completed, incorporated into core financial system requirements, and tested during the FSIO software qualification and certification process, shared service providers will only be permitted to utilize the certified products as configured.
To date, two standard business processes have been issued, and OMB expects three additional standard business processes to be finalized in December 2008. Recognizing the need to further develop FMLOB guidance and tools, OMB identified priorities for the remainder of 2008 and 2009, which include identifying and developing additional business standards (e.g., interface data elements), expanding migration planning guidance, finalizing and developing cost and performance measurements related to FMLOB business standards, incorporating these standards into core financial systems requirements and software, and updating related testing methodology and scenarios.

Establishing a clear migration path. To ensure that agencies migrate to a shared service provider in accordance with OMB’s stated approach rather than attempt to develop and implement their own stove-piped business systems, we previously recommended that OMB establish a clear migration path or timetable for future migrations. OMB estimates for when migrations will be completed are evolving, and no firm time frames have been set. OMB’s general guidance is that agencies should migrate to a shared service provider when it is cost effective to do so and they have maximized the return on investment in the current system. Although OMB previously established a goal of migrating the majority of agencies toward the use of shared service providers by 2011, more recent information indicates that agency migrations will take longer than OMB expected. For example, participants attending the forum appeared uncertain regarding the ability of their respective organizations to reach this goal by 2011. OMB’s more recent estimate indicates that many migrations are scheduled through fiscal year 2015 and that some agency migrations have not yet been scheduled. Agency migrations to a shared service provider are an important aspect of achieving FMLOB goals.
Even without a clear migration path, some agencies may readily migrate to a shared service provider to minimize the tremendous undertaking of implementing or significantly upgrading a financial system. However, other agencies may continue efforts to implement stand-alone systems that place additional resources at risk because of potential financial system implementation failures. Further, other concerns that exist within the federal financial management community include the availability of sufficient resources, the viability of the initiative on a governmentwide basis, and the potential loss of control of critical financial functions. For example, none of the forum participants believed the resources available to implement the initiative are fully adequate. Some participants also indicated that some financial leaders may be reluctant to transition their agencies’ financial management activities on a wholesale basis because of their fear of losing control of critical functions and lack of trust in a shared service provider’s ability to effectively meet their needs.

Shifting financial management leaders’ focus from meeting financial reporting compliance requirements to comprehending and meeting program managers’ financial information requirements is key to more meaningful, value-added financial management. Given our nation’s current fiscal environment, reliable financial information for prudent and forward-thinking decision making is imperative. If properly developed, implemented, and managed, financial management systems can provide essential financial data in support of day-to-day managerial decision making—the ultimate goal of FFMIA. To accomplish this goal, CFO Act agencies must continue to strive toward routinely producing not only annual financial statements that can pass the scrutiny of a financial audit, but also other meaningful financial and performance data.
Over a decade has passed since the enactment of FFMIA, and the majority of agencies continue to lack financial management systems—including processes, procedures, and controls—that substantially comply with the requirements of the act, even though the majority of agencies are achieving “clean” audit opinions. Consistent and diligent OMB commitment and oversight toward achieving financial management system capabilities and the common goals of the FMLOB initiative and FFMIA are essential. In our view, the indicators included in OMB’s guidance are not a substitute for the rigorous criteria needed for assessing substantial compliance with FFMIA. While we are not making any new recommendations in this report, we will continue to work with OMB to help ensure that it provides agency management and auditors with the guidance needed to bring about reliable and consistent assessments of, and meaningful improvements in, financial management systems as envisioned by FFMIA. Accordingly, we reiterate our prior recommendation for OMB to clarify its guidance on the meaning of “substantial compliance” with FFMIA. Significant and long-standing obstacles remain for developing and implementing effective financial management systems, including processes, procedures, and controls. It is important that emphasis on correcting these deficiencies be sustained by the current administration as well as the new administration that will take office next year. Continued congressional oversight will also be crucial in transforming federal financial management systems.

We received written comments (reprinted in app. VII) from the Deputy Controller, Office of Federal Financial Management, Office of Management and Budget, on a draft of this report. In its comments, OMB agreed with our assessment that federal agencies still need to improve their financial systems so that reliable, useful, and timely financial management information is available for decision making.
OMB stated that it was working aggressively to assist agencies in building a strong foundation of financial management practices through OMB’s financial management and systems oversight and under the FMLOB initiative. According to OMB, both efforts support the goals of FFMIA to improve governmentwide financial management and to facilitate timely and reliable information for day-to-day management. While OMB stated that the number of noncompliances with FFMIA was reduced to 10, compared to 12 for the previous year, that number differed from our report findings because OMB’s number was based on the assessments made by the 24 CFO Act agency heads rather than by the independent auditors, as we reported. With regard to our prior recommendation for guidance that clarifies the definition of substantial compliance, OMB has begun a significant rewrite of OMB Circular No. A-127, Financial Management Systems, as well as an update to OMB’s implementation guidance for FFMIA. OMB stated the rewrite of OMB Circular No. A-127 will clarify the definition of substantial compliance so that auditors and agency heads interpret the guidance more consistently. We will continue to work with OMB by providing comments and recommendations on the draft so that a clear definition of substantial compliance with FFMIA is developed and to address other concerns. OMB also provided technical comments on a draft of this report that we incorporated as appropriate. In addition, we received technical comments from several agencies cited in the report and incorporated them as appropriate.
We are sending copies of this report to the Chairman and Ranking Member, Subcommittee on Federal Financial Management, Government Information, Federal Services, and International Security, Senate Committee on Homeland Security and Governmental Affairs, and to the Chairman and Ranking Member, Subcommittee on Government Management, Organization, and Procurement, House Committee on Oversight and Government Reform. We are also sending copies to the Director, Office of Management and Budget; the heads of the 24 CFO Act agencies in our review; and agency CFOs and Inspectors General. Copies will be made available to others upon request. In addition, this report will be available at no charge on the GAO web site at http://www.gao.gov. This report was prepared under the direction of Kay L. Daly, Acting Director, Financial Management and Assurance, who may be reached at (202) 512-9095 or dalykl@gao.gov if you have any questions. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII.

We reviewed the fiscal year 2007 financial statement audit reports for the 24 Chief Financial Officer (CFO) Act agencies contained in their performance and accountability reports. We further analyzed and compiled the auditors’ assessments of agency financial systems’ compliance and the problems that affect Federal Financial Management Improvement Act (FFMIA) compliance. We did not re-perform the auditors’ work, as it was beyond the scope of this engagement. To determine whether the data were sufficiently reliable, we performed the following procedures.
We gained an understanding of the independence and quality control environments at the respective auditors that made the agencies’ FFMIA assessments; leveraged our understanding of the methodology used by the inspectors general (IG) and their contract auditors in past years to reach conclusions on FFMIA compliance at the respective agencies; considered management responses to the auditors’ findings and conclusions; and conducted interviews to improve our understanding of the procedures applied and/or conclusions drawn, where appropriate. We also reviewed the data for obvious inconsistencies or errors, completeness, and changes from the prior year. When we found data that were inconsistent or incomplete, we brought them to the attention of the cognizant IG staff or contract auditor and worked with them to resolve any issues before using the data as a basis for this report. When we encountered data that varied from the prior year, we reviewed the performance and accountability report and auditor’s report to determine the reason for the change. We conducted interviews with the auditors and IG staffs and obtained selected supporting documentation. Based on these actions, we determined that the data from these reports were sufficiently reliable for the purposes of using the work of other auditors in our report. Using the auditors’ reports for the 13 of the 24 CFO Act agencies that auditors reported as noncompliant with FFMIA for fiscal year 2007, we identified problems reported by the auditors that affected agency systems’ compliance with FFMIA. The problems identified in these reports are consistent with long-standing financial management weaknesses we have reported based on our work at a number of agencies. Further, we identified other GAO and IG reports that discussed financial management systems issues and analyzed and summarized the reports.
In addition, we analyzed the results and information obtained from the recent Comptroller General’s forum on improving the federal government’s financial management systems. We also met with Office of Management and Budget (OMB) officials to discuss their current efforts to improve federal financial management and address our prior recommendations related to FFMIA. In addition, we reviewed documentation provided by OMB regarding its current initiatives. We conducted this performance audit from December 2007 to September 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We requested comments on a draft of this report from the Director, Office of Management and Budget, or his designee. We received written comments from the Deputy Controller. OMB’s comments are discussed in the Agency Comments and Our Evaluation section and reprinted in appendix VII. We also received technical comments from OMB, which we incorporated as appropriate. In addition, we provided relevant excerpts from a draft of this report to agencies specifically cited in the report. Several agencies provided technical comments, which we incorporated as appropriate.

Congress enacted the Federal Financial Management Improvement Act (FFMIA) in 1996 to obtain the benefits of effective federal financial management that would flow from enforced implementation of three earlier 1990s financial management developments.
First, the Chief Financial Officers Act of 1990 (CFO Act), as expanded by the Government Management Reform Act of 1994, initiated significant financial management reform at 24 major agencies by establishing a centralized agency financial management leadership structure and imposing financial discipline through required annual agencywide audited financial statements. Second, the Joint Financial Management Improvement Program (JFMIP) in 1995 issued revised Core Financial System Requirements, which set out the functional and technical requirements for an agency’s core financial system. Third, the Federal Accounting Standards Advisory Board (FASAB), which was established in 1990, had made significant progress after 6 years of work in developing the federal government’s first set of comprehensive financial accounting standards and concepts designed to meet the needs of federal agencies and users of federal financial information. Moreover, FFMIA requires implementation of the U.S. Government Standard General Ledger (SGL). The SGL is intended to improve data stewardship throughout the federal government, enabling consistent reporting at all levels within the agencies and providing comparable data and financial analysis governmentwide. Even with these improvements, the Senate Committee on Governmental Affairs, which considered the legislation resulting in FFMIA, stated that federal agencies’ financial management systems were inadequate and could be improved by using the audit process established by the CFO Act to assure that federal agencies would implement and maintain financial management systems that apply the applicable federal financial management systems requirements and federal accounting standards. The policies and standards prescribed for executive agencies to follow in developing, operating, evaluating, and reporting on financial management systems are defined in OMB Circular No. A-127, Financial Management Systems. 
The components of an integrated financial management system include the core financial system, managerial cost accounting system, administrative systems, and certain programmatic systems. Administrative systems are those that are common to all federal agency operations, and programmatic systems are those needed to fulfill an agency’s mission. OMB Circular No. A-127 refers to the series of publications entitled federal financial management systems requirements, initially issued by JFMIP’s Program Management Office, as the primary source of governmentwide requirements for financial management systems. However, as of December 2004, the Financial Systems Integration Office (FSIO) assumed responsibility for coordinating the work related to federal financial management systems requirements, and OMB’s Office of Federal Financial Management (OFFM) is responsible for issuing new or revised regulations. In December 2004, the JFMIP Principals—the Comptroller General of the United States, the Secretary of the Treasury, and the Directors of OMB and the Office of Personnel Management—voted to modify the roles and responsibilities of JFMIP, resulting in the creation of FSIO. Appendix III lists the federal financial management systems requirements published to date. Figure 4 is the current model that illustrates how these systems interrelate in an agency’s overall systems architecture. FASAB promulgates federal accounting standards and concepts that agency chief financial officers use in developing financial management systems and preparing financial statements. FASAB develops the appropriate federal accounting standards and concepts after considering the financial and budgetary information needs of the Congress, executive agencies, and other users of federal financial information and comments from the public. 
FASAB forwards the standards and concepts to the Comptroller General, the Director of OMB, the Secretary of the Treasury, and the Director of the Congressional Budget Office for a 90-day review. If, within 90 days, neither the Comptroller General nor the Director of OMB objects to the standard or concept, then it is issued and becomes final. FASAB announces finalized concepts and standards in The Federal Register. The American Institute of Certified Public Accountants designated the federal accounting standards promulgated by FASAB as being generally accepted accounting principles for the federal government. This recognition enhances the acceptability of the standards, which form the foundation for preparing consistent and meaningful financial statements both for individual agencies and the government as a whole. Currently, there are 32 Statements of Federal Financial Accounting Standards (SFFAS) and 5 Statements of Federal Financial Accounting Concepts (SFFAC). The concepts and standards are the basis for OMB’s guidance to agencies on the form and content of their financial statements and for the government’s consolidated financial statements. Appendix IV lists the concepts, standards, interpretations, and technical bulletins, along with their respective effective dates. FASAB’s Accounting and Auditing Policy Committee (AAPC) assists in resolving issues related to the implementation of federal accounting standards. AAPC’s efforts result in guidance for preparers and auditors of federal financial statements in connection with implementation of federal accounting standards. To date, AAPC has issued nine technical releases, which are listed in appendix V along with their release dates. The SGL was established by an interagency task force under the direction of OMB and mandated for use by agencies in OMB and Treasury regulations in 1986. 
The SGL promotes consistency in financial transaction processing and reporting by providing a uniform chart of accounts and pro forma transactions used to standardize federal agencies’ financial information accumulation and processing throughout the year, enhance financial control, and support budget and external reporting, including financial statement preparation. The SGL is intended to improve data stewardship throughout the federal government, enabling consistent reporting at all levels within the agencies and providing comparable data and financial analysis governmentwide.

SFFAS No. 21, Reporting Corrections of Errors and Changes in Accounting Principles, Amendment of SFFAS 7, Accounting for Revenue and Other Financing Sources
SFFAS No. 22, Change in Certain Requirements for Reconciling Obligations and Net Cost of Operations, Amendment of SFFAS 7, Accounting for Revenue and Other Financing Sources
SFFAS No. 23, Eliminating the Category National Defense Property, Plant, and Equipment
SFFAS No. 24, Selected Standards for the Consolidated Financial Report of the United States Government
SFFAS No. 25, Reclassification of Stewardship Responsibilities and Eliminating the Current Services Assessment
SFFAS No. 26, Presentation of Significant Assumptions for the Statement of Social Insurance: Amending SFFAS 25
SFFAS No. 27, Identifying and Reporting Earmarked Funds
SFFAS No. 28, Deferral of the Effective Date of Reclassification of the Statement of Social Insurance: Amending SFFAS 25 and 26
SFFAS No. 29, Heritage Assets and Stewardship Land
SFFAS No. 30, Inter-Entity Cost Implementation, Amending SFFAS 4, Managerial Cost Accounting Standards and Concepts
SFFAS No. 31, Accounting for Fiduciary Activities
SFFAS No. 32, Consolidated Financial Report of the United States Government Requirements: Implementing Statement of Federal Financial Accounting Concepts 4, “Intended Audience and Qualitative Characteristics for the Consolidated Financial Report of the United States Government”

In addition to the contact named above, Michael S. LaForge, Assistant Director; F. Abe Dymond, Assistant General Counsel; Rosalinda Cobarrubias; Francine DelVecchio; Tiffany Epperson; Lauren S. Fassler; Jim Kernen; Sheila D. Miller; and Patrick Tobo made key contributions to this report.

Financial Management: Long-standing Financial Systems Weaknesses Present a Formidable Challenge. GAO-07-914. Washington, D.C.: August 3, 2007.
Federal Financial Management: Critical Accountability and Fiscal Stewardship Challenges Facing Our Nation. GAO-07-542T. Washington, D.C.: March 1, 2007.
Financial Management: Improvements Under Way but Serious Financial Systems Problems Persist. GAO-06-970. Washington, D.C.: September 26, 2006.
Financial Management: Achieving FFMIA Compliance Continues to Challenge Agencies. GAO-05-881. Washington, D.C.: September 20, 2005.
Financial Management: Improved Financial Systems Are Key to FFMIA Compliance. GAO-05-20. Washington, D.C.: October 1, 2004.
Financial Management: Recurring Financial Systems Problems Hinder FFMIA Compliance. GAO-04-209T. Washington, D.C.: October 29, 2003.
Financial Management: Sustained Efforts Needed to Achieve FFMIA Accountability. GAO-03-1062. Washington, D.C.: September 30, 2003.
Financial Management: FFMIA Implementation Necessary to Achieve Accountability. GAO-03-31. Washington, D.C.: October 1, 2002.
Financial Management: Effective Implementation of FFMIA Is Key to Providing Reliable, Useful, and Timely Data. GAO-02-791T. Washington, D.C.: June 6, 2002.
Financial Management: FFMIA Implementation Critical for Federal Accountability. GAO-02-29. Washington, D.C.: October 1, 2001. 
Financial Management: Federal Financial Management Improvement Act Results for Fiscal Year 1999. GAO/AIMD-00-307. Washington, D.C.: September 29, 2000.
Financial Management: Federal Financial Management Improvement Act Results for Fiscal Year 1998. GAO/AIMD-00-3. Washington, D.C.: October 1, 1999.
Financial Management: Federal Financial Management Improvement Act Results for Fiscal Year 1997. GAO/AIMD-98-268. Washington, D.C.: September 30, 1998.
Financial Management: Implementation of the Federal Financial Management Improvement Act of 1996. GAO/AIMD-98-1. Washington, D.C.: October 1, 1997.

The ability to produce the financial information needed to efficiently and effectively manage the day-to-day operations of the federal government and provide accountability to taxpayers continues to be a challenge for many federal agencies. To help address this challenge, the Federal Financial Management Improvement Act of 1996 (FFMIA) requires the 24 Chief Financial Officers (CFO) Act agencies to implement and maintain financial management systems that comply substantially with (1) federal financial management systems requirements, (2) federal accounting standards, and (3) the U.S. Government Standard General Ledger (SGL). FFMIA also requires GAO to report annually on the implementation of the act. This report, primarily based on GAO and inspectors general reports and agencies' performance and accountability reports, discusses (1) the reported status of agencies' systems compliance with FFMIA and overall federal financial management improvement efforts and (2) the remaining challenges to achieving the goals of FFMIA. For fiscal year 2007, auditors reported that 13 of 24 CFO Act agencies' financial management systems were not in substantial compliance with FFMIA requirements. 
For these 13 agencies, auditors reported a number of problems, as shown below, that illustrate how agency financial management systems-- including processes, procedures, and controls--are not providing reliable, useful, and timely information to help manage agency programs more effectively. As discussed in prior FFMIA reports, GAO remains concerned that the criteria for assessing substantial compliance with FFMIA are not well defined or consistently implemented across agencies. In addition, the majority of participants at a Comptroller General's forum on improving federal financial management systems said there is little agreement on the definition of "substantial compliance." To address GAO's prior recommendation, OMB is in the process of revising its guidance, and GAO has reemphasized its concerns with the need for an appropriate definition of substantial compliance that focuses on financial management systems' capabilities beyond financial statement preparation. Agencies' efforts to implement new systems far too often result in systems that do not meet cost, schedule, and performance goals. Recent modernization efforts by some agencies have been hampered by not following disciplined processes. To help avoid implementation problems, OMB continues to make progress on its financial management line of business initiative, which promotes business-driven, common solutions for agencies to enhance federal financial management, but additional efforts are needed.
We primarily focused the performance expectations on DHS’s homeland security-related functions. We generally did not identify performance expectations related to DHS’s non-homeland security functions, although a few of the expectations we identified relate to those functions. We also did not apply a weight to the performance expectations we developed for DHS, although qualitative differences between the expectations exist. We recognize that these expectations are not time bound and that DHS will take actions to satisfy them over a sustained period of time. Therefore, our assessment of DHS’s progress relative to each performance expectation refers to the progress made by the department during its first 4 years. Our assessment of DHS’s progress relative to each performance expectation is not meant to imply that DHS should have fully achieved the performance expectation by the end of its fourth year. To identify the performance expectations, we examined responsibilities set for the department by Congress, the Administration, and department leadership. In doing so, we reviewed homeland security-related legislation, such as the Intelligence Reform and Terrorism Prevention Act of 2004, the Homeland Security Act of 2002, the Maritime Transportation Security Act of 2002, the Enhanced Border Security and Visa Entry Reform Act of 2002, and the Aviation and Transportation Security Act. We also reviewed DHS appropriations acts and accompanying conference reports for fiscal years 2004 through 2006. We did not consider legislation enacted since September 2006 in developing the performance expectations. To identify goals and measures set by the Administration, we reviewed relevant homeland security presidential directives and executive orders. For the goals and measures set by the department, we analyzed the DHS Strategic Plan, Performance Budget Overviews, Performance and Accountability Reports, and component agencies’ strategic plans. 
For management areas, we also examined effective practices identified in our prior reports. We analyzed these documents to identify common or similar responsibilities for DHS mission and management areas and synthesized the responsibilities identified in the various documents to develop performance expectations for DHS. We obtained and incorporated feedback from our subject matter experts on these performance expectations. We also provided the performance expectations to DHS for review and incorporated DHS’s feedback. Based primarily on our prior work and DHS IG work, as well as updated information provided by DHS between March and June 2007, we examined the extent to which DHS has taken actions to achieve the identified performance expectations in each area and made a determination as to whether DHS has achieved the key elements of each performance expectation based on the criteria listed below:

Generally achieved: Our work has shown that DHS has taken actions to satisfy most of the key elements of the performance expectation but may not have satisfied all of the elements.

Generally not achieved: Our work has shown that DHS has not yet taken actions to satisfy most of the key elements of the performance expectation but may have taken steps to satisfy some of the elements.

No assessment made: Neither we nor the DHS IG have completed work and/or the information DHS provided did not enable us to clearly assess DHS’s progress in achieving the performance expectation. Therefore, we have no basis for making an assessment of the extent to which DHS has taken actions to satisfy the performance expectation.

An assessment of “generally achieved” indicates that DHS has taken sufficient actions to satisfy most elements of the expectation; however, an assessment of “generally achieved” does not signify that no further action is required of DHS or that functions covered by the expectation cannot be further improved or enhanced. 
Conversely, “generally not achieved” indicates that DHS has not yet taken actions to satisfy most elements of the performance expectation. An assessment of “generally not achieved” may be warranted even where DHS has put forth substantial effort to satisfy some but not most elements of an expectation. In cases when we or the DHS IG have not completed work upon which to base an assessment of DHS actions to satisfy a performance expectation and/or the information DHS provided did not enable us to clearly determine the extent to which DHS has achieved the performance expectation, we indicated “no assessment made.” We analyzed the extent of our work, the DHS IG’s work, and DHS’s updated information and conferred with our subject matter experts to determine whether the work and information were sufficient for making a determination of generally achieved or generally not achieved. Between March and June 2007, we obtained updated information from DHS and met with program officials to discuss DHS’s efforts to implement actions to achieve the performance expectations in each mission and management area. We incorporated DHS’s additional information and documentation into the report and, to the extent that DHS provided documentation verifying its efforts, considered them in making our assessments of DHS’s progress. For each performance expectation, an analyst on our staff reviewed our relevant work, DHS IG reports, and updated information and documentation provided by DHS, including information received during meetings with DHS officials. On the basis of this review, the analyst made a determination that either DHS generally achieved the performance expectation or generally did not achieve the performance expectation, or the analyst identified that no determination could be made because neither we nor the DHS IG had completed work and DHS did not provide us with updated information and documentation. 
A second analyst then reviewed each determination to reach concurrence on the assessment for each performance expectation by reviewing the first analyst’s summary of our reports, relevant DHS IG reports, and DHS’s updated information and documentation. In cases when the first and second analyst disagreed, the two analysts reviewed and discussed the assessments and relevant documents to reach concurrence. Then, our subject matter experts reviewed the summary of our reports, relevant DHS IG reports, and DHS’s updated information and documentation to reach concurrence on the assessment for each performance expectation. To develop criteria for assessing DHS’s progress in each mission and management area, we analyzed criteria used for ratings or assessments in our prior work, in DHS IG reports, and in other reports and studies, such as those conducted by the 9/11 Commission and the Century Foundation. We also reviewed our past work in each mission and management area and obtained feedback from our subject matter experts and DHS officials on these criteria. Based on this analysis, we developed the following criteria for assessing DHS’s progress in each mission and management area:

Substantial progress: DHS has taken actions to generally achieve more than 75 percent of the identified performance expectations.

Moderate progress: DHS has taken actions to generally achieve more than 50 percent but 75 percent or less of the identified performance expectations.

Modest progress: DHS has taken actions to generally achieve more than 25 percent but 50 percent or less of the identified performance expectations.

Limited progress: DHS has taken actions to generally achieve 25 percent or less of the identified performance expectations.

After making a determination as to whether DHS has generally achieved or generally not achieved the identified performance expectations, we added up the number of performance expectations that we determined DHS has generally achieved. 
We divided this number by the total number of performance expectations for each mission and management area, excluding those performance expectations for which we could not make an assessment. Based on the resulting percentage, we identified DHS’s overall progress in each mission and management area, as (1) substantial progress, (2) moderate progress, (3) modest progress, or (4) limited progress. Our subject matter experts reviewed the overall assessments of progress we identified for DHS in each mission and management area. Our assessments of the progress made by DHS in each mission and management area are based on the performance expectations we identified. The assessments of progress do not reflect, nor are they intended to reflect, the extent to which DHS’s actions have made the nation more secure in each area. For example, in determining that DHS has made modest progress in border security, we are not stating or implying that the border is modestly more secure than it was prior to the creation of DHS. In addition, we are not assessing DHS’s progress against a baseline in each mission and management area. We also did not consider DHS component agencies’ funding levels or the extent to which funding levels have affected the department’s ability to carry out its missions. We also did not consider the extent to which competing priorities and resource demands have affected DHS’s progress in each mission and management area relative to other areas, although competing priorities and resource demands have clearly affected DHS’s progress in specific areas. 
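The tally-and-classify computation described above reduces to a simple percentage calculation. The sketch below is our own illustration of that arithmetic (the function name and lowercase category labels are ours, not GAO's); it excludes expectations with no assessment from the denominator and applies the report's four thresholds:

```python
def assess_progress(generally_achieved, total_expectations, no_assessment=0):
    """Classify progress in a mission or management area.

    Performance expectations for which no assessment could be made are
    excluded from the denominator, per the methodology described above.
    """
    assessable = total_expectations - no_assessment
    pct = 100.0 * generally_achieved / assessable
    if pct > 75:
        return "substantial"
    if pct > 50:
        return "moderate"
    if pct > 25:
        return "modest"
    return "limited"

# Border security: 5 of 12 expectations generally achieved
print(assess_progress(5, 12))                   # modest
# Immigration enforcement: 8 achieved of 16, with 4 not assessed
print(assess_progress(8, 16, no_assessment=4))  # moderate
# Aviation security: 17 of 24 achieved
print(assess_progress(17, 24))                  # moderate
```

The example calls reproduce the report's own results: 5 of 12 (about 42 percent) falls in the modest band, while 8 of 12 assessable and 17 of 24 (about 67 and 71 percent) fall in the moderate band.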
In addition, because we and the DHS IG have completed varying degrees of work (in terms of the amount and scope of reviews completed) for each mission and management area, and because different DHS components and offices provided us with different amounts and types of information, our assessments of DHS’s progress in each mission and management area reflect the information available for our review and analysis and are not necessarily equally comprehensive across all 14 mission and management areas. For example, as a result of the post-September 11, 2001, focus on aviation, we have conducted more reviews of aviation security, and our methodology identified a much larger number of related performance expectations than for surface transportation security. Further, for some performance expectations, we were unable to make an assessment of DHS’s progress because (1) we had not conducted work in that area, (2) the DHS IG’s work in the area was also limited, and (3) the supplemental information provided by DHS was insufficient to form a basis for our analysis. Most notably, we were unable to make an assessment for four performance expectations in the area of immigration enforcement. This affected our overall assessment of DHS’s progress in that area because there were fewer performance expectations to tally in determining the overall level of progress. We conducted our work for this report from September 2006 through July 2007 in accordance with generally accepted government auditing standards. In July 2002, President Bush issued the National Strategy for Homeland Security. The strategy set forth overall objectives to prevent terrorist attacks within the United States, reduce America’s vulnerability to terrorism, and minimize the damage and assist in the recovery from attacks that may occur. 
The strategy set out a plan to improve homeland security through the cooperation and partnering of federal, state, local, and private sector organizations on an array of functions. The National Strategy for Homeland Security specified a number of federal departments, as well as nonfederal organizations, that have important roles in securing the homeland. In terms of federal departments, DHS was assigned a prominent role in implementing established homeland security mission areas. In November 2002, the Homeland Security Act of 2002 was enacted into law, creating DHS. This act defined the department’s missions to include preventing terrorist attacks within the United States; reducing U.S. vulnerability to terrorism; and minimizing the damage from, and assisting in the recovery from, attacks that occur within the United States. The act also specified major responsibilities for the department, including to analyze information and protect infrastructure; develop countermeasures against chemical, biological, radiological, nuclear, and other emerging terrorist threats; secure U.S. borders and transportation systems; and organize emergency preparedness and response efforts. DHS began operations in March 2003. Its establishment represented a fusion of 22 federal agencies to coordinate and centralize the leadership of many homeland security activities under a single department. According to data provided to us by DHS, the department’s total budget authority was about $39 billion in fiscal year 2004, about $108 billion in fiscal year 2005, about $49 billion in fiscal year 2006, and about $45 billion in fiscal year 2007. The President’s fiscal year 2008 budget submission requests approximately $46 billion for DHS. Table 15 provides information on DHS’s budget authority, as reported by DHS, for each fiscal year from 2004 through 2007. Since creating and issuing its first strategic plan, the department has undergone several reorganizations. 
Most notably, in July 2005, DHS announced the outcome of its Second-Stage Review, an internal study of the department’s programs, policies, operations, and structures. As a result of this review, the department realigned several component agencies and functions. In particular, the Secretary of Homeland Security established a Directorate of Policy to coordinate departmentwide policies, regulations, and other initiatives and consolidated preparedness activities in one directorate, the Directorate for Preparedness. In addition, the Secretary established a new Office of Intelligence and Analysis and the Office of Infrastructure Protection, composed of analysts from the former Information Analysis and Infrastructure Protection directorate. The Office of Infrastructure Protection was placed in the Directorate for Preparedness. The fiscal year 2007 DHS appropriations act provided for the further reorganization of functions within the department by, in particular, realigning DHS’s emergency preparedness and response responsibilities. In addition to these reorganizations, a variety of factors have affected DHS’s efforts to implement its mission and management functions. These factors include both domestic and international events, such as Hurricanes Katrina and Rita, and major homeland security-related legislation. Figure 2 provides a timeline of key events that have affected DHS’s implementation. Based on the performance expectations we identified, DHS has made progress in implementing its mission and management functions, but various challenges have affected its efforts. Specifically, DHS has made limited progress in the areas of emergency preparedness and response; science and technology; and human capital and information technology management. We found that DHS has made modest progress in the areas of border security; immigration services; and acquisition and financial management. 
We also found that DHS has made moderate progress in the areas of immigration enforcement; aviation security; surface transportation security; critical infrastructure and key resources protection; and real property management, and that DHS has made substantial progress in the area of maritime security. The United States shares a 5,525-mile border with Canada and a 1,989-mile border with Mexico, and all goods and people traveling to the United States must be inspected at air, land, or sea ports of entry. In 2006, more than 400 million legal entries were made to the United States; a majority of all border crossings were at land border ports of entry. Within DHS, CBP is the lead agency responsible for implementing the department’s border security mission. Specifically, CBP’s two priority missions are (1) detecting and preventing terrorists and terrorist weapons from entering the United States and (2) facilitating the orderly and efficient flow of legitimate trade and travel. CBP’s supporting missions include interdicting illegal drugs and other contraband; apprehending individuals who are attempting to enter the United States illegally; inspecting inbound and outbound people, vehicles, and cargo; enforcing laws of the United States at the border; protecting U.S. agricultural and economic interests from harmful pests and diseases; regulating and facilitating international trade; collecting import duties; and enforcing U.S. trade laws. Within CBP, the United States Border Patrol is responsible for border security between designated official ports of entry, and CBP’s Office of Field Operations enforces trade, immigration, and agricultural laws and regulations by securing the flow of people and goods into and out of the country, while facilitating legitimate travel and trade at U.S. ports of entry. 
As shown in table 16, we identified 12 performance expectations for DHS in the area of border security and found that overall DHS has made modest progress in meeting those expectations. Specifically, we found that DHS has generally achieved 5 of its performance expectations and has generally not achieved 7 of its performance expectations. Table 17 provides more detailed information on the progress that DHS has made in taking actions to achieve each performance expectation in the area of border security and our assessment of whether DHS has taken steps to satisfy most of the key elements of the performance expectation (generally achieved) or has not taken steps to satisfy most of the performance expectation’s key elements (generally not achieved). DHS is responsible for enforcing U.S. immigration laws. Immigration enforcement includes apprehending, detaining, and removing criminal and illegal aliens; disrupting and dismantling organized smuggling of humans and contraband as well as human trafficking; investigating and prosecuting those who engage in benefit and document fraud; blocking and removing employers’ access to undocumented workers; and enforcing compliance with programs to monitor visitors. Within DHS, ICE is primarily responsible for immigration enforcement efforts. In particular, ICE’s Office of Investigations is responsible for enforcing immigration and customs laws and its Office of Detention and Removal Operations is responsible for processing, detaining, and removing aliens subject to removal from the United States. As shown in table 18, we identified 16 performance expectations for DHS in the area of immigration enforcement, and we found that overall DHS has made moderate progress in meeting those expectations. Specifically, we found that DHS has generally achieved 8 of the performance expectations and has generally not achieved 4 other performance expectations. For 4 performance expectations, we could not make an assessment. 
In meeting its performance expectations, ICE faced budget constraints that significantly affected its overall operations during fiscal year 2004. For example, ICE was faced with a hiring freeze in fiscal year 2004 that affected its ability to recruit, hire, and train personnel. Over the past 2 years, ICE has reported taking actions to strengthen its immigration enforcement functions and has, for example, hired and trained additional personnel to help fulfill the agency’s mission. Table 19 provides more detailed information on the progress that DHS has made in taking actions to achieve each performance expectation in the area of immigration enforcement and our assessment of whether DHS has taken steps to satisfy most of the key elements of the performance expectation (generally achieved) or has not taken steps to satisfy most of the performance expectation’s key elements (generally not achieved). USCIS is the agency within DHS that is responsible for processing millions of immigration benefit applications received each year, determining whether applicants are eligible to receive immigration benefits, and detecting suspicious information and evidence to refer for fraud investigation and possible sanctioning by other DHS components or external agencies. USCIS processes applications for about 50 types of immigration benefits with a goal of ensuring that processing of benefits applications takes place within a 6-month time frame. USCIS has introduced new initiatives to modernize business practices and upgrade its information technology infrastructure, transforming its current paper-based data systems into a digital processing resource intended to enhance customer service, prevent future backlogs of immigration benefit applications, and improve efficiency through expanded electronic filing. 
As shown in table 20, we identified 14 performance expectations for DHS in the area of immigration services and found that overall DHS has made modest progress in meeting those expectations. Specifically, we found that DHS has generally achieved 5 performance expectations and has generally not achieved 9 others. Table 21 provides more detailed information on the progress that DHS has made in taking actions to achieve each performance expectation in the area of immigration services and our assessment of whether DHS has taken steps to satisfy most of the key elements of the performance expectation (generally achieved) or has not taken steps to satisfy most of the performance expectation’s key elements (generally not achieved). DHS has implemented a variety of programs to help secure the aviation sector. Within the department, TSA is the primary agency with responsibility for aviation security efforts. TSA was established in 2001 with the mission to protect the transportation network while also ensuring the free movement of people and commerce. Since its inception, TSA has focused much of its efforts on aviation security and has developed and implemented a variety of programs and procedures to secure commercial aviation. For example, TSA has undertaken efforts to strengthen airport security; provide and train a screening workforce; prescreen passengers against terrorist watch lists; and screen passengers, baggage, and cargo. TSA has implemented these efforts in part to meet numerous mandates for strengthening aviation security placed on the agency following the September 11, 2001, terrorist attacks. These mandates set priorities for the agency and guided TSA’s initial efforts to enhance aviation security. In addition to TSA, CBP and DHS’s Science and Technology Directorate play roles in securing commercial aviation. 
In particular, CBP has responsibility for conducting passenger prescreening—or the matching of passenger information against terrorist watch lists—for international flights operating to or from the United States, as well as inspecting inbound air cargo upon its arrival in the United States. The Science and Technology Directorate is responsible for the research and development of aviation security technologies. As shown in table 22, we identified 24 performance expectations for DHS in the area of aviation security, and we found that overall DHS has made moderate progress in meeting those expectations. Specifically, we found that DHS has generally achieved 17 performance expectations and has generally not achieved 7 performance expectations. Table 23 provides more detailed information on the progress that DHS has made in taking actions to achieve each performance expectation in the area of aviation security and our assessment of whether DHS has taken steps to satisfy most of the key elements of the performance expectation (generally achieved) or has not taken steps to satisfy most of the performance expectation’s key elements (generally not achieved). DHS has undertaken various initiatives to secure surface transportation modes, and within the department, TSA is primarily responsible for surface transportation security efforts. Since its creation following the events of September 11, 2001, TSA has focused much of its efforts and resources on meeting legislative mandates to strengthen commercial aviation security. However, TSA has more recently placed additional focus on securing surface modes of transportation, which includes establishing security standards and conducting assessments and inspections of surface transportation modes such as passenger and freight rail; mass transit; highways, including commercial vehicles; and pipelines. 
Although TSA has primary responsibility within the department for surface transportation security, the responsibility for securing rail and other transportation modes is shared among federal, state, and local governments and the private sector. For example, with regard to passenger rail security, in addition to TSA, DHS’s Office of Grant Programs provides grant funds to rail operators and conducts risk assessments for passenger rail agencies. Within the Department of Transportation, the Federal Transit Administration and Federal Railroad Administration have responsibilities for passenger rail safety and security. In addition, public and private passenger rail operators are also responsible for securing their rail systems. As shown in table 24, we identified five performance expectations for DHS in the area of surface transportation security, and we found that overall DHS has made moderate progress in meeting those performance expectations. Specifically, we found that DHS has generally achieved three of these performance expectations and has generally not achieved two others. Table 25 provides more detailed information on the progress that DHS has made in taking actions to achieve each performance expectation in the area of surface transportation security and our assessment of whether DHS has taken steps to satisfy most of the key elements of the performance expectation (generally achieved) or has not taken steps to satisfy most of the performance expectation’s key elements (generally not achieved). DHS has undertaken various programs to secure the maritime sector. In general, these maritime security programs fall under one of three areas: port and vessel security, maritime intelligence, and maritime supply chain security. Within DHS, various component agencies are responsible for maritime security efforts, including the Coast Guard, CBP, TSA, and the Domestic Nuclear Detection Office. 
The Coast Guard is responsible for port facility inspections and has lead responsibility in coordinating maritime information sharing efforts. CBP is responsible for addressing the threat posed by terrorist smuggling of weapons in oceangoing containers. TSA is responsible for the implementation of the transportation worker identification credential program. The Domestic Nuclear Detection Office is responsible for acquiring and supporting the deployment of radiation detection equipment, including portal monitors, within the United States. As shown in table 26, we identified 23 performance expectations for DHS in the area of maritime security, and we found that overall DHS has made substantial progress in meeting those expectations. Specifically, we found that DHS has generally achieved 17 performance expectations and has generally not achieved 4 others. For 2 performance expectations, we did not make an assessment. Table 27 provides more detailed information on the progress that DHS has made in taking actions to achieve each performance expectation in the area of maritime security and our assessment of whether DHS has taken steps to satisfy most of the key elements of the performance expectation (generally achieved) or has not taken steps to satisfy most of the performance expectation’s key elements (generally not achieved). Several federal legislative and executive provisions support preparation for and response to emergency situations. The Robert T. Stafford Disaster Relief and Emergency Assistance Act (the Stafford Act) primarily establishes the programs and processes for the federal government to provide major disaster and emergency assistance to state, local, and tribal governments; individuals; and qualified private nonprofit organizations. FEMA, within DHS, has responsibility for administering the provisions of the Stafford Act. 
FEMA’s emergency preparedness and response efforts include programs that prepare to minimize the damage from, and recover from, terrorist attacks and disasters; help to plan, equip, train, and practice needed skills of first responders; and consolidate federal response plans and activities to build a national, coordinated system for incident management. DHS’s emergency preparedness and response efforts have been affected by DHS reorganizations and, in the wake of the 2005 Gulf Coast hurricanes, reassessments of some initiatives, such as the National Response Plan and its Catastrophic Incident Supplement. DHS is undergoing its second reorganization of its emergency preparedness and response programs in about 18 months. The first reorganization was initiated by the Secretary of Homeland Security in the summer of 2005 and created separate organizations within DHS responsible for preparedness and for response and recovery. The second reorganization was required by the fiscal year 2007 DHS appropriations act and largely took effect on April 1, 2007. As shown in table 28, we identified 24 performance expectations for DHS in the area of emergency preparedness and response and found that overall DHS has made limited progress in meeting those performance expectations. In particular, we found that DHS has generally achieved 5 performance expectations and has generally not achieved 18 others. For 1 performance expectation, we did not make an assessment. Table 29 provides more detailed information on the progress that DHS has made in taking actions to achieve each performance expectation in the area of emergency preparedness and response and our assessment of whether DHS has taken steps to satisfy most of the key elements of the performance expectation (generally achieved) or has not taken steps to satisfy most of the performance expectation’s key elements (generally not achieved). 
Critical infrastructure consists of systems and assets, whether physical or virtual, so vital to the United States that their incapacity or destruction would have a debilitating impact on national security, national economic security, and national public health or safety, or any combination of these matters. Key resources are publicly or privately controlled resources essential to minimal operations of the economy or government, including individual targets whose destruction would not endanger vital systems but could create a local disaster or profoundly damage the nation’s morale or confidence. While the private sector owns approximately 85 percent of the nation’s critical infrastructure and key resources, DHS has wide-ranging responsibilities for leading and coordinating the overall national critical infrastructure and key resources protection effort. The National Infrastructure Protection Plan identifies 17 critical infrastructure and key resources sectors: agriculture and food; banking and finance; chemical; commercial facilities; commercial nuclear reactors, materials, and waste; dams; defense industrial base; drinking water and water treatment systems; emergency services; energy; government facilities; information technology; national monuments and icons; postal and shipping; public health and healthcare; telecommunications; and transportation systems. DHS has overall responsibility for coordinating critical infrastructure and key resources protection efforts. Within DHS, the Office of Infrastructure Protection has been designated as the Sector-Specific Agency responsible for the chemical; commercial facilities; dams; emergency services; and commercial nuclear reactors, materials, and waste sectors. TSA has been designated as the Sector-Specific Agency for postal and shipping, and TSA and the Coast Guard have been designated the Sector-Specific Agencies for transportation systems. 
The Federal Protective Service within ICE has been designated as the Sector-Specific Agency for government facilities. The Office of Cyber Security and Telecommunications has been designated the Sector-Specific Agency for the information technology and telecommunications sectors. As shown in table 30, we identified seven performance expectations for DHS in the area of critical infrastructure and key resources protection, and we found that overall DHS has made moderate progress in meeting those performance expectations. Specifically, we found that DHS has generally achieved four performance expectations and has generally not achieved three others. Table 31 provides more detailed information on the progress that DHS has made in taking actions to achieve each performance expectation in the area of critical infrastructure and key resources protection and our assessment of whether DHS has taken steps to satisfy most of the key elements of the performance expectation (generally achieved) or has not taken steps to satisfy most of the performance expectation’s key elements (generally not achieved). DHS’s Science and Technology Directorate was established to coordinate the federal government’s civilian efforts to identify and develop countermeasures to chemical, biological, radiological, nuclear, and other emerging terrorist threats to our nation. To coordinate the national effort to protect the United States from nuclear and radiological threats, in April 2005, the President directed the establishment of the Domestic Nuclear Detection Office within DHS. The new office’s mission covers a broad spectrum of responsibilities and activities, but is focused primarily on providing a single accountable organization to develop a layered defense system. This system is intended to integrate the federal government’s nuclear detection, notification, and response systems. 
In addition, under the directive, the Domestic Nuclear Detection Office is to acquire, develop, and support the deployment of detection equipment in the United States, as well as to coordinate the nation’s nuclear detection research and development efforts. As shown in table 32, we identified six performance expectations for DHS in the area of science and technology, and we found that overall DHS has made limited progress in meeting those performance expectations. In particular, we found that DHS has generally achieved one performance expectation and has generally not achieved five other performance expectations. Table 33 provides more detailed information on the progress that DHS has made in taking actions to achieve each performance expectation in the area of science and technology and our assessment of whether DHS has taken steps to satisfy most of the key elements of the performance expectation (generally achieved) or has not taken steps to satisfy most of the performance expectation’s key elements (generally not achieved). Federal agencies use a variety of approaches and tools, including contracts, to acquire goods and services needed to fulfill or support the agencies’ missions. DHS has some of the most extensive acquisition needs within the U.S. government. In fiscal year 2004, for example, the department obligated $9.8 billion to acquire a wide range of goods and services—such as information systems, new technologies, weapons, aircraft, ships, and professional services. In fiscal year 2006, the department reported that it obligated $15.6 billion to acquire a wide range of goods and services. The DHS acquisitions portfolio is broad and complex. 
For example, the department has purchased increasingly sophisticated screening equipment for air passenger security; acquired technologies to secure the nation’s borders; purchased trailers to meet the housing needs of Hurricane Katrina victims; and is upgrading the Coast Guard’s offshore fleet of surface and air assets. DHS has been working to integrate the many acquisition processes and systems that the disparate agencies and organizations brought with them when they merged into DHS in 2003 while still addressing ongoing mission requirements and emergency situations, such as responding to Hurricane Katrina. As shown in table 34, we identified three performance expectations for DHS in the area of acquisition management and found that overall DHS has made modest progress in meeting those expectations. Specifically, we found that DHS has generally achieved one and not achieved two of the three performance expectations. Table 35 provides more detailed information on the progress that DHS has made in taking actions to achieve each performance expectation in the area of acquisition management and our assessment of whether DHS has taken steps to satisfy most of the key elements of the performance expectation (generally achieved) or has not taken steps to satisfy most of the performance expectation’s key elements (generally not achieved). Effective financial management is a key element of financial accountability. With its establishment by the Homeland Security Act of 2002, DHS inherited a myriad of redundant financial management systems from 22 diverse agencies, along with about 100 resource management systems and 30 reportable conditions identified in prior component financial audits. Additionally, most of the 22 components that transferred to DHS had not been subjected to significant financial statement audit scrutiny prior to their transfer, so the extent to which additional significant internal control deficiencies existed was unknown. 
DHS’s Office of the Chief Financial Officer is responsible for functions, such as budget, finance and accounting, strategic planning and evaluation, and financial systems for the department. The Office of the Chief Financial Officer is also charged with ongoing integration of these functions within the department. For fiscal year 2006, DHS was again unable to obtain an opinion on its financial statements, and numerous material internal control weaknesses continued to be reported. DHS’s auditor had issued a disclaimer of opinion on DHS’s fiscal years 2003, 2004, and 2005 financial statements. As shown in table 36, we identified seven performance expectations for DHS in the area of financial management and found that overall DHS has made modest progress meeting those performance expectations. Specifically, we found that DHS has generally achieved two performance expectations and has generally not achieved five others. Table 37 provides more detailed information on the progress that DHS has made in taking actions to achieve each performance expectation in the area of financial management and our assessment of whether DHS has taken steps to satisfy most of the key elements of the performance expectation (generally achieved) or has not taken steps to satisfy most of the performance expectation’s key elements (generally not achieved). Key human capital management areas for all agencies, including DHS, are pay, performance management, classification, labor relations, adverse actions, employee appeals, and diversity management. Congress provided DHS with significant flexibility to design a modern human capital management system. DHS and the Office of Personnel Management jointly released the final regulations on DHS’s new human capital system in February 2005. The final regulations established a new human capital system for DHS that was intended to ensure its ability to attract, retain, and reward a workforce that is able to meet its critical mission. 
Further, the human capital system provided for greater flexibility and accountability in the way employees are to be paid, developed, evaluated, afforded due process, and represented by labor organizations while reflecting the principles of merit and fairness embodied in the statutory merit systems principles. Although DHS intended to implement the new personnel system in the summer of 2005, court decisions enjoined the department from implementing certain labor management portions of it. Since that time, DHS has taken actions to implement its human capital system and issued its Fiscal Year 2007 and 2008 Human Capital Operational Plan in April 2007. As shown in table 38, we identified eight performance expectations for DHS in the area of human capital management and found that overall DHS has made limited progress in meeting those performance expectations. Specifically, we found that DHS has generally achieved two performance expectations and has generally not achieved six other expectations. Table 39 provides more detailed information on the progress that DHS has made in taking actions to achieve each performance expectation in the area of human capital management and our assessment of whether DHS has taken steps to satisfy most of the key elements of the performance expectation (generally achieved) or has not taken steps to satisfy most of the performance expectation’s key elements (generally not achieved). DHS has undertaken efforts to establish and institutionalize the range of information technology management controls and capabilities that our research and past work have shown are fundamental to any organization’s ability to use technology effectively to transform itself and accomplish mission goals. 
Among these information technology management controls and capabilities are centralizing leadership for extending these disciplines throughout the organization with an empowered Chief Information Officer; having sufficient people with the right knowledge, skills, and abilities to execute each of these areas now and in the future; developing and using an enterprise architecture, or corporate blueprint, as an authoritative frame of reference to guide and constrain system investments; defining and following a corporate process for informed decision making by senior leadership about competing information technology investment options; applying system and software development and acquisition discipline and rigor when defining, designing, developing, testing, deploying, and maintaining systems; and establishing a comprehensive, departmentwide information security program to protect information and systems. Despite its efforts over the last several years, the department has significantly more to do before each of these management controls and capabilities is fully in place and is integral to how each system investment is managed. As shown in table 40, we identified 13 performance expectations for DHS in the area of information technology management and found that overall DHS has made limited progress in meeting those expectations. In particular, we found that DHS has generally achieved 2 performance expectations and has generally not achieved 8 others. For 3 other performance expectations, we did not make an assessment. Table 41 provides more detailed information on the progress that DHS has made in taking actions to achieve each performance expectation in the area of information technology management and our assessment of whether DHS has taken steps to satisfy most of the key elements of the performance expectation (generally achieved) or has not taken steps to satisfy most of the performance expectation’s key elements (generally not achieved). 
DHS has taken actions to implement its real property management responsibilities. Key elements of real property management, as specified in Executive Order 13327, “Federal Real Property Asset Management,” include establishment of a Senior Real Property Officer, development of an asset inventory, and development and implementation of an asset management plan and performance measures. In June 2006, the Office of Management and Budget upgraded DHS’s Real Property Asset Management Score from red to yellow after DHS developed an approved Asset Management Plan, developed a generally complete real property data inventory, submitted this inventory for inclusion in the governmentwide real property inventory database, and established performance measures consistent with Federal Real Property Council standards. DHS also designated a Senior Real Property Officer as directed by Executive Order 13327. As shown in table 42, we identified nine performance expectations for DHS in the area of real property management and found that overall DHS has made moderate progress in meeting those expectations. Specifically, we found that DHS has generally achieved six of the expectations and has generally not achieved three others. Our assessments for real property management are based on a report on DHS’s real property management released in June 2007. Table 43 provides more detailed information on the progress that DHS has made in taking actions to achieve each performance expectation in the area of real property management and our assessment of whether DHS has taken steps to satisfy most of the key elements of the performance expectation (generally achieved) or has not taken steps to satisfy most of the performance expectation’s key elements (generally not achieved). Our work has identified homeland security challenges that cut across DHS’s mission and core management functions. These issues have impeded the department’s progress since its inception and will continue as DHS moves forward. 
While it is important that DHS continue to work to strengthen each of its mission and core management functions, it is equally important that these key issues be addressed from a comprehensive, departmentwide perspective to help ensure that the department has the structure and processes in place to effectively address the threats and vulnerabilities that face the nation. These issues include: (1) transforming and integrating DHS’s management functions; (2) establishing baseline performance goals and measures and engaging in effective strategic planning efforts; (3) applying and improving a risk management approach for implementing missions and making resource allocation decisions; (4) sharing information with key stakeholders; and (5) coordinating and partnering with federal, state, local, and private sector agencies. We have made numerous recommendations to DHS to strengthen these efforts, and the department has made progress in implementing some of these recommendations. DHS has faced a variety of difficulties in its efforts to transform into a fully functioning department, and we have designated DHS implementation and transformation as high-risk. We first designated DHS’s implementation and transformation as high-risk in 2003 because 22 disparate agencies had to transform into one department. Many of these individual agencies were facing their own management and mission challenges. But most importantly, the failure to effectively address DHS’s management challenges and program risks could have serious consequences for our homeland security as well as our economy. We kept DHS implementation and transformation on the high-risk list in 2005 because serious transformation challenges continued to hinder DHS’s success. Since then, our and the DHS IG’s reports have documented DHS’s progress and remaining challenges in transforming into an effective, integrated organization. 
For example, in the management area, DHS has developed a strategic plan, is working to integrate some management functions, and has continued to form necessary partnerships to achieve mission success. Despite these efforts, we reported that DHS implementation and transformation remains on the 2007 high-risk list because numerous management challenges remain, such as in the areas of acquisition, financial, human capital, and information technology management. We stated that the array of management and programmatic challenges continues to limit DHS’s ability to carry out its roles under the National Strategy for Homeland Security in an effective risk-based way. We have recommended that agencies on the high-risk list produce a corrective action plan that defines the root causes of identified problems, identifies effective solutions to those problems, and provides for substantially completing corrective measures in the near term. Such a plan should include performance metrics and milestones, as well as mechanisms to monitor progress. In the spring of 2006, DHS provided us with a draft corrective action plan that did not contain key elements we have identified as necessary for an effective corrective action plan, including specific actions to address identified objectives. As of May 2007, DHS had not submitted a corrective action plan to the Office of Management and Budget. According to the Office of Management and Budget, this is one of the few high-risk areas that has not produced a final corrective action plan. Our prior work on mergers and acquisitions, undertaken before the creation of DHS, found that successful transformations of large organizations, even those faced with less strenuous reorganizations than DHS, can take at least 5 to 7 years to achieve. 
We reported that the creation of DHS is an enormous management challenge and that DHS faces a formidable task in its transformation efforts as it works to integrate over 170,000 federal employees from 22 component agencies. Each component agency brought differing missions, cultures, systems, and procedures that the new department had to efficiently and effectively integrate into a single, functioning unit. At the same time it weathers these growing pains, DHS must still fulfill its various homeland security and other missions. To strengthen its transformation efforts, we recommended, and DHS agreed, that it should develop an overarching management integration strategy, and provide the then DHS Business Transformation Office with the authority and responsibility to serve as a dedicated integration team and also to help develop and implement the strategy. We reported that although DHS has issued guidance and plans to assist management integration on a function by function basis, it has not developed a plan that clearly identifies the critical links that should occur across these functions, the necessary timing to make these links occur, how these interrelationships will occur, and who will drive and manage them. In addition, although DHS had established a Business Transformation Office that reported to the Under Secretary for Management to help monitor and look for interdependencies among the individual functional management integration efforts, that office was not responsible for leading and managing the coordination and integration itself. We understand that the Business Transformation Office has been recently eliminated. We have suggested that Congress should continue to monitor whether it needs to provide additional leadership authorities to the DHS Under Secretary for Management, or create a Chief Operating Officer/Chief Management Officer position which could help elevate, integrate, and institutionalize DHS’s management initiatives. 
The Implementing Recommendations of the 9/11 Commission Act of 2007, enacted in August 2007, designates the Under Secretary for Management as the Chief Management Officer and principal advisor on management-related matters to the Secretary. Under the Act, the Under Secretary is responsible for developing a transition and succession plan for the incoming Secretary and Under Secretary to guide the transition of management functions to a new administration. The Act further authorizes the incumbent Under Secretary as of November 8, 2008 (after the next presidential election), to remain in the position until a successor is confirmed to ensure continuity in the management functions of DHS. In addition, transparency plays an important role in helping to ensure efficient and effective transformation efforts. With regard to DHS, we have reported that DHS has not made its management or operational decisions transparent enough so that Congress can be sure it is effectively, efficiently, and economically using the billions of dollars in funding it receives annually. More specifically, in April 2007, we testified that we have encountered access issues in numerous engagements, and the lengths of delay have been both varied and significant and have affected our ability to do our work in a timely manner. We reported that we have experienced delays with DHS components that include CBP, ICE, FEMA, and TSA on different types of work such as information sharing, immigration, emergency preparedness in primary and secondary schools, and accounting systems. The Secretary of DHS and the Under Secretary for Management have stated their desire to work with us to resolve access issues and to provide greater transparency. It will be important for DHS to become more transparent and minimize recurring delays in providing access to information on its programs and operations so that Congress, GAO, and others can independently assess its efforts. 
DHS has not always implemented effective strategic planning efforts and has not yet fully developed performance measures or put into place structures to help ensure that the agency is managing for results. We have identified strategic planning as one of the critical success factors for new organizations. This is particularly true for DHS, given the breadth of its responsibility and need to clearly identify how stakeholders’ responsibilities and activities align to address homeland security efforts. The Government Performance and Results Act (GPRA) of 1993 requires that federal agencies consult with the Congress and key stakeholders to assess their missions, long-term goals, strategies, and resources needed to achieve their goals. It also requires that the agency include six key components in its strategic plan: (1) a mission statement; (2) long-term goals and objectives; (3) approaches (or strategies) to achieve the goals and objectives; (4) a description of the relationship between annual and long-term performance goals; (5) key factors that could significantly affect achievement of the strategic goals; and (6) a description of how program evaluations were used to establish or revise strategic goals. Other best practices in strategic planning and results management that we have identified include involving stakeholders in the strategic planning process, continuously monitoring internal and external environments to anticipate future challenges and avoid potential crises, holding managers accountable for the results of their programs, and aligning program performance measures and individual performance expectations at each organizational level with agencywide goals and objectives. DHS issued a departmentwide strategic plan in 2004 that addressed five of six GPRA-required elements. 
The plan included a mission statement, long-term goals, strategies to achieve the goals, key external factors, and program evaluations, but did not describe the relationship between annual and long-term goals. The linkage between annual and long-term goals is important for determining whether an agency has a clear sense of how it will assess progress toward achieving the intended results of its long-term goals. While DHS’s Performance Budget Overview and other documents include a description of the relationship between annual and long-term goals, not including this in the strategic plan made it more difficult for DHS officials and stakeholders to identify how their roles and responsibilities contributed to DHS’s mission. In addition, although DHS’s planning documents described programs requiring stakeholder coordination to effectively implement them, stakeholder involvement in the planning process itself was limited. Given the many other organizations at all levels of government and in the private sector whose involvement is key to meeting homeland security goals, earlier and more comprehensive stakeholder involvement in the planning process is essential to the success of DHS’s planning efforts. Such involvement is important to ensure that stakeholders help identify and agree on how their daily operations and activities contribute to fulfilling DHS’s mission. To make DHS a more results-oriented agency, we recommended that DHS’s strategic planning process include direct consultation with external stakeholders, that its next strategic plan include a description of the relationship between annual performance goals and long-term goals, and that the next strategic plan adopt additional good strategic planning practices, such as ensuring that the strategic plan includes a timeline for achieving long-term goals and a description of the specific budgetary, human capital, and other resources needed to achieve those goals. 
According to DHS officials, the department is planning to issue an updated strategic plan, but they did not provide a target time frame for when the plan would be issued. We have also reported on the importance of the development of outcome-based performance goals and measures as part of strategic planning and results management efforts. Performance goals and measures are intended to provide Congress and agency management with information to systematically assess a program’s strengths, weaknesses, and performance. A performance goal is the target level of performance expressed as a tangible, measurable objective against which actual achievement will be compared. A performance measure can be defined as an indicator, statistic, or metric used to gauge program performance. Outcome-oriented measures show results or outcomes related to an initiative or program in terms of its effectiveness, efficiency, or impact. A number of DHS’s programs lack outcome goals and measures, which may hinder the department’s ability to effectively assess the results of program efforts or fully assess whether the department is using resources effectively and efficiently, especially given various agency priorities for resources. In particular, we have reported that some of DHS’s components have not developed adequate outcome-based performance measures or comprehensive plans to monitor, assess, and independently evaluate the effectiveness of their plans and performance. For example, in August 2005 we reported that ICE lacked outcome goals and measures for its worksite enforcement program and recommended that the agency set specific time frames for developing these goals and measures. In March 2006, we reported that USCIS had not yet established performance goals and measures to assess its benefit fraud activities, and we recommended that it do so. 
Further, we have also reported that many of DHS’s border-related performance goals and measures are not fully defined or adequately aligned with one another, and some performance targets are not realistic. Yet, we have also recognized that DHS faces some inherent difficulties in developing performance goals and measures to address its unique mission and programs, such as in developing measures for the effectiveness of its efforts to prevent and deter terrorist attacks. DHS has not fully adopted and applied a risk management approach in implementing its mission and core management functions. Risk management has been widely supported by the President and Congress as a management approach for homeland security, and the Secretary of Homeland Security has made it the centerpiece of departmental policy. We have previously reported that defining an acceptable, achievable (within constrained budgets) level of risk is an imperative to address current and future threats. Many have pointed out, as did the Gilmore and 9/11 Commissions, that the nation will never be completely safe and total security is an unachievable goal. Within its sphere of responsibility, DHS cannot afford to protect everything against all possible threats. As a result, DHS must make choices about how to allocate its scarce resources to most effectively manage risk. A risk management approach can help DHS make decisions systematically and is consistent with the National Strategy for Homeland Security and DHS’s strategic plan, which have called for the use of risk-based decisions to prioritize DHS’s resource investments regarding homeland security related programs. Several DHS component agencies have taken steps toward integrating risk-based decision making into their processes. For example, the Coast Guard has taken actions to mitigate vulnerabilities and enhance maritime security. 
Security plans for seaports, facilities, and vessels have been developed based on assessments that identify their vulnerabilities. In addition, the Coast Guard used a Maritime Security Risk Assessment Model to prioritize risk according to a combination of possible threat, consequence, and vulnerability scenarios. Under this approach, seaport infrastructure that was determined to be both a critical asset and a likely and vulnerable target would be a high priority for funding security enhancements. By comparison, infrastructure that was vulnerable to attack but not as critical or infrastructure that was very critical but already well protected would be lower in priority. In the transportation area, TSA has incorporated risk-based decision-making into a number of its programs and processes. For example, TSA has started to incorporate risk management principles into securing air cargo, but has not conducted assessments of air cargo vulnerabilities or critical assets (cargo facilities and aircraft)—two crucial elements of a risk-based management approach without which TSA may not be able to appropriately focus its resources on the most critical security needs. TSA also completed an Air Cargo Strategic Plan in November 2003 that outlined a threat-based risk management approach to securing the nation’s air cargo transportation system. However, TSA’s existing tools for assessing vulnerability have not been adapted for use in conducting air cargo assessments, nor has TSA established a schedule for when these tools would be ready for use. Although some DHS components have taken steps to apply risk-based decision making in implementing their mission functions, we also found that other components have not always utilized such an approach. DHS has not performed comprehensive risk assessments in transportation, critical infrastructure, and the immigration and customs systems to guide resource allocation decisions. 
For example, DHS has not fully utilized a risk-based strategy to allocate resources among transportation sectors. Although TSA has developed tools and processes to assess risk within and across transportation modes, it has not fully implemented these efforts to drive resource allocation decisions. We also recently identified concerns about DHS’s use of risk management in distributing grants to states and localities. For fiscal years 2006 and 2007, DHS has used risk assessments to identify urban areas that faced the greatest potential risk, and were therefore eligible to apply for the Urban Areas Security Initiative grant, and based the amount of awards to all eligible areas primarily on the outcomes of the risk assessment and a new effectiveness assessment. Starting in fiscal year 2006, DHS made several changes to the grant allocation process, including modifying its risk assessment methodology, and introducing an assessment of the anticipated effectiveness of investments. DHS combined the outcomes of these two assessments to make funding decisions. However, we found that DHS had limited knowledge of how changes to its risk assessment methods, such as adding asset types and using additional or different data sources, affect its risk estimates. As a result, DHS had a limited understanding of the effects of the judgments made in estimating risk that influenced eligibility and allocation outcomes for fiscal year 2006. DHS leadership could make more informed policy decisions if it were provided with alternative risk estimates and funding allocations resulting from analyses of varying data, judgments, and assumptions. We also reported that DHS has not applied a risk management approach in deciding whether and how to invest in specific capabilities for a catastrophic threat, and we recommended that it do so. 
In April 2007, DHS established the new Office of Risk Management and Analysis to serve as the DHS Executive Agent for national-level risk management analysis standards and metrics; develop a standardized approach to risk; develop an approach to risk management to help DHS leverage and integrate risk expertise across components and external stakeholders; assess DHS risk performance to ensure programs are measurably reducing risk; and communicate DHS risk management in a manner that reinforces the risk-based approach. According to DHS, the office’s activities are intended to develop a risk architecture, with standardized methodologies for risk analysis and management, to assist in the prioritization of risk reduction programs and to ensure that DHS component risk programs are synchronized, integrated, and use a common approach. Although this new office should help to coordinate risk management planning and activities across the department, it is too early to tell what effect this office will have on strengthening departmentwide risk management activities. The federal government, including DHS, has made progress in developing a framework to support a more unified effort to secure the homeland, including information sharing. However, opportunities exist to enhance the effectiveness of information sharing among federal agencies and with state and local governments and private sector entities. As we reported in August 2003, efforts to improve intelligence and information sharing needed to be strengthened. In 2005, we designated information sharing for homeland security as high-risk. We recently reported that the nation still lacked an implemented set of governmentwide policies and processes for sharing terrorism information, but has issued a strategy on how it will put in place the overall framework, policies, and architecture for sharing with all critical partners—actions that we and others have recommended. 
The Intelligence Reform and Terrorism Prevention Act of 2004 required that the President create an “information sharing environment” to facilitate the sharing of terrorism information, yet this environment remains in the planning stage. An implementation plan for the environment, which was released on November 16, 2006, defines key tasks and milestones for developing the information sharing environment, including identifying barriers and ways to resolve them, as we recommended. We noted that completing the information sharing environment is a complex task that will take multiple years and long-term administration and congressional support and oversight, and will pose cultural, operational, and technical challenges that will require a collaborated response. DHS has taken some steps to implement its information sharing responsibilities. For example, DHS implemented a system to share homeland security information. States and localities are also creating their own information “fusion” centers, some with DHS support. DHS has further implemented a program to protect sensitive information the private sector provides it on security at critical infrastructure assets, such as nuclear and chemical facilities. However, the DHS IG found that users of the information system were confused by it and as a result did not regularly use it, and that DHS had not secured the private sector’s trust that the agency could adequately protect and effectively use the information that sector provided. These challenges will require longer-term actions to resolve. Our past work in the information sharing and warning areas has highlighted a number of other challenges that need to be addressed. These challenges include developing productive information sharing relationships among the federal government, state and local governments, and the private sector; and ensuring that the private sector receives better information on potential threats. 
In addition to providing federal leadership with respect to homeland security, DHS also plays a large role in coordinating the activities of other federal, state, local, private sector, and international stakeholders, but has faced challenges in this regard. To secure the nation, DHS must form effective and sustained partnerships among its legacy component agencies, as well as with a range of other entities, including other federal agencies, state and local governments, the private and nonprofit sectors, and international partners. We have reported that successful partnering and coordination involves collaborating and consulting with stakeholders to develop and agree on goals, strategies, and roles to achieve a common purpose; identify resource needs; establish a means to operate across agency boundaries, such as compatible procedures, measures, data, and systems; and agree upon and document mechanisms to monitor, evaluate, and report to the public on the results of joint efforts. We have found that the appropriate homeland security roles and responsibilities within and between the levels of government and with the private sector are evolving and need to be clarified. The implementation of the National Strategy for Homeland Security further underscores the importance for DHS of partnering and coordination. For example, 33 of the strategy’s 43 initiatives are required to be implemented by 3 or more federal agencies, and the National Strategy identifies the private sector as a key homeland security partner. If these entities do not effectively coordinate their implementation activities, they may waste resources by creating ineffective and incompatible pieces of a larger security program. 
For example, because the private sector owns or operates 85 percent of the nation’s critical infrastructure, DHS must partner with individual companies and sector organizations in order to protect vital national infrastructure, such as the nation’s water supply, transportation systems and chemical facilities. In October 2006 we reported that all 17 critical infrastructure sectors established their respective government councils, and nearly all sectors initiated their voluntary private sector councils in response to the National Infrastructure Protection Plan. The councils, among other things, are to identify their most critical assets, assess the risks they face, and identify protective measures, in sector-specific plans that comply with DHS’s National Infrastructure Protection Plan. DHS has taken other important actions in developing partnerships and mechanisms for coordinating with homeland security partners. For example, DHS formed the National Cyber Response Coordination Group to coordinate the federal response to cyber incidents of national significance. It is a forum of national security, law enforcement, defense, intelligence, and other government agencies that coordinates intragovernmental and public/private preparedness and response to and recovery from national level cyber incidents and physical attacks that have significant cyber consequences. In the area of maritime security, DHS has also taken actions to partner with a variety of stakeholders. For example, the Coast Guard reported to us that as of June 2006, 35 sector command centers had been created and that these centers were the primary conduit for daily collaboration and coordination between the Coast Guard and its port partner agencies. We also found that through its Customs-Trade Partnership Against Terrorism Program, CBP has worked in partnership with private companies to review their supply chain security plans to improve members’ overall security. 
However, DHS has faced some challenges in developing other effective partnerships and in clarifying the roles and responsibilities of various homeland security stakeholders. For example, in February 2007 we testified that because DHS has only limited authority to address security at chemical facilities it must continue to work with the chemical industry to ensure that it is assessing vulnerabilities and implementing security measures. Also, while TSA has taken steps to collaborate with federal and private sector stakeholders in the implementation of its Secure Flight program, in 2006 we reported that these stakeholders stated that TSA has not provided them with the information they would need to support TSA’s efforts as they move forward with the program. In addition, we reported in September 2005 that TSA did not effectively involve private sector stakeholders in its decision making process for developing security standards for passenger rail assets. We recommended, and DHS developed, security standards that reflected industry best practices and could be measured, monitored, and enforced by TSA rail inspectors and, if appropriate, by rail asset owners. We have also made other recommendations to DHS to help strengthen its partnership efforts in the areas of transportation security and research and development. Further, lack of clarity regarding roles and responsibilities caused DHS difficulties in coordinating with its emergency preparedness and response partners in responding to Hurricanes Katrina and Rita. For example, the Red Cross and FEMA had differing views about their roles and responsibilities under the National Response Plan, which hampered efforts to coordinate federal mass care assistance. Department of Labor and FEMA officials also disagreed about which agency was responsible for ensuring the safety and health of response and recovery workers. 
This lack of clarity about each other’s roles and procedures resulted in delayed implementation of the National Response Plan’s Worker Safety and Health Support Annex. We recommended that DHS take steps to improve partnering and coordination efforts as they relate to emergency preparedness and response, including to seek input from the state and local governments and private sector entities, such as the Red Cross, on the development and implementation of key capabilities, including those for interoperable communications. Given the dominant role that DHS plays in securing the homeland, it is critical that the department’s mission programs and management systems and functions operate as efficiently and effectively as possible. In the more than 4 years since its establishment, the department has taken important actions to secure the border and the transportation sector and to defend against, prepare for, and respond to threats and disasters. DHS has had to undertake these critical missions while also working to transform itself into a fully functioning cabinet department—a difficult undertaking for any organization and one that can take, at a minimum, 5 to 7 years to complete even under less daunting circumstances. At the same time, a variety of factors, including Hurricanes Katrina and Rita, threats to and attacks on transportation systems in other countries, and new responsibilities and authorities provided by Congress have forced the department to reassess its priorities and reallocate resources to address key domestic and international events and to respond to emerging issues and threats. 
As it moves forward, DHS will continue to face the challenges that have affected its operations thus far, including transforming into a high-performing, results-oriented agency; developing results-oriented goals and measures to effectively assess performance; developing and implementing a risk-based approach to guide resource decisions; and establishing effective frameworks and mechanisms for sharing information and coordinating with homeland security partners. DHS has undertaken efforts to address these challenges but will need to give continued attention to these efforts in order to efficiently and effectively identify and prioritize mission and management needs, implement efforts to address those needs, and allocate resources accordingly. Efforts to address these challenges will be especially important over the next several years given the threat environment and long-term fiscal imbalance facing the nation. To address these challenges, DHS will need to continue its efforts to develop a results-oriented mission and management framework to guide implementation efforts and progress toward achieving desired outcomes. In moving forward, it will also be important for DHS to routinely reassess its mission and management goals, measures, and milestones to evaluate progress made, identify past and emerging obstacles, and examine alternatives to address those obstacles and effectively implement its missions. We have made nearly 700 recommendations to DHS on initiatives and reforms that would enhance its ability to implement its core mission and management functions, including developing performance goals and measures and setting milestones for key programs, making resource allocation decisions based on risk assessments, and developing and implementing internal controls to help ensure program effectiveness. DHS has generally agreed with our prior recommendations. 
Moreover, taking those actions that we have suggested for agencies on our high-risk list provides a good road map for DHS as it works to further develop management structures that, once in place, could help the department more efficiently and effectively implement its mission and management functions. To be removed from our high-risk list, agencies first have to produce a corrective action plan that defines the root causes of identified problems, identifies effective solutions to those problems, and provides for substantially completing corrective measures in the near term. Such a plan should include performance metrics and milestones, as well as mechanisms to monitor progress. In the spring of 2006, DHS provided us with a draft corrective action plan that did not contain key elements we have identified as necessary for an effective corrective action plan, including specific actions to address identified objectives, and this plan has not yet been approved by the Office of Management and Budget. Second, agencies must demonstrate significant progress in addressing the problems identified in their corrective action plans. It will be important for DHS to become more transparent and minimize recurring delays in providing access to information on its programs and operations so that Congress, GAO, and others can independently assess its efforts. Finally, agencies, in particular top leadership, must demonstrate a commitment to sustain initial improvements in their performance over the long term. Although DHS leaders have expressed their intent to integrate legacy agencies into the new department, they have not dedicated the resources needed to oversee this effort. A well-managed, high-performing Department of Homeland Security is essential to meeting the significant homeland security challenges facing the nation. 
As DHS continues to evolve, implement its programs, and integrate its functions, we will continue to review its progress and performance and provide information to Congress and the public on its efforts. We requested comments on this report from the Secretary of Homeland Security. In comments dated July 20, 2007, and signed by the Under Secretary for Management (reprinted in their entirety in appendix II), DHS took issue with our methodology and disagreed with the conclusions we reached for 42 of the 171 performance expectations (specifically 41 of the 84 performance expectations where we assessed DHS as not having achieved the expectation and 1 of the 9 performance expectations for which we did not make an assessment). DHS also provided technical comments, which we considered and incorporated where appropriate. DHS raised five general issues with our methodology. First, DHS believes that we altered the criteria by which we would judge the department’s progress in changing our terminology from “generally addressed” to “generally achieved.” As we communicated to DHS, we did not change the underlying assessment approach or evaluation criteria. Rather, we changed the way that we characterized DHS’s progress for each performance expectation. For example, our definition for “generally addressed” and “generally achieved” did not change: “Our work has shown that DHS has taken steps to effectively satisfy the key elements of the performance expectation but may not have satisfied all of the elements.” The change from “addressed” to “achieved” was not a change in methodology, criteria, or standards but only a change in language to better convey, in the context of results-oriented government, the legislative and executive intent behind these performance expectations that DHS achieve these expectations rather than merely begin to take steps that apply or are relevant to them. Second, DHS took issue with the binary standard we used to assess each performance expectation. 
While we acknowledge the binary standard we applied is not perfect, we believe it is appropriate for this review because the administration generally has not established quantitative goals and measures for the performance expectations in connection with the various mission and management areas. Thus, we could not assess where along a spectrum of progress DHS stood for individual performance expectations. We chose the 2-step process for assessing DHS’s progress—using a binary standard for individual performance expectations and a spectrum for broad mission and management areas—and fully disclosed it to and discussed it with DHS officials at the outset and throughout the review. Third, DHS was concerned about how we defined our criteria for assessing DHS’s progress in achieving each performance expectation and an apparent shift of criteria we applied after the department supplied us additional information and documents. With regard to how we defined our criteria and the performance expectations, the key elements for the expectations were inherent to each one, and we discussed these elements in each assessment. Further, we did not shift our criteria. Rather we employed a process by which we disclosed our preliminary analysis and assessments to DHS, received and analyzed additional documents and statements from DHS officials, and updated (and in many cases changed) our preliminary assessments based on the additional inputs. This process resulted in an improvement, a diminution, or no change in our assessment of the applicable area. In some cases, we added language to clarify the basis of our assessment after our review of the additional information DHS provided. Fourth, DHS raised concerns that we did not “normalize” the application of our criteria by the many GAO analysts who had input to this review. 
Our methodology involved significant input by these analysts because they have had experience with the mission and management areas we were evaluating and were knowledgeable about the programs, specific performance expectations, activities, data, and results from each area. A core team of GAO analysts and managers reviewed all the inputs from these other GAO staff to ensure the consistent application of our methodology, criteria, and analytical process. In addition, our quality control process included detailed reviews of the facts included in this report, as well as assurance that we followed GAO’s policies and generally accepted government auditing standards. Finally, DHS points out that we treated all performance expectations as if they were of equal significance. In our scope and methodology section we recognize that qualitative differences between the performance expectations exist, but we did not apply a weight to the performance expectations because congressional, departmental, and other stakeholders’ views on the relative priority of each performance expectation may be different and we did not believe it was appropriate to substitute our judgment for theirs. DHS disagreed with our assessment of 42 of the 171 performance expectations—including 41 of the 84 performance expectations we assessed as generally not achieved—contending that we did not fully take account of all the actions it has taken relative to each expectation. Specifically, DHS believes that we expected DHS to achieve an entire expectation in cases where both DHS and we agree that ultimate achievement will not be possible for several more years, such as in the areas of border security and science and technology. This report provides Congress and the public with an assessment of DHS’s progress as of July 2007 and does not reflect the extent to which DHS should have or could have made more progress. 
We believe that it is appropriate, after pointing out the expectation for a multiyear program and documenting the activities DHS has actually accomplished to date, to reach a conclusion about whether DHS had not implemented the program after 4 years. DHS’s concern that we have not adequately used or interpreted additional information it provided us, such as for performance expectations in the areas of aviation security and emergency preparedness and response, has little basis. We fully considered all information and documents DHS provided and described how we applied this information in the assessment portion of each performance expectation. In some cases DHS only provided us with testimonial information regarding its actions to achieve each performance expectation, but did not provide us with documentation verifying these actions. In the absence of such documentation to support DHS’s claims, we concluded that DHS had generally not achieved the expectations. In other cases, the information and documents DHS provided did not convince us that DHS had generally achieved the performance expectation as stated or as we had interpreted it. In these cases, we explain the basis for our conclusions in the “GAO Assessment” sections. Further, in some cases the information and documents DHS provided were not relevant to the specific performance expectation; in these situations we did not discuss them in our assessment. In addition, in some of its comments on individual performance expectations, DHS referenced new information that it did not provide to us during our review. In these cases we either explain our views on the information, or in one case we have changed our conclusion to “no assessment made”. Overall, we appreciate DHS’s concerns and recognize that in a broad-based endeavor such as this, some level of disagreement is inevitable, especially at any given point in time. 
However, we have been as transparent as possible regarding our purpose, methodology, and professional judgments. In table 44, we have summarized DHS’s comments on the 42 performance expectations and our response to those comments. As arranged with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after the date of this report. At that time, we will send copies of this report to the Secretary of Homeland Security, the Director of the Office of Management and Budget, and appropriate congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-8777, or rabkinn@gao.gov. Contact points for each mission and management area are listed in appendix I. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. In addition to the person named above, Christopher Keisling, Assistant Director; Jason Barnosky; Cathleen A. Berrick; Sharon Caudle; Virginia Chanley; Michele Fejfar; Rebecca Gambler; Kathryn Godfrey; Stephanie Hockman; Tracey King; Thomas Lombardi; Jan Montgomery; Octavia Parks; and Sue Ramanathan made key contributions to this report. Other contributors to this report included Eugene Aloise; John Bagnulo; Mark Bird; Nancy Briggs; Kristy Brown; Stephen Caldwell; Frances Cook; Stephen Donahue; Jeanette Espinola; Jess Ford; Amanda Gill; Mark Goldstein; Ellen Grady; Samuel Hinojosa; Randolph Hite; Daniel Hoy; John Hutton; William O. 
Jenkins, Jr.; Casey Keplinger; Kirk Kiester; Eileen Larence; Leena Mathew; Kieran McCarthy; Tiffany Mostert; Shannin O’Neill; Bonita Oden; David Powner; Jerry Seigler; Katherine Siggerud; Richard Stana; Bernice Steinhardt; John Stephenson; Sarah Veale; John Vocino; Gregory Wilshusen; Eugene Wisnoksi; and William T. Woods. Homeland Security: Prospects For Biometric US-VISIT Exit Capability Remain Unclear. GAO-07-1044T. Washington, D.C.: June 28, 2007. Border Patrol: Costs and Challenges Related to Training New Agents. GAO-07-997T. Washington, D.C.: June 19, 2007. Homeland Security: Information on Training New Border Patrol Agents. GAO-07-540R. Washington, D.C.: March 30, 2007. Homeland Security: US-VISIT Program Faces Operational, Technological, and Management Challenges. GAO-07-632T. Washington, D.C.: March 20, 2007. Secure Border Initiative: SBInet Planning and Management Improvements Needed to Control Risks. GAO-07-504T. Washington, D.C.: February 27, 2007. Homeland Security: US-VISIT Has Not Fully Met Expectations and Longstanding Program Management Challenges Need to Be Addressed. GAO-07-499T. Washington, D.C.: February 16, 2007. Secure Border Initiative: SBInet Expenditure Plan Needs to Better Support Oversight and Accountability. GAO-07-309. Washington, D.C.: February 15, 2007. Homeland Security: Planned Expenditures for U.S. Visitor and Immigrant Status Program Need to Be Adequately Defined and Justified. GAO-07-278. Washington, D.C.: February 14, 2007. Border Security: US-VISIT Program Faces Strategic, Operational, and Technological Challenges at Land Ports of Entry. GAO-07-378T. Washington, D.C.: January 31, 2007. Border Security: US-VISIT Program Faces Strategic, Operational, and Technological Challenges at Land Ports of Entry. GAO-07-248. Washington, D.C.: December 6, 2006. 
Department of Homeland Security and Department of State: Documents Required for Travelers Departing From or Arriving in the United States at Air Ports-of-Entry From Within the Western Hemisphere. GAO-07-250R. Washington, D.C.: December 6, 2006. Border Security: Stronger Actions Needed to Assess and Mitigate Risks of the Visa Waiver Program. GAO-06-1090T. Washington, D.C.: September 7, 2006. Illegal Immigration: Border-Crossing Deaths Have Doubled Since 1995; Border Patrol’s Efforts to Prevent Deaths Have Not Been Fully Evaluated. GAO-06-770. Washington, D.C.: August 15, 2006. Border Security: Continued Weaknesses in Screening Entrants into the United States. GAO-06-976T. Washington, D.C.: August 2, 2006. Border Security: Stronger Actions Needed to Assess and Mitigate Risks of the Visa Waiver Program. GAO-06-854. Washington, D.C.: July 28, 2006. Process for Admitting Additional Countries into the Visa Waiver Program. GAO-06-835R. Washington, D.C.: July 28, 2006. Intellectual Property: Initial Observations on the STOP Initiative and U.S. Border Efforts to Reduce Piracy. GAO-06-1004T. Washington, D.C.: July 26, 2006. Border Security: Investigators Transported Radioactive Sources Across Our Nation’s Borders at Two Locations. GAO-06-940T. Washington, D.C.: July 7, 2006. Border Security: Investigators Transported Radioactive Sources Across Our Nation’s Borders at Two Locations. GAO-06-939T. Washington, D.C.: July 5, 2006. Information on Immigration Enforcement and Supervisory Promotions in the Department of Homeland Security’s Immigration and Customs Enforcement and Customs and Border Protection. GAO-06-751R. Washington, D.C.: June 13, 2006. Homeland Security: Contract Management and Oversight for Visitor and Immigrant Status Program Need to Be Strengthened. GAO-06-404. Washington, D.C.: June 9, 2006. Observations on Efforts to Implement the Western Hemisphere Travel Initiative on the U.S. Border with Canada. GAO-06-741R. Washington, D.C.: May 25, 2006. 
Homeland Security: Management and Coordination Problems Increase the Vulnerability of U.S. Agriculture to Foreign Pests and Disease. GAO-06-644. Washington, D.C.: May 19, 2006. Border Security: Reassessment of Consular Resource Requirements Could Help Address Visa Delays. GAO-06-542T. Washington, D.C.: April 4, 2006. Border Security: Investigators Transported Radioactive Sources Across Our Nation’s Borders at Two Locations. GAO-06-583T. Washington, D.C.: March 28, 2006. Border Security: Investigators Successfully Transported Radioactive Sources Across Our Nation’s Borders at Selected Locations. GAO-06-545R. Washington, D.C.: March 28, 2006. Homeland Security: Better Management Practices Could Enhance DHS’s Ability to Allocate Investigative Resources. GAO-06-462T. Washington, D.C.: March 28, 2006. Combating Nuclear Smuggling: DHS Has Made Progress Deploying Radiation Detection Equipment at U.S. Ports-of-Entry, but Concerns Remain. GAO-06-389. Washington, D.C.: March 22, 2006. Combating Nuclear Smuggling: Corruption, Maintenance, and Coordination Problems Challenge U.S. Efforts to Provide Radiation Detection Equipment to Other Countries. GAO-06-311. Washington, D.C.: March 14, 2006. Border Security: Key Unresolved Issues Justify Reevaluation of Border Surveillance Technology Program. GAO-06-295. Washington, D.C.: February 22, 2006. Homeland Security: Recommendations to Improve Management of Key Border Security Program Need to Be Implemented. GAO-06-296. Washington, D.C.: February 14, 2006. Homeland Security: Visitor and Immigrant Status Program Operating, but Management Improvements Are Still Needed. GAO-06-318T. Washington, D.C.: January 25, 2006. Border Security: Strengthened Visa Process Would Benefit from Improvements in Staffing and Information Sharing. GAO-05-859. Washington, D.C.: September 13, 2005. Border Security: Opportunities to Increase Coordination of Air and Marine Assets. GAO-05-543. Washington, D.C.: August 12, 2005. 
Border Security: Actions Needed to Strengthen Management of Department of Homeland Security’s Visa Security Program. GAO-05-801. Washington, D.C.: July 29, 2005. Border Patrol: Available Data on Interior Checkpoints Suggest Differences in Sector Performance. GAO-05-435. Washington, D.C.: July 22, 2005. Combating Nuclear Smuggling: Efforts to Deploy Radiation Detection Equipment in the United States and in Other Countries. GAO-05-840T. Washington, D.C.: June 21, 2005. Homeland Security: Performance of Foreign Student and Exchange Visitor Information System Continues to Improve, But Issues Remain. GAO-05-440T. Washington, D.C.: March 17, 2005. Homeland Security: Some Progress Made, but Many Challenges Remain on U.S. Visitor and Immigrant Status Indicator Technology Program. GAO-05-202. Washington, D.C.: February 23, 2005. Border Security: Streamlined Visas Mantis Program Has Lowered Burden on Foreign Science Students and Scholars, but Further Refinements Needed. GAO-05-198. Washington, D.C.: February 18, 2005. Border Security: Joint, Coordinated Actions by State and DHS Needed to Guide Biometric Visas and Related Programs. GAO-04-1080T. Washington, D.C.: September 9, 2004. Border Security: State Department Rollout of Biometric Visas on Schedule, but Guidance Is Lagging. GAO-04-1001. Washington, D.C.: September 9, 2004. Border Security: Consular Identification Cards Accepted within United States, but Consistent Federal Guidance Needed. GAO-04-881. Washington, D.C.: August 24, 2004. Border Security: Additional Actions Needed to Eliminate Weaknesses in the Visa Revocation Process. GAO-04-795. Washington, D.C.: July 13, 2004. Border Security: Additional Actions Needed to Eliminate Weaknesses in the Visa Revocation Process. GAO-04-899T. Washington, D.C.: July 13, 2004. Border Security: Agencies Need to Better Coordinate Their Strategies and Operations on Federal Lands. GAO-04-590. Washington, D.C.: June 16, 2004. 
Overstay Tracking: A Key Component of Homeland Security and a Layered Defense. GAO-04-82. Washington, D.C.: May 21, 2004. Homeland Security: First Phase of Visitor and Immigration Status Program Operating, but Improvements Needed. GAO-04-586. Washington, D.C.: May 11, 2004. Homeland Security: Risks Facing Key Border and Transportation Security Program Need to Be Addressed. GAO-04-569T. Washington, D.C.: March 18, 2004. Border Security: Improvements Needed to Reduce Time Taken to Adjudicate Visas for Science Students and Scholars. GAO-04-443T. Washington, D.C.: February 25, 2004. Border Security: Improvements Needed to Reduce Time Taken to Adjudicate Visas for Science Students and Scholars. GAO-04-371. Washington, D.C.: February 25, 2004. Homeland Security: Overstay Tracking Is a Key Component of a Layered Defense. GAO-04-170T. Washington, D.C.: October 16, 2003. Security: Counterfeit Identification Raises Homeland Security Concerns. GAO-04-133T. Washington, D.C.: October 1, 2003. Homeland Security: Risks Facing Key Border and Transportation Security Program Need to Be Addressed. GAO-03-1083. Washington, D.C.: September 19, 2003. Security: Counterfeit Identification and Identification Fraud Raise Security Concerns. GAO-03-1147T. Washington, D.C.: September 9, 2003. Land Border Ports of Entry: Vulnerabilities and Inefficiencies in the Inspections Process. GAO-03-1084R. Washington, D.C.: August 18, 2003. Federal Law Enforcement Training Center: Capacity Planning and Management Oversight Need Improvement. GAO-03-736. Washington, D.C.: July 24, 2003. Border Security: New Policies and Increased Interagency Coordination Needed to Improve Visa Process. GAO-03-1013T. Washington, D.C.: July 15, 2003. Border Security: New Policies and Procedures Are Needed to Fill Gaps in the Visa Revocation Process. GAO-03-908T. Washington, D.C.: June 18, 2003. Border Security: New Policies and Procedures Are Needed to Fill Gaps in the Visa Revocation Process. GAO-03-798. 
Washington, D.C.: June 18, 2003. Homeland Security: Challenges Facing the Department of Homeland Security in Balancing its Border Security and Trade Facilitation Missions. GAO-03-902T. Washington, D.C.: June 16, 2003. Counterfeit Documents Used to Enter the United States From Certain Western Hemisphere Countries Not Detected. GAO-03-713T. Washington, D.C.: May 13, 2003. Information Technology: Terrorist Watch Lists Should Be Consolidated to Promote Better Integration and Sharing. GAO-03-322. Washington, D.C.: April 15, 2003. Border Security: Challenges in Implementing Border Technology. GAO-03-546T. Washington, D.C.: March 12, 2003. Alien Detention Standards: Telephone Access Problems Were Pervasive at Detention Facilities; Other Deficiencies Did Not Show a Pattern of Noncompliance. GAO-07-875. Washington, D.C.: July 6, 2007. Employment Verification: Challenges Exist in Implementing a Mandatory Electronic Verification System. GAO-07-924T. Washington, D.C.: June 7, 2007. Foreign Workers: Information on Selected Countries’ Experiences. GAO-06-1055. Washington, D.C.: September 8, 2006. Information Technology: Immigration and Customs Enforcement Is Beginning to Address Infrastructure Modernization Program Weaknesses, but Key Improvements Still Needed. GAO-06-823. Washington, D.C.: July 27, 2006. Immigration Enforcement: Benefits and Limitations to Using Earnings Data to Identify Unauthorized Work. GAO-06-814R. Washington, D.C.: July 11, 2006. Immigration Enforcement: Weaknesses Hinder Employment Verification and Worksite Enforcement Efforts. GAO-06-895T. Washington, D.C.: June 19, 2006. Information on Immigration Enforcement and Supervisory Promotions in the Department of Homeland Security’s Immigration and Customs Enforcement and Customs and Border Protection. GAO-06-751R. Washington, D.C.: June 13, 2006. Homeland Security: Better Management Practices Could Enhance DHS’s Ability to Allocate Investigative Resources. GAO-06-462T. Washington, D.C.: March 28, 2006. 
Information Technology: Management Improvements Needed on Immigration and Customs Enforcement’s Infrastructure Modernization Program. GAO-05-805. Washington, D.C.: September 7, 2005. Immigration Enforcement: Weaknesses Hinder Employment Verification and Worksite Enforcement Efforts. GAO-05-813. Washington, D.C.: August 31, 2005. Combating Alien Smuggling: The Federal Response Can Be Improved. GAO-05-892T. Washington, D.C.: July 12, 2005. Combating Alien Smuggling: Opportunities Exist to Improve the Federal Response. GAO-05-305. Washington, D.C.: May 27, 2005. Information on Certain Illegal Aliens Arrested in the United States. GAO-05-646R. Washington, D.C.: May 9, 2005. Department of Homeland Security: Addressing Management Challenges that Face Immigration Enforcement Agencies. GAO-05-664T. Washington, D.C.: May 5, 2005. Information on Criminal Aliens Incarcerated in Federal and State Prisons and Local Jails. GAO-05-337R. Washington, D.C.: April 7, 2005. Homeland Security: Performance of Foreign Student and Exchange Visitor Information System Continues to Improve, But Issues Remain. GAO-05-440T. Washington, D.C.: March 17, 2005. Alien Registration: Usefulness of a Nonimmigrant Alien Annual Address Reporting Requirement Is Questionable. GAO-05-204. Washington, D.C.: January 28, 2005. Homeland Security: Management Challenges Remain in Transforming Immigration Programs. GAO-05-81. Washington, D.C.: October 14, 2004. Immigration Enforcement: DHS Has Incorporated Immigration Enforcement Objectives and Is Addressing Future Planning Requirements. GAO-05-66. Washington, D.C.: October 8, 2004. Homeland Security: Performance of Information System to Monitor Foreign Students and Exchange Visitors Has Improved, but Issues Remain. GAO-04-690. Washington, D.C.: June 18, 2004. Investigations of Terrorist Financing, Money Laundering, and Other Financial Crimes. GAO-04-464R. Washington, D.C.: February 20, 2004. 
Combating Money Laundering: Opportunities Exist to Improve the National Strategy. GAO-03-813. Washington, D.C.: September 26, 2003. Department of Homeland Security: Adjustment of the Immigration and Naturalization Benefit Application and Petition Fee Schedule. GAO-07-946R. Washington, D.C.: June 15, 2007. Immigration Benefits: Sixteenth Report Required by the Haitian Refugee Immigration Fairness Act of 1998. GAO-07-796R. Washington, D.C.: April 27, 2007. DHS Immigration Attorneys: Workload Analysis and Workforce Planning Efforts Lack Data and Documentation. GAO-07-206. Washington, D.C.: April 17, 2007. Foreign Physicians: Data on Use of J-1 Visa Waivers Needed to Better Address Physician Shortages. GAO-07-52. Washington, D.C.: November 30, 2006. Immigration Benefits: Fifteenth Report Required by the Haitian Refugee Immigration Fairness Act of 1998. GAO-07-168R. Washington, D.C.: November 9, 2006. Immigration Benefits: Additional Efforts Needed to Help Ensure Alien Files Are Located when Needed. GAO-07-85. Washington, D.C.: October 27, 2006. Estimating the Undocumented Population: A “Grouped Answers” Approach to Surveying Foreign-Born Respondents. GAO-06-775. Washington, D.C.: September 29, 2006. Executive Office for Immigration Review: Caseload Performance Reporting Needs Improvement. GAO-06-771. Washington, D.C.: August 11, 2006. H-1B Visa Program: More Oversight by Labor Can Improve Compliance with Program Requirements. GAO-06-901T. Washington, D.C.: June 22, 2006. H-1B Visa Program: Labor Could Improve Its Oversight and Increase Information Sharing with Homeland Security. GAO-06-720. Washington, D.C.: June 22, 2006. Immigration Benefits: Circumstances under Which Petitioners’ Sex Offenses May Be Disclosed to Beneficiaries. GAO-06-735. Washington, D.C.: June 14, 2006. Immigration Benefits: Fourteenth Report Required by the Haitian Refugee Immigration Fairness Act of 1998. GAO-06-589R. Washington, D.C.: April 21, 2006. 
Information Technology: Near-Term Effort to Automate Paper-Based Immigration Files Needs Planning Improvements. GAO-06-375. Washington, D.C.: March 31, 2006. International Remittances: Different Estimation Methodologies Produce Different Results. GAO-06-210. Washington, D.C.: March 28, 2006. Immigration Benefits: Additional Controls and a Sanctions Strategy Could Enhance DHS’ Ability to Control Benefit Fraud. GAO-06-259. Washington, D.C.: March 10, 2006. Social Security Administration: Procedures for Issuing Numbers and Benefits to the Foreign-Born. GAO-06-253T. Washington, D.C.: March 2, 2006. Immigration Benefits: Improvements Needed to Address Backlogs and Ensure Quality of Adjudications. GAO-06-20. Washington, D.C.: November 21, 2005. Immigration Benefits: Thirteenth Report Required by the Haitian Refugee Immigration Fairness Act of 1998. GAO-06-122R. Washington, D.C.: October 21, 2005. Taxpayer Information: Options Exist to Enable Data Sharing Between IRS and USCIS but Each Presents Challenges. GAO-06-100. Washington, D.C.: October 11, 2005. Immigration Services: Better Contracting Practices Needed at Call Centers. GAO-05-526. Washington, D.C.: June 30, 2005. Immigration Benefits: Twelfth Report Required by the Haitian Refugee Immigration Fairness Act of 1998. GAO-05-481R. Washington, D.C.: April 14, 2005. Immigrant Investors: Small Number of Participants Attributed to Pending Regulations and Other Factors. GAO-05-256. Washington, D.C.: April 1, 2005. Immigration Benefits: Eleventh Report Required by the Haitian Refugee Immigration Fairness Act of 1998. GAO-04-1030R. Washington, D.C.: August 13, 2004. Taxpayer Information: Data Sharing and Analysis May Enhance Tax Compliance and Improve Immigration Eligibility Decisions. GAO-04-972T. Washington, D.C.: July 21, 2004. Illegal Alien Schoolchildren: Issues in Estimating State-by-State Costs. GAO-04-733. Washington, D.C.: June 21, 2004. 
Undocumented Aliens: Questions Persist about Their Impact on Hospitals’ Uncompensated Care Costs. GAO-04-472. Washington, D.C.: May 21, 2004. Immigration Application Fees: Current Fees Are Not Sufficient to Fund U.S. Citizenship and Immigration Services’ Operations. GAO-04-309R. Washington, D.C.: January 5, 2004. Immigration Benefits: Tenth Report Required by the Haitian Refugee Immigration Fairness Act of 1998. GAO-04-189R. Washington, D.C.: October 17, 2003. Social Security Administration: Actions Taken to Strengthen Procedures for Issuing Social Security Numbers to Noncitizens, but Some Weaknesses Remain. GAO-04-12. Washington, D.C.: October 15, 2003. Social Security Numbers: Improved SSN Verification and Exchange of States’ Driver Records Would Enhance Identity Verification. GAO-03-920. Washington, D.C.: September 15, 2003. H-1B Foreign Workers: Better Tracking Needed to Help Determine H-1B Program’s Effects on U.S. Workforce. GAO-03-883. Washington, D.C.: September 10, 2003. Supplemental Security Income: SSA Could Enhance Its Ability to Detect Residency Violations. GAO-03-724. Washington, D.C.: July 29, 2003. Immigration Benefits: Ninth Report Required by the Haitian Refugee Immigration Fairness Act of 1998. GAO-03-681R. Washington, D.C.: April 21, 2003. Aviation Security: Efforts to Strengthen International Passenger Prescreening Are Under Way, but Planning and Implementation Issues Remain. GAO-07-346. Washington, D.C.: May 16, 2007. Aviation Security: Federal Efforts to Secure U.S.-Bound Air Cargo Are in the Early Stages and Could Be Strengthened. GAO-07-660. Washington, D.C.: April 30, 2007. Aviation Security: TSA’s Change to Its Prohibited Items List Has Not Resulted in Any Reported Security Incidents, but the Impact of the Change on Screening Operations Is Inconclusive. GAO-07-634R. Washington, D.C.: April 25, 2007. 
Aviation Security: Risk, Experience, and Customer Concerns Drive Changes to Airline Passenger Screening Procedures, but Evaluation and Documentation of Proposed Changes Could Be Improved. GAO-07-634. Washington, D.C.: April 16, 2007. Aviation Security: Cost Estimates Related to TSA Funding of Checked Baggage Screening Systems at Los Angeles and Ontario Airports. GAO-07-445. Washington, D.C.: March 30, 2007. Aviation Security: TSA’s Staffing Allocation Model Is Useful for Allocating Staff among Airports, but Its Assumptions Should Be Systematically Reassessed. GAO-07-299. Washington, D.C.: February 28, 2007. Aviation Security: Progress Made in Systematic Planning to Guide Key Investment Decisions, but More Work Remains. GAO-07-448T. Washington, D.C.: February 13, 2007. Transportation Security Administration: Oversight of Explosive Detection Systems Maintenance Contracts Can Be Strengthened. GAO-06-795. Washington, D.C.: July 31, 2006. Aviation Security: TSA Oversight of Checked Baggage Screening Procedures Could Be Strengthened. GAO-06-869. Washington, D.C.: July 28, 2006. Aviation Security: TSA Has Strengthened Efforts to Plan for the Optimal Deployment of Checked Baggage Screening Systems, but Funding Uncertainties Remain. GAO-06-875T. Washington, D.C.: June 29, 2006. Aviation Security: Management Challenges Remain for the Transportation Security Administration’s Secure Flight Program. GAO-06-864T. Washington, D.C.: June 14, 2006. Aviation Security: Further Study of Safety and Effectiveness and Better Management Controls Needed if Air Carriers Resume Interest in Deploying Less-than-Lethal Weapons. GAO-06-475. Washington, D.C.: May 26, 2006. Aviation Security: Transportation Security Administration Has Made Progress in Managing a Federal Workforce and Ensuring Security at U.S. Airports, but Challenges Remain. GAO-06-597T. Washington, D.C.: April 4, 2006. Aviation Security: Enhancements Made in Passenger and Checked Baggage Screening, but Challenges Remain. 
GAO-06-371T. Washington, D.C.: April 4, 2006. Aviation Security: Progress Made to Set Up Program Using Private-Sector Airport Screeners, but More Work Remains. GAO-06-166. Washington, D.C.: March 31, 2006. Aviation Security: Significant Management Challenges May Adversely Affect Implementation of the Transportation Security Administration’s Secure Flight Program. GAO-06-374T. Washington, D.C.: February 9, 2006. Aviation Security: Federal Air Marshal Service Could Benefit from Improved Planning and Controls. GAO-06-203. Washington, D.C.: November 28, 2005. Aviation Security: Federal Action Needed to Strengthen Domestic Air Cargo Security. GAO-06-76. Washington, D.C.: October 17, 2005. Transportation Security Administration: More Clarity on the Authority of Federal Security Directors Is Needed. GAO-05-935. Washington, D.C.: September 23, 2005. Aviation Security: Flight and Cabin Crew Member Security Training Strengthened, but Better Planning and Internal Controls Needed. GAO-05-781. Washington, D.C.: September 6, 2005. Aviation Security: Transportation Security Administration Did Not Fully Disclose Uses of Personal Information during Secure Flight Program Testing in Initial Privacy Notices, but Has Recently Taken Steps to More Fully Inform the Public. GAO-05-864R. Washington, D.C.: July 22, 2005. Aviation Security: Better Planning Needed to Optimize Deployment of Checked Baggage Screening Systems. GAO-05-896T. Washington, D.C.: July 13, 2005. Aviation Security: Screener Training and Performance Measurement Strengthened, but More Work Remains. GAO-05-457. Washington, D.C.: May 2, 2005. Aviation Security: Secure Flight Development and Testing Under Way, but Risks Should Be Managed as System Is Further Developed. GAO-05-356. Washington, D.C.: March 28, 2005. Aviation Security: Systematic Planning Needed to Optimize the Deployment of Checked Baggage Screening Systems. GAO-05-365. Washington, D.C.: March 15, 2005. 
Aviation Security: Measures for Testing the Impact of Using Commercial Data for the Secure Flight Program. GAO-05-324. Washington, D.C.: February 23, 2005. Transportation Security: Systematic Planning Needed to Prioritize Resources. GAO-05-357T. Washington, D.C.: February 15, 2005. Aviation Security: Preliminary Observations on TSA’s Progress to Allow Airports to Use Private Passenger and Baggage Screening. GAO-05-126. Washington, D.C.: November 19, 2004. General Aviation Security: Increased Federal Oversight Is Needed, but Continued Partnership with the Private Sector Is Critical to Long-Term Success. GAO-05-144. Washington, D.C.: November 10, 2004. Aviation Security: Further Steps Needed to Strengthen the Security of Commercial Airport Perimeters and Access Controls. GAO-04-728. Washington, D.C.: June 4, 2004. Aviation Security: Challenges in Using Biometric Technologies. GAO-04-785T. Washington, D.C.: May 19, 2004. Aviation Security: Private Security Screening Contractors Have Little Flexibility to Implement Innovative Approaches. GAO-04-505T. Washington, D.C.: April 22, 2004. Aviation Security: Challenges Delay Implementation of Computer- Assisted Passenger Prescreening System. GAO-04-504T. Washington, D.C.: March 17, 2004. Aviation Security: Computer-Assisted Passenger Prescreening System Faces Significant Implementation Challenges. GAO-04-385. Washington, D.C.: February 13, 2004. Aviation Security: Computer-Assisted Passenger Prescreening System Faces Significant Implementation Challenges. GAO-04-385. Washington, D.C.: February 12, 2004. Aviation Security: Challenges Exist in Stabilizing and Enhancing Passenger and Baggage Screening Operations. GAO-04-440T. Washington, D.C.: February 12, 2004. Aviation Security: Efforts to Measure Effectiveness and Strengthen Security Programs. GAO-04-285T. Washington, D.C.: November 20, 2003. 
Aviation Security: Federal Air Marshal Service Is Addressing Challenges of Its Expanded Mission and Workforce, but Additional Actions Needed. GAO-04-242. Washington, D.C.: November 19, 2003. Aviation Security: Efforts to Measure Effectiveness and Address Challenges. GAO-04-232T. Washington, D.C.: November 5, 2003. Airport Passenger Screening: Preliminary Observations on Progress Made and Challenges Remaining. GAO-03-1173. Washington, D.C.: September 24, 2003. Aviation Security: Progress since September 11, 2001, and the Challenges Ahead. GAO-03-1154T. Washington, D.C.: September 9, 2003. Transportation Security: Federal Action Needed to Address Security Challenges. GAO-03-843. Washington, D.C.: June 30, 2003. Transportation Security: Post-September 11th Initiatives and Long-Term Challenges. GAO-03-616T. Washington, D.C.: April 1, 2003. Passenger Rail Security: Federal Strategy and Enhanced Coordination Needed to Prioritize and Guide Security Efforts. GAO-07-583T. Washington, D.C.: March 7, 2007. Passenger Rail Security: Federal Strategy and Enhanced Coordination Needed to Prioritize and Guide Security Efforts. GAO-07-459T. Washington, D.C.: February 13, 2007. Passenger Rail Security: Federal Strategy and Enhanced Coordination Needed to Prioritize and Guide Security Efforts. GAO-07-442T. Washington, D.C.: February 6, 2007. Passenger Rail Security: Enhanced Leadership Needed to Prioritize and Guide Security Efforts. GAO-07-225T. Washington, D.C.: January 18, 2007. Passenger Rail Security: Evaluating Foreign Security Practices and Risk Can Help Guide Security Efforts. GAO-06-557T. Washington, D.C.: March 29, 2006. Passenger Rail Security: Enhanced Federal Leadership Needed to Prioritize and Guide Security Efforts. GAO-06-181T. Washington, D.C.: October 20, 2005. Passenger Rail Security: Enhanced Federal Leadership Needed to Prioritize and Guide Security Efforts. GAO-05-851. Washington, D.C.: September 9, 2005. 
Transportation Security: Systematic Planning Needed to Optimize Resources. GAO-05-357T. Washington, D.C.: February 15, 2005. Transportation Security R&D: TSA and DHS Are Researching and Developing Technologies, but Need to Improve R&D Management. GAO-04-890. Washington, D.C.: September 30, 2004. Surface Transportation: Many Factors Affect Investment Decisions. GAO-04-744. Washington, D.C.: June 30, 2004. Rail Security: Some Actions Taken to Enhance Passenger and Freight Rail Security, but Significant Challenges Remain. GAO-04-598T. Washington, D.C.: March 23, 2004. Transportation Security: Federal Action Needed to Enhance Security Efforts. GAO-03-1154T. Washington, D.C.: September 9, 2003. Transportation Security: Federal Action Needed to Help Address Security Challenges. GAO-03-843. Washington, D.C.: June 30, 2003. Transportation Security Research: Coordination Needed in Selecting and Implementing Infrastructure Vulnerability Assessments. GAO-03-502. Washington, D.C.: May 1, 2003. Rail Safety and Security: Some Actions Already Taken to Enhance Rail Security, but Risk-based Plan Needed. GAO-03-435. Washington, D.C.: April 30, 2003. Transportation Security: Post-September 11th Initiatives and Long-Term Challenges. GAO-03-616T. New York City: April 1, 2003. Information on Port Security in the Caribbean Basin. GAO-07-804R. Washington, D.C.: June 29, 2007. Maritime Security: Observations on Selected Aspects of the SAFE Port Act. GAO-07-754T. Washington, D.C.: April 26, 2007. Transportation Security: TSA Has Made Progress in Implementing the Transportation Worker Identification Credential Program, but Challenges Remain. GAO-07-681T. Washington, D.C.: April 12, 2007. Port Risk Management: Additional Federal Guidance Would Aid Ports in Disaster Planning and Recovery. GAO-07-412. Washington, D.C.: March 28, 2007. Maritime Security: Public Safety Consequences of a Liquefied Natural Gas Spill Need Clarification. GAO-07-633T. Washington, D.C.: March 21, 2007. 
Combating Nuclear Smuggling: DHS’s Decision to Procure and Deploy the Next Generation of Radiation Detection Equipment Is Not Supported by Its Cost-Benefit Analysis. GAO-07-581T. Washington, D.C.: March 14, 2007. Combating Nuclear Smuggling: DNDO Has Not Yet Collected Most of the National Laboratories’ Test Results on Radiation Portal Monitors in Support of DNDO’s Testing and Development Program. GAO-07-347R. Washington, D.C.: March 9, 2007. Maritime Security: Public Safety Consequences of a Terrorist Attack on a Tanker Carrying Liquefied Natural Gas Need Clarification. GAO-07-316. Washington, D.C.: February 22, 2007. Combating Nuclear Smuggling: DHS’s Cost-Benefit Analysis to Support the Purchase of New Radiation Detection Portal Monitors Was Not Based on Available Performance Data and Did Not Fully Evaluate All the Monitors’ Costs and Benefits. GAO-07-133R. Washington, D.C.: October 17, 2006. Transportation Security: DHS Should Address Key Challenges before Implementing the Transportation Worker Identification Credential Program. GAO-06-982. Washington, D.C.: September 29, 2006. Maritime Security: Information Sharing Efforts Are Improving. GAO-06-933T. Washington, D.C.: July 10, 2006. Coast Guard: Observations on Agency Performance, Operations and Future Challenges. GAO-06-448T. Washington, D.C.: June 15, 2006. Cargo Container Inspections: Preliminary Observations on the Status of Efforts to Improve the Automated Targeting System. GAO-06-591T. Washington, D.C.: March 30, 2006. Combating Nuclear Smuggling: DHS Has Made Progress Deploying Radiation Detection Equipment at U.S. Ports-of-Entry, but Concerns Remain. GAO-06-389. Washington, D.C.: March 22, 2006. Risk Management: Further Refinements Needed to Assess Risks and Prioritize Protective Measures at Ports and Other Critical Infrastructure. GAO-06-91. Washington, D.C.: December 15, 2005. Homeland Security: Key Cargo Security Programs Can Be Improved. GAO-05-466T. Washington, D.C.: May 26, 2005. 
Maritime Security: Enhancements Made, but Implementation and Sustainability Remain Key Challenges. GAO-05-448T. Washington, D.C.: May 17, 2005. Container Security: A Flexible Staffing Model and Minimum Equipment Requirements Would Improve Overseas Targeting and Inspection Efforts. GAO-05-557. Washington, D.C.: April 26, 2005. Maritime Security: New Structures Have Improved Information Sharing, but Security Clearance Processing Requires Further Attention. GAO-05-394. Washington, D.C.: April 15, 2005. Coast Guard: Observations on Agency Priorities in Fiscal Year 2006 Budget Request. GAO-05-364T. Washington, D.C.: March 17, 2005. Cargo Security: Partnership Program Grants Importers Reduced Scrutiny with Limited Assurance of Improved Security. GAO-05-404. Washington, D.C.: March 11, 2005. Homeland Security: Process for Reporting Lessons Learned from Seaport Exercises Needs Further Attention. GAO-05-170. Washington, D.C.: January 14, 2005. Port Security: Better Planning Needed to Develop and Operate Maritime Worker Identification Card Program. GAO-05-106. Washington, D.C.: December 10, 2004. Maritime Security: Better Planning Needed to Help Ensure an Effective Port Security Assessment Program. GAO-04-1062. Washington, D.C.: September 30, 2004. Maritime Security: Partnering Could Reduce Federal Costs and Facilitate Implementation of Automatic Vessel Identification System. GAO-04-868. Washington, D.C.: July 23, 2004. Maritime Security: Substantial Work Remains to Translate New Planning Requirements into Effective Port Security. GAO-04-838. Washington, D.C.: June 30, 2004. Homeland Security: Summary of Challenges Faced in Targeting Oceangoing Cargo Containers for Inspection. GAO-04-557T. Washington, D.C.: March 31, 2004. Coast Guard: Relationship between Resources Used and Results Achieved Needs to Be Clearer. GAO-04-432. Washington, D.C.: March 22, 2004. Container Security: Expansion of Key Customs Programs Will Require Greater Attention to Critical Success Factors. 
GAO-03-770. Washington, D.C.: July 25, 2003. Coast Guard: Comprehensive Blueprint Needed to Balance and Monitor Resource Use and Measure Performance for All Missions. GAO-03-544T. Washington, D.C.: March 12, 2003. Preliminary Information on Rebuilding Efforts in the Gulf Coast. GAO-07-809R. Washington, D.C.: June 29, 2007. Emergency Management: Most School Districts Have Developed Emergency Management Plans, but Would Benefit from Additional Federal Guidance. GAO-07-609. Washington, D.C.: June 12, 2007. Emergency Management: Status of School Districts’ Planning and Preparedness. GAO-07-821T. Washington, D.C.: May 17, 2007. Homeland Security: Observations on DHS and FEMA Efforts to Prepare for and Respond to Major and Catastrophic Disasters and Address Related Recommendations and Legislation. GAO-07-835T. Washington, D.C.: May 15, 2007. First Responders: Much Work Remains to Improve Communications Interoperability. GAO-07-301. Washington, D.C.: April 2, 2007. Emergency Preparedness: Current Emergency Alert System Has Limitations, and Development of a New Integrated System Will Be Challenging. GAO-07-0411. Washington, D.C.: March 30, 2007. Hurricanes Katrina and Rita Disaster Relief: Continued Findings of Fraud, Waste, and Abuse. GAO-07-300. Washington, D.C.: March 15, 2007. Disaster Assistance: Better Planning Needed for Housing Victims of Catastrophic Disasters. GAO-07-88. Washington, D.C.: February 28, 2007. Homeland Security Grants: Observations on Process DHS Used to Allocate Funds to Selected Urban Areas. GAO-07-381R. Washington, D.C.: February 7, 2007. Homeland Security: Applying Risk Management Principles to Guide Federal Investments. GAO-07-386T. Washington, D.C.: February 7, 2007. Budget Issues: FEMA Needs Adequate Data, Plans, and Systems to Effectively Manage Resources for Day-to-Day Operations. GAO-07-139. Washington, D.C.: January 19, 2007. 
Transportation-Disadvantaged Populations: Actions Needed to Clarify Responsibilities and Increase Preparedness for Evacuations. GAO-07-44. Washington, D.C.: December 22, 2006. Homeland Security: Assessment of the National Capital Region Strategic Plan. GAO-06-1096T. Washington, D.C.: September 28, 2006. Hurricanes Katrina and Rita: Unprecedented Challenges Exposed the Individuals and Households Program to Fraud and Abuse; Actions Needed to Reduce Such Problems in the Future. GAO-06-1013. Washington, D.C.: September 27, 2006. Catastrophic Disasters: Enhanced Leadership, Capabilities, and Accountability Controls Will Improve the Effectiveness of the Nation’s Preparedness, Response, and Recovery System. GAO-06-618. Washington, D.C.: September 6, 2006. Coast Guard: Observations on the Preparation, Response, and Recovery Missions Related to Hurricane Katrina. GAO-06-903. Washington, D.C.: July 31, 2006. Child Welfare: Federal Action Needed to Ensure States Have Plans to Safeguard Children in the Child Welfare System Displaced by Disasters. GAO-06-944. Washington, D.C.: July 28, 2006. Disaster Preparedness: Limitations in Federal Evacuation Assistance for Health Facilities Should Be Addressed. GAO-06-826. Washington, D.C.: July 20, 2006. Purchase Cards: Control Weaknesses Leave DHS Highly Vulnerable to Fraudulent, Improper, and Abusive Activity. GAO-06-957T. Washington, D.C.: July 19, 2006. Individual Disaster Assistance Programs: Framework for Fraud Prevention, Detection, and Prosecution. GAO-06-954T. Washington, D.C.: July 12, 2006. Expedited Assistance for Victims of Hurricanes Katrina and Rita: FEMA’s Control Weaknesses Exposed the Government to Significant Fraud and Abuse. GAO-06-655. Washington, D.C.: June 16, 2006. Hurricanes Katrina and Rita: Improper and Potentially Fraudulent Individual Assistance Payments Estimated to Be between $600 Million and $1.4 Billion. GAO-06-844T. Washington, D.C.: June 14, 2006. 
Hurricanes Katrina and Rita: Coordination between FEMA and the Red Cross Should Be Improved for the 2006 Hurricane Season. GAO-06-712. Washington, D.C.: June 8, 2006. U.S. Tsunami Preparedness: Federal and State Partners Collaborate to Help Communities Reduce Potential Impacts, but Significant Challenges Remain. GAO-06-519. Washington, D.C.: June 5, 2006. Disaster Preparedness: Preliminary Observations on the Evacuation of Vulnerable Populations due to Hurricanes and Other Disasters. GAO-06-790T. Washington, D.C.: May 18, 2006. Continuity of Operations: Selected Agencies Could Improve Planning for Use of Alternate Facilities and Telework during Disruptions. GAO-06-713. Washington, D.C.: May 11, 2006. Federal Emergency Management Agency: Factors for Future Success and Issues to Consider for Organizational Placement. GAO-06-746T. Washington, D.C.: May 9, 2006. Hurricane Katrina: Improving Federal Contracting Practices in Disaster Recovery Operations. GAO-06-714T. Washington, D.C.: May 4, 2006. Hurricane Katrina: Planning for and Management of Federal Disaster Recovery Contracts. GAO-06-622T. Washington, D.C.: April 10, 2006. Hurricane Katrina: Comprehensive Policies and Procedures Are Needed to Ensure Appropriate Use of and Accountability for International Assistance. GAO-06-460. Washington, D.C.: April 6, 2006. Homeland Security: The Status of Strategic Planning in the National Capital Region. GAO-06-559T. Washington, D.C.: March 29, 2006. Agency Management of Contractors Responding to Hurricanes Katrina and Rita. GAO-06-461R. Washington, D.C.: March 15, 2006. Hurricane Katrina: GAO’s Preliminary Observations Regarding Preparedness, Response, and Recovery. GAO-06-442T. Washington, D.C.: March 8, 2006. Emergency Preparedness and Response: Some Issues and Challenges Associated with Major Emergency Incidents. GAO-06-467T. Washington, D.C.: February 23, 2006. 
Expedited Assistance for Victims of Hurricanes Katrina and Rita: FEMA’s Control Weaknesses Exposed the Government to Significant Fraud and Abuse. GAO-06-403T. Washington, D.C.: February 13, 2006. Statement by Comptroller General David M. Walker on GAO’s Preliminary Observations Regarding Preparedness and Response to Hurricanes Katrina and Rita. GAO-06-365R. Washington, D.C.: February 1, 2006. Hurricanes Katrina and Rita: Provision of Charitable Assistance. GAO-06-297T. Washington, D.C.: December 13, 2005. Hurricanes Katrina and Rita: Preliminary Observations on Contracting for Response and Recovery Efforts. GAO-06-246T. Washington, D.C.: November 8, 2005. Hurricanes Katrina and Rita: Contracting for Response and Recovery Efforts. GAO-06-235T. Washington, D.C.: November 2, 2005. Federal Emergency Management Agency: Oversight and Management of the National Flood Insurance Program. GAO-06-183T. Washington, D.C.: October 20, 2005. Federal Emergency Management Agency: Challenges Facing the National Flood Insurance Program. GAO-06-174T. Washington, D.C.: October 18, 2005. Federal Emergency Management Agency: Improvements Needed to Enhance Oversight and Management of the National Flood Insurance Program. GAO-06-119. Washington, D.C.: October 18, 2005. Hurricane Katrina: Providing Oversight of the Nation’s Preparedness, Response, and Recovery Activities. GAO-05-1053T. Washington, D.C.: September 28, 2005. Homeland Security: Managing First Responder Grants to Enhance Emergency Preparedness in the National Capital Region. GAO-05-889T. Washington, D.C.: July 14, 2005. Flood Map Modernization: Federal Emergency Management Agency’s Implementation of a National Strategy. GAO-05-894T. Washington, D.C.: July 12, 2005. Homeland Security: DHS’s Efforts to Enhance First Responders’ All-Hazards Capabilities Continue to Evolve. GAO-05-652. Washington, D.C.: July 11, 2005. National Flood Insurance Program: Oversight of Policy Issuance and Claims. GAO-05-532T. 
Washington, D.C.: April 14, 2005. Homeland Security: Management of First Responder Grant Programs and Efforts to Improve Accountability Continue to Evolve. GAO-05-530T. Washington, D.C.: April 12, 2005. Homeland Security: Management of First Responder Grant Programs Has Improved, but Challenges Remain. GAO-05-121. Washington, D.C.: February 2, 2005. Homeland Security: Federal Leadership and Intergovernmental Cooperation Required to Achieve First Responder Interoperable Communications. GAO-04-740. Washington, D.C.: July 20, 2004. Homeland Security: Management of First Responder Grants in the National Capital Region Reflects the Need for Coordinated Planning and Performance Goals. GAO-04-433. Washington, D.C.: May 28, 2004. Project SAFECOM: Key Cross-Agency Emergency Communications Effort Requires Stronger Collaboration. GAO-04-494. Washington, D.C.: April 16, 2004. Flood Map Modernization: Program Strategy Shows Promise, but Challenges Remain. GAO-04-417. Washington, D.C.: March 31, 2004. Continuity of Operations: Improved Planning Needed to Ensure Delivery of Essential Government Services. GAO-04-160. Washington, D.C.: February 27, 2004. September 11: Overview of Federal Disaster Assistance to the New York City Area. GAO-04-72. Washington, D.C.: October 31, 2003. Disaster Assistance: Information on FEMA’s Post 9/11 Public Assistance to the New York City Area. GAO-03-926. Washington, D.C.: August 29, 2003. Flood Insurance: Challenges Facing the National Flood Insurance Program. GAO-03-606T. Washington, D.C.: April 1, 2003. Information Technology: Homeland Security Information Network Needs to Be Better Coordinated with Key State and Local Initiatives. GAO-07-822T. Washington, D.C.: May 10, 2007. Information Technology: Numerous Federal Networks Used to Support Homeland Security Need to Be Better Coordinated with Key State and Local Information Sharing Initiatives. GAO-07-455. Washington, D.C.: April 16, 2007. 
DHS Multi-Agency Operation Centers Would Benefit from Taking Further Steps to Enhance Collaboration and Coordination. GAO-07-686R. Washington, D.C.: April 5, 2007. Critical Infrastructure: Challenges Remain in Protecting Key Sectors. GAO-07-626T. Washington, D.C.: March 20, 2007. Passenger Rail Security: Federal Strategy and Enhanced Coordination Needed to Prioritize and Guide Security Efforts. GAO-07-583T. Washington, D.C.: March 7, 2007. Homeland Security: Applying Risk Management Principles to Guide Federal Investments. GAO-07-386T. Washington, D.C.: February 7, 2007. Homeland Security Grants: Observations on Process DHS Used to Allocate Funds to Selected Urban Areas. GAO-07-381R. Washington, D.C.: February 7, 2007. Homeland Security: Opportunities Exist to Enhance Collaboration at 24/7 Operations Centers Staffed by Multiple DHS Agencies. GAO-07-89. Washington, D.C.: October 20, 2006. Critical Infrastructure Protection: Progress Coordinating Government and Private Sector Efforts Varies by Sectors’ Characteristics. GAO-07-39. Washington, D.C.: October 16, 2006. Information Security: Coordination of Federal Cyber Security Research and Development. GAO-06-811. Washington, D.C.: September 29, 2006. Critical Infrastructure Protection: DHS Leadership Needed to Enhance Cybersecurity. GAO-06-1087T. Washington, D.C.: September 13, 2006. Catastrophic Disasters: Enhanced Leadership, Capabilities, and Accountability Controls Will Improve the Effectiveness of the Nation’s Preparedness, Response, and Recovery System. GAO-06-618. Washington, D.C.: September 6, 2006. Homeland Security: DHS Is Addressing Security at Chemical Facilities, but Additional Authority Is Needed. GAO-06-899T. Washington, D.C.: June 21, 2006. Internet Infrastructure: DHS Faces Challenges in Developing a Joint Public/Private Recovery Plan. GAO-06-672. Washington, D.C.: June 16, 2006. 
Homeland Security: Guidance and Standards Are Needed for Measuring the Effectiveness of Agencies’ Facility Protection Efforts. GAO-06-612. Washington, D.C.: May 31, 2006. Information Sharing: DHS Should Take Steps to Encourage More Widespread Use of Its Program to Protect and Share Critical Infrastructure Information. GAO-06-383. Washington, D.C.: April 17, 2006. Securing Wastewater Facilities: Utilities Have Made Important Upgrades but Further Improvements to Key System Components May Be Limited by Costs and Other Constraints. GAO-06-390. Washington, D.C.: March 31, 2006. Information Sharing: The Federal Government Needs to Establish Policies and Processes for Sharing Terrorism-Related and Sensitive but Unclassified Information. GAO-06-385. Washington, D.C.: March 17, 2006. Homeland Security: DHS Is Taking Steps to Enhance Security at Chemical Facilities, but Additional Authority Is Needed. GAO-06-150. Washington, D.C.: January 27, 2006. Risk Management: Further Refinements Needed to Assess Risks and Prioritize Protective Measures at Ports and Other Critical Infrastructure. GAO-06-91. Washington, D.C.: December 15, 2005. Critical Infrastructure Protection: Challenges in Addressing Cybersecurity. GAO-05-827T. Washington, D.C.: July 19, 2005. Critical Infrastructure Protection: Department of Homeland Security Faces Challenges in Fulfilling Cybersecurity Responsibilities. GAO-05-434. Washington, D.C.: May 26, 2005. Protection of Chemical and Water Infrastructure: Federal Requirements, Actions of Selected Facilities, and Remaining Challenges. GAO-05-327. Washington, D.C.: March 28, 2005. Homeland Security: Much Is Being Done to Protect Agriculture from a Terrorist Attack, but Important Challenges Remain. GAO-05-214. Washington, D.C.: March 8, 2005. Critical Infrastructure Protection: Improving Information Sharing with Infrastructure Sectors. GAO-04-780. Washington, D.C.: July 9, 2004. Technology Assessment: Cybersecurity for Critical Infrastructure Protection. 
GAO-04-321. Washington, D.C.: May 28, 2004. Critical Infrastructure Protection: Establishing Effective Information Sharing with Infrastructure Sectors. GAO-04-699T. Washington, D.C.: April 21, 2004. Critical Infrastructure Protection: Challenges and Efforts to Secure Control Systems. GAO-04-628T. Washington, D.C.: March 30, 2004. Critical Infrastructure Protection: Challenges and Efforts to Secure Control Systems. GAO-04-354. Washington, D.C.: March 15, 2004. Posthearing Questions from the September 17, 2003, Hearing on Implications of Power Blackouts for the Nation's Cybersecurity and Critical Infrastructure Protection: The Electric Grid, Critical Interdependencies, Vulnerabilities, and Readiness. GAO-04-300R. Washington, D.C.: December 8, 2003. Drinking Water: Experts' Views on How Future Federal Funding Can Best Be Spent to Improve Security. GAO-04-29. Washington, D.C.: October 31, 2003. Critical Infrastructure Protection: Challenges in Securing Control Systems. GAO-04-140T. Washington, D.C.: October 1, 2003. Information Security: Progress Made, But Challenges Remain to Protect Federal Systems and the Nation's Critical Infrastructures. GAO-03-564T. Washington, D.C.: April 8, 2003. Homeland Security: Voluntary Initiatives Are Under Way at Chemical Facilities, but the Extent of Security Preparedness Is Unknown. GAO-03-439. Washington, D.C.: March 14, 2003. Department of Homeland Security: Science and Technology Directorate's Expenditure Plan. GAO-07-868. Washington, D.C.: June 22, 2007. Combating Nuclear Smuggling: DHS’s Decision to Procure and Deploy the Next Generation of Radiation Detection Equipment Is Not Supported by Its Cost-Benefit Analysis. GAO-07-581T. Washington, D.C.: March 14, 2007. Combating Nuclear Smuggling: DNDO Has Not Yet Collected Most of the National Laboratories’ Test Results on Radiation Portal Monitors in Support of DNDO’s Testing and Development Program. GAO-07-347R. Washington, D.C.: March 9, 2007. 
Homeland Security: DHS Needs to Improve Ethics-Related Management Controls for the Science and Technology Directorate. GAO-06-206. Washington, D.C.: December 22, 2006. Combating Nuclear Smuggling: DHS’s Cost-Benefit Analysis to Support the Purchase of New Radiation Detection Portal Monitors Was Not Based on Available Performance Data and Did Not Fully Evaluate All the Monitors’ Costs and Benefits. GAO-07-133R. Washington, D.C.: October 17, 2006. Combating Nuclear Terrorism: Federal Efforts to Respond to Nuclear and Radiological Threats and to Protect Emergency Response Capabilities Could Be Strengthened. GAO-06-1015. Washington, D.C.: September 21, 2006. Combating Nuclear Smuggling: DHS Has Made Progress Deploying Radiation Detection Equipment at U.S. Ports-of-Entry, but Concerns Remain. GAO-06-389. Washington, D.C.: March 22, 2006. Combating Nuclear Smuggling: Corruption, Maintenance, and Coordination Problems Challenge U.S. Efforts to Provide Radiation Detection Equipment to Other Countries. GAO-06-311. Washington, D.C.: March 14, 2006. Transportation Security R&D: TSA and DHS Are Researching and Developing Technologies, but Need to Improve R&D Management. GAO-04-890. Washington, D.C.: September 30, 2004. Homeland Security: DHS Needs a Strategy to Use DOE’s Laboratories for Research on Nuclear, Biological, and Chemical Detection and Response Technologies. GAO-04-653. Washington, D.C.: May 24, 2004. Coast Guard: Challenges Affecting Deepwater Asset Deployment and Management and Efforts to Address Them. GAO-07-874. Washington, D.C.: June 18, 2007. Department of Homeland Security: Progress and Challenges in Implementing the Department’s Acquisition Oversight Plan. GAO-07-900. Washington, D.C.: June 13, 2007. Department of Homeland Security: Ongoing Challenges in Creating an Effective Acquisition Organization. GAO-07-948T. Washington, D.C.: June 7, 2007. 
Homeland Security: Observations on the Department of Homeland Security’s Acquisition Organization and on the Coast Guard’s Deepwater Program. GAO-07-453T. Washington, D.C.: February 8, 2007. Interagency Contracting: Improved Guidance, Planning, and Oversight Would Enable the Department of Homeland Security to Address Risks. GAO-06-996. Washington, D.C.: September 27, 2006. Catastrophic Disasters: Enhanced Leadership, Capabilities, and Accountability Controls Will Improve the Effectiveness of the Nation’s Preparedness, Response, and Recovery System. GAO-06-618. Washington, D.C.: September 6, 2006. Homeland Security: Challenges in Creating an Effective Acquisition Organization. GAO-06-1012T. Washington, D.C.: July 27, 2006. Coast Guard: Status of Deepwater Fast Response Cutter Design Efforts. GAO-06-764. Washington, D.C.: June 23, 2006. Coast Guard: Changes to Deepwater Appear Sound, and Program Management Has Improved, But Continued Monitoring Is Warranted. GAO-06-546. Washington, D.C.: April 28, 2006. Coast Guard: Preliminary Observations on the Condition of Deepwater Legacy Assets and Acquisition Management Challenges. GAO-05-651T. Washington, D.C.: June 21, 2005. Homeland Security: Successes and Challenges in DHS’s Efforts to Create an Effective Acquisition Organization. GAO-05-179. Washington, D.C.: March 29, 2005. Homeland Security: Further Action Needed to Promote Successful Use of Special DHS Acquisition Authority. GAO-05-136. Washington, D.C.: December 15, 2004. Coast Guard: Deepwater Program Acquisition Schedule Update Needed. GAO-04-695. Washington, D.C.: June 14, 2004. Contract Management: Coast Guard’s Deepwater Program Needs Increased Attention to Management and Contractor Oversight. GAO-04-380. Washington, D.C.: March 9, 2004. Contract Management: INS Contracting Weaknesses Need Attention from the Department of Homeland Security. GAO-03-799. Washington, D.C.: July 25, 2003. 
Purchase Cards: Control Weaknesses Leave DHS Highly Vulnerable to Fraudulent, Improper, and Abusive Activity. GAO-06-1117. Washington, D.C.: September 28, 2006. Internal Control: Analysis of Joint Study on Estimating the Costs and Benefits of Rendering Opinions on Internal Control over Financial Reporting in the Federal Environment. GAO-06-255R. Washington, D.C.: September 6, 2006. Financial Management: Challenges Continue in Meeting Requirements of the Improper Payments Information Act. GAO-06-581T. Washington, D.C.: April 5, 2006. Financial Management Systems: DHS Has an Opportunity to Incorporate Best Practices in Modernization Efforts. GAO-06-553T. Washington, D.C.: March 29, 2006. Financial Management Systems: Additional Efforts Needed to Address Key Causes of Modernization Failures. GAO-06-184. Washington, D.C.: March 15, 2006. Financial Management: Challenges Remain in Meeting Requirements of the Improper Payments Information Act. GAO-06-482T. Washington, D.C.: March 9, 2006. CFO Act of 1990: Driving the Transformation of Federal Financial Management. GAO-06-242T. Washington, D.C.: November 17, 2005. Financial Management: Achieving FFMIA Compliance Continues to Challenge Agencies. GAO-05-881. Washington, D.C.: September 20, 2005. Financial Audit: The Department of Homeland Security’s Fiscal Year 2004 Management Representation Letter on Its Financial Statements. GAO-05-600R. Washington, D.C.: July 14, 2005. Financial Management: Challenges in Meeting Requirements of the Improper Payments Information Act. GAO-05-417. Washington, D.C.: March 31, 2005. Financial Management: Effective Internal Control Is Key to Accountability. GAO-05-321T. Washington, D.C.: February 16, 2005. Financial Management: Improved Financial Systems Are Key to FFMIA Compliance. GAO-05-20. Washington, D.C.: October 1, 2004. Financial Management: Department of Homeland Security Faces Significant Financial Management Challenges. GAO-04-774. Washington, D.C.: July 19, 2004. 
Department of Homeland Security: Financial Management Challenges. GAO-04-945T. Washington, D.C.: July 8, 2004. Financial Management: Recurring Financial Systems Problems Hinder FFMIA Compliance. GAO-04-209T. Washington, D.C.: October 29, 2003. Department of Homeland Security: Challenges and Steps in Establishing Sound Financial Management. GAO-03-1134T. Washington, D.C.: September 10, 2003. Homeland Security: Management and Programmatic Challenges Facing the Department of Homeland Security. GAO-07-833T. Washington, D.C.: May 10, 2007. Homeland Security: Information on Training New Border Patrol Agents. GAO-07-540R. Washington, D.C.: March 30, 2007. Homeland Security: Management and Programmatic Challenges Facing the Department of Homeland Security. GAO-07-452T. Washington, D.C.: February 7, 2007. Budget Issues: FEMA Needs Adequate Data, Plans, and Systems to Effectively Manage Resources for Day-to-Day Operations. GAO-07-139. Washington, D.C.: January 19, 2007. Department of Homeland Security: Strategic Management of Training Important for Successful Transformation. GAO-05-888. Washington, D.C.: September 23, 2005. Human Capital: Observations on Final DHS Human Capital Regulations. GAO-05-391T. Washington, D.C.: March 2, 2005. Human Capital: DHS Faces Challenges In Implementing Its New Personnel System. GAO-04-790. Washington, D.C.: June 18, 2004. Human Capital: DHS Personnel System Design Effort Provides for Collaboration and Employee Participation. GAO-03-1099. Washington, D.C.: September 30, 2003. Homeland Security: DHS Enterprise Architecture Continues to Evolve but Improvements Needed. GAO-07-564. Washington, D.C.: May 9, 2007. Information Technology: DHS Needs to Fully Define and Implement Policies and Procedures for Effectively Managing Investments. GAO-07-424. Washington, D.C.: April 27, 2007. Homeland Security: Planned Expenditures for U.S. Visitor and Immigrant Status Program Need to Be Adequately Defined and Justified. GAO-07-278. 
Washington, D.C.: February 14, 2007. Enterprise Architecture: Leadership Remains Key to Establishing and Leveraging Architectures for Organizational Transformation. GAO-06-831. Washington, D.C.: August 14, 2006. Information Technology: Immigration and Customs Enforcement Is Beginning to Address Infrastructure Modernization Program Weaknesses, but Key Improvements Still Needed. GAO-06-823. Washington, D.C.: July 27, 2006. Information Technology: Customs Has Made Progress on Automated Commercial Environment System, but It Faces Long-Standing Management Challenges and New Risks. GAO-06-580. Washington, D.C.: May 31, 2006. Homeland Security: Progress Continues but Challenges Remain on Department’s Management of Information Technology. GAO-06-598T. Washington, D.C.: March 29, 2006. Information Technology: Management Improvements Needed on Immigration and Customs Enforcement’s Infrastructure Modernization Program. GAO-05-805. Washington, D.C.: September 7, 2005. Information Security: Department of Homeland Security Needs to Fully Implement Its Security Program. GAO-05-700. Washington, D.C.: June 17, 2005. Information Security: Department of Homeland Security Faces Challenges in Fulfilling Statutory Requirements. GAO-05-567T. Washington, D.C.: April 14, 2005. Information Technology: Customs Automated Commercial Environment Program Progressing, but Need for Management Improvements Continues. GAO-05-267. Washington, D.C.: March 14, 2005. Homeland Security: Some Progress Made, but Many Challenges Remain on U.S. Visitor and Immigrant Status Indicator Technology Program. GAO-05-202. Washington, D.C.: February 23, 2005. Department of Homeland Security: Formidable Information and Technology Management Challenge Requires Institutional Approach. GAO-04-702. Washington, D.C.: August 27, 2004. Homeland Security: Efforts Under Way to Develop Enterprise Architecture, but Much Work Remains. GAO-04-777. Washington, D.C.: August 6, 2004. 
Information Technology: Homeland Security Should Better Balance Need for System Integration Strategy with Spending for New and Enhanced Systems. GAO-04-509. Washington, D.C.: May 21, 2004. Information Technology: Early Releases of Customs Trade System Operating, but Pattern of Cost and Schedule Problems Needs to Be Addressed. GAO-04-719. Washington, D.C.: May 14, 2004. Information Technology: OMB and Department of Homeland Security Investment Reviews. GAO-04-323. Washington, D.C.: February 10, 2004. Homeland Security: Risks Facing Key Border and Transportation Security Program Need to Be Addressed. GAO-03-1083. Washington, D.C.: September 19, 2003. Information Technology: A Framework for Assessing and Improving Enterprise Architecture Management (Version 1.1). GAO-03-584G. Washington, D.C.: April 2003. Federal Real Property: DHS Has Made Progress, but Additional Actions Are Needed to Address Real Property Management and Security Challenges. GAO-07-658. Washington, D.C.: June 22, 2007. Homeland Security: Guidance from Operations Directorate Will Enhance Collaboration among Departmental Operations Centers. GAO-07-683T. Washington, D.C.: June 20, 2007. Homeland Security: Management and Programmatic Challenges Facing the Department of Homeland Security. GAO-07-833T. Washington, D.C.: May 10, 2007. DHS Privacy Office: Progress Made but Challenges Remain in Notifying and Reporting to the Public. GAO-07-522. Washington, D.C.: April 27, 2007. Transportation Security: DHS Efforts to Eliminate Redundant Background Check Investigations. GAO-07-756. Washington, D.C.: April 26, 2007. Department of Homeland Security: Observations on GAO Access to Information on Programs and Activities. GAO-07-700T. Washington, D.C.: April 25, 2007. DHS Multi-Agency Operation Centers Would Benefit from Taking Further Steps to Enhance Collaboration and Coordination. GAO-07-686R. Washington, D.C.: April 5, 2007. Homeland Security: Applying Risk Management Principles to Guide Federal Investments. 
GAO-07-386T. Washington, D.C.: February 7, 2007. Homeland Security: Management and Programmatic Challenges Facing the Department of Homeland Security. GAO-07-452T. Washington, D.C.: February 7, 2007. Homeland Security: Management and Programmatic Challenges Facing the Department of Homeland Security. GAO-07-398T. Washington, D.C.: February 6, 2007. Homeland Security: Progress Has Been Made to Address the Vulnerabilities Exposed by 9/11, but Continued Federal Action Is Needed to Further Mitigate Security Risks. GAO-07-375. Washington, D.C.: January 24, 2007. Terrorist Watch List Screening: Efforts to Help Reduce Adverse Effects on the Public. GAO-06-1031. Washington, D.C.: September 29, 2006. Combating Terrorism: Determining and Reporting Federal Funding Data. GAO-06-161. Washington, D.C.: January 17, 2006. Homeland Security: Overview of Department of Homeland Security Management Challenges. GAO-05-573T. Washington, D.C.: April 20, 2005. Results-Oriented Government: Improvements to DHS’s Planning Process Would Enhance Usefulness and Accountability. GAO-05-300. Washington, D.C.: March 31, 2005. September 11: Recent Estimates of Fiscal Impact of 2001 Terrorist Attack on New York. GAO-05-269. Washington, D.C.: March 30, 2005. Department of Homeland Security: A Comprehensive and Sustained Approach Needed to Achieve Management Integration. GAO-05-139. Washington, D.C.: March 16, 2005. Homeland Security: Observations on the National Strategies Related to Terrorism. GAO-04-1075T. Washington, D.C.: September 22, 2004. Homeland Security: Effective Regional Coordination Can Enhance Emergency Preparedness. GAO-04-1009. Washington, D.C.: September 15, 2004. Intelligence Reform: Human Capital Considerations Critical to 9/11 Commission’s Proposed Reforms. GAO-04-1084T. Washington, D.C.: September 14, 2004. 9/11 Commission Report: Reorganization, Transformation, and Information Sharing. GAO-04-1033T. Washington, D.C.: August 3, 2004. 
The Chief Operating Officer Concept and its Potential Use as a Strategy to Improve Management at the Department of Homeland Security. GAO-04-876R. Washington, D.C.: June 28, 2004. Homeland Security: Communication Protocols and Risk Communication Principles Can Assist in Refining the Advisory System. GAO-04-682. Washington, D.C.: June 25, 2004. Transfer of Budgetary Resources to the Department of Homeland Security (DHS). GAO-04-329R. Washington, D.C.: April 30, 2004. Homeland Security: Selected Recommendations from Congressionally Chartered Commissions and GAO. GAO-04-591. Washington, D.C.: March 31, 2004. Homeland Security: Information Sharing Responsibilities, Challenges, and Key Management Issues. GAO-03-1165T. Washington, D.C.: September 17, 2003. Homeland Security: Information Sharing Responsibilities, Challenges, and Key Management Issues. GAO-03-715T. Washington, D.C.: May 8, 2003.

The Department of Homeland Security's (DHS) recent 4-year anniversary provides an opportunity to reflect on the progress DHS has made since its establishment. DHS began operations in March 2003 with the mission to prevent terrorist attacks within the United States, reduce the nation's vulnerabilities, minimize damage from attacks that do occur, and aid in recovery efforts. GAO has reported that the creation of DHS was an enormous management challenge and that the size, complexity, and importance of the effort made the challenge especially daunting and critical to the nation's security. Our prior work on mergers and acquisitions found that successful transformations of large organizations, even those facing less strenuous reorganizations than DHS, can take at least 5 to 7 years to achieve. GAO was asked to report on DHS's progress in implementing its mission and management areas and the challenges DHS faces. This report also discusses key themes that have affected DHS's implementation efforts. 
At the time of DHS's creation in 2003, one of the largest federal reorganizations in the last several decades, we designated the implementation and transformation of DHS as a high-risk area due to the magnitude of the challenges it confronted in areas vital to the physical and economic well-being of the nation. Four years into its overall integration effort, DHS has attained some level of progress in all of its mission and management areas. The rate of progress, however, varies among these areas. Key underlying themes have affected DHS's implementation efforts, and will be essential for the department to address as it moves forward. These include management, risk management, information sharing, and partnerships and coordination. For example, while DHS has made progress in transforming its component agencies into a fully functioning department, it has not yet addressed key elements of the transformation process, such as developing a comprehensive strategy for agency transformation and ensuring that management systems and functions are integrated. This lack of a comprehensive strategy and integrated management systems and functions limits DHS's ability to carry out its homeland security responsibilities in an effective, risk-based way. DHS also has not yet fully adopted and applied a risk management approach in implementing its mission and management functions. Some DHS component agencies, such as the Transportation Security Administration and the Coast Guard, have taken steps to do so, but DHS has not yet taken sufficient actions to ensure that this approach is used departmentwide. In addition, DHS has taken steps to share information and coordinate with homeland security partners, but has faced difficulties in these partnership efforts, such as in ensuring that the private sector receives better information on potential threats. 
Given DHS's dominant role in securing the homeland, it is critical that the department's mission and management programs are operating as efficiently and effectively as possible. DHS has had to undertake these responsibilities while also working to transform itself into a fully functioning cabinet department--a difficult task for any organization. As DHS moves forward, it will be important for the department to continue to develop more measurable goals to guide implementation efforts and to enable better accountability of its progress toward achieving desired outcomes. It will also be important for DHS to continually reassess its mission and management goals, measures, and milestones to evaluate progress made, identify past and emerging obstacles, and examine alternatives to address those obstacles and effectively implement its missions.
If workers believe that they have been discriminated against in an employment matter, they may generally file a charge with EEOC, one of several federal agencies responsible for enforcing equal employment opportunity (EEO) laws and regulations. Under title VII of the Civil Rights Act of 1964, EEOC investigates—and may litigate, on its own behalf or on behalf of the charging party—charges of employment discrimination because of race, color, religion, sex, or national origin. EEOC has similar responsibility under the Age Discrimination in Employment Act of 1967, which prohibits employment discrimination against workers aged 40 and older; under the Equal Pay Act of 1963, which prohibits payment of different wages to men and women doing the same work; and under the Americans With Disabilities Act, which prohibits employment discrimination against workers with physical or mental disabilities. In April 1995, EEOC announced changes in the way it processes private-sector employment discrimination charges. As soon as guidance and implementation instructions are issued, EEOC will begin categorizing charges according to three priorities. The first category is for charges that appear more likely than not to involve discrimination, and these charges will be fully investigated. The second category includes charges that appear to have some merit but will require additional evidence to determine whether a violation occurred. The third category includes charges that can be immediately dismissed without investigation. EEOC also announced that it will initiate in October 1995 a voluntary ADR program using mediation to handle some of its workplace discrimination charges. Under this planned program, some employees filing charges and their employers will work with a neutral mediator to settle discrimination disputes, rather than go through EEOC’s traditional investigative procedures. 
If the employer and employee fail to reach a resolution, the charge will be returned to EEOC’s regular caseload. If EEOC investigates the charge, it notifies the employer of the charge and requests information from the employer and any witnesses with direct knowledge of the incident that led to the discrimination charge. If the evidence obtained by the EEOC investigator does not show reasonable cause to believe discrimination occurred—for example, the employee was terminated for poor performance and not due to discrimination—EEOC dismisses the case after issuing a “no cause” finding and a right-to-sue letter. When the evidence shows that reasonable cause exists to believe discrimination occurred, EEOC tries conciliation. If conciliation attempts fail, EEOC may go to court on behalf of the employee, although it rarely chooses to do so. EEOC officials have said that the Commission lacks sufficient legal staff to significantly increase the number of cases it can litigate effectively. When EEOC decides not to go to court, it issues the employee a right-to-sue letter, which allows the employee to sue. While charges filed with EEOC may lead to legal relief for employees with valid claims, each charge results in costs to the employer, even though most employers are found to be in compliance with the law. Although the employee does not pay for the EEOC investigation, he or she may incur psychological costs while pursuing the claim, which took an average of 328 days to resolve in fiscal year 1994. The federal government also incurs costs for each charge investigated. ADR approaches are being considered by employers because “almost any system is quicker, cheaper, and less harrowing than going to court,” according to an official of the Equal Employment Advisory Council, an employers’ group. 
Their concerns have recently increased as a result of (1) multimillion dollar jury awards to employees and (2) the provision in the Civil Rights Act of 1991 that permits punitive damages in cases of intentional discrimination under title VII of the Civil Rights Act of 1964 and the Americans With Disabilities Act. In addition, a 1991 U.S. Supreme Court decision upholding mandatory arbitration for statutory claims concerning employment disputes in the securities industry has led to consideration of arbitration in particular. Finally, some employers feel that ADR approaches can minimize the adversarial relationship between employer and employee resulting from such complaints. The Commission was appointed at the request of the President by the Secretary of Commerce and the Secretary of Labor to address three questions: What (if any) new methods or institutions should be encouraged, or required, to enhance workplace productivity through labor-management cooperation and employee participation? What (if any) changes should be made in the present legal framework and practices of collective bargaining to enhance cooperative behavior, improve productivity, and reduce conflict and delay? What (if anything) should be done to increase the extent to which workplace problems are directly resolved by the parties themselves, rather than through recourse to state and federal courts and governmental bodies? In researching this third question, the Commission considered the range of federal and state laws regulating the workplace, including those ensuring minimum wages and maximum hours; a safe and healthy workplace; secure and accessible pension and health benefits; adequate notice of plant closings and mass layoffs; unpaid family and medical leave; and bans on wrongful dismissal, as well as those outlawing discrimination on the basis of race, sex, religion, age, or disability. 
According to the Commission’s December 1994 report, both employers and employees agree that, if private arbitration is to serve as a legitimate form of private-sector enforcement of public employment law, arbitration policies must provide a neutral arbitrator who knows the laws in question and understands the concerns of the parties; a fair and simple method by which the employee can obtain the necessary information to present his or her claim; a fair method of cost-sharing between the employer and employee to ensure affordable access to the system for all employees; the right to independent representation if the employee wants it; a range of legal remedies equal to those available through litigation; a written opinion by the arbitrator explaining his or her rationale for the decision; and sufficient judicial review to ensure that the result is consistent with employment laws. The Commission noted, however, that most experts who had testified before it agreed that imposition of fairness standards must not turn arbitration into a second court system. In our review of employers’ arbitration policies, we found that some do not meet the fairness standards recently proposed by the Commission on the Future of Worker-Management Relations. Using the Commission’s six standards, we evaluated dispute resolution policies provided by 26 employers that reported using arbitration to resolve discrimination complaints by employees not covered under collective bargaining agreements. Most of these policies, which are discussed below, are recent: 15 had been implemented in the past 5 years. Almost 90 percent of employers that had more than 100 employees and filed EEO reports with EEOC in 1992 use at least one ADR approach to resolve discrimination complaints. The reported use of these approaches, which ranges from about 80 percent for fact finding to about 9 percent for external mediation, is shown in figure 1. 
Almost 40 percent of these employers use a trained mediator from within the company to help resolve disputes. Only about 10 percent of employers use arbitration. Arbitration was mandatory for all covered employees for about one-fourth to one-half of the employers using this approach. In addition to those firms whose policies include arbitration, 8.4 percent of employers with more than 100 employees that filed EEO reports with EEOC in 1992 reported that they are considering implementing a policy requiring arbitration of employee discrimination complaints. A dispute resolution policy frequently has a series of steps, such as those discussed below, that can be linked to different ADR approaches. Usually, a policy that includes arbitration has it as the final step. (See fig. 2 for an example of a dispute resolution system that includes arbitration.) In step 1, an employee with a complaint is encouraged to discuss the matter with his or her immediate supervisor. The employee and supervisor should make sincere, good faith efforts to resolve the matter. If the employee prefers not to present the matter directly to the immediate supervisor or if they cannot resolve the matter, the employee then discusses the matter with a representative of the establishment’s human resources department and decides whether to proceed to the next step. In step 2, the employee may request that a representative of the establishment’s human resources department conduct an assessment of the dispute and help the employee and supervisor reach a resolution. If resolution has not been reached, an employee may proceed to step 3 and request an investigation by a representative of the establishment’s human resources department. The results of the investigation are discussed with the appropriate senior manager and the employee. The senior manager decides how the complaint should be resolved. A decision letter is sent to both the employee and supervisor at the end of this step. 
An employee who is dissatisfied with the senior manager’s decision may request that the problem be reviewed by a review board, which is composed of an executive, a manager, and a representative from the corporate human resources office. The employee may request the help of an executive adviser in preparing for this step. At the end of step 4, the board will make a final company decision on the dispute’s merits, including corrective action, if appropriate. If an employee is dissatisfied with the board’s decision, he or she may submit the complaint to binding arbitration, which is step 5 of this company’s dispute resolution policy. An employee must give notice within 20 working days of the date the board reached its decision. The arbitration is to be administered in accordance with the procedures of the American Arbitration Association (AAA), a nonprofit organization that trains arbitrators and maintains lists of arbitrators who can be used to resolve different types of disputes, including labor-management and employment disputes. The arbitration will be heard by an arbitrator who is licensed to practice law in the state in which the arbitration takes place. Under this company’s policy, the employer and the employee share equally the fees and costs of the arbitrator, although the arbitrator may order the company to pay the employee’s costs in excess of 2 weeks’ salary if the employee demonstrates a continuing inability to pay his or her entire share. Larger employers with larger human resource and legal staffs might be assumed to be more likely to use arbitration. However, we found no statistically significant difference in use of arbitration based on business size. Figure 3 shows the percentage of businesses using arbitration by size. 
Since arbitration has long been a feature of grievance procedures in the collective bargaining arena, employers that have collective bargaining agreements with some of their workers might be more likely to use arbitration with those not covered by collective bargaining. Figure 4, which shows that businesses with some union workers are nearly three times as likely as those with no union workers to use arbitration, lends credence to this notion. In its final report, the Commission states that the arbitrator selection process should allow both the employer and the affected employee(s) to participate. The arbitrator should be selected from a roster of qualified arbitrators who have training and experience in the area of law covering the dispute being arbitrated and are certified by professional associations specializing in such dispute resolution. The process should ensure that rosters include significant numbers of women and minorities. Neither party should be able to limit the roster unilaterally to avoid the possibility that the arbitrator selected will be biased in favor of that party. While we did not evaluate the qualifications or demographics of the panels from which arbitrators would be chosen, we noted that in 22 of the 26 policies we examined, both the employee and employer are directly involved in selecting the arbitrator. In 12 policies, this is done with the help of AAA. Immediately after the complaint is filed, AAA simultaneously sends an identical list of people chosen from its panel of employment arbitrators to both the employer and the employee. The employer and the employee (1) strike any names they object to and (2) number the remaining names in order of preference. In a single arbitrator case, the employer and the employee may each strike up to three names. AAA chooses an arbitrator from among those approved on both lists in accordance with the designated order of preference. 
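The AAA strike-and-rank procedure just described is essentially a small matching routine. The sketch below is illustrative only: the candidate names and rankings are invented, and interpreting "in accordance with the designated order of preference" as picking the lowest combined rank is an assumption made for the example, not AAA's stated rule.

```python
# Sketch of an AAA-style strike-and-rank arbitrator selection.
# Names, rankings, and the combined-rank tie-break are hypothetical.

def select_arbitrator(panel, employer_ranks, employee_ranks):
    """Each side's dict maps the names it did NOT strike to a
    preference rank (1 = most preferred); struck names are absent."""
    approved_by_both = [n for n in panel
                        if n in employer_ranks and n in employee_ranks]
    if not approved_by_both:
        return None  # AAA would then appoint from the rest of its panel
    # Choose the mutually approved name with the best combined preference.
    return min(approved_by_both,
               key=lambda n: employer_ranks[n] + employee_ranks[n])

panel = ["Avery", "Blake", "Casey", "Drew", "Ellis", "Flynn", "Gray"]
employer_ranks = {"Avery": 1, "Casey": 2, "Ellis": 3, "Gray": 4}  # struck Blake, Drew, Flynn
employee_ranks = {"Casey": 1, "Drew": 2, "Gray": 3, "Avery": 4}   # struck Blake, Ellis, Flynn

print(select_arbitrator(panel, employer_ranks, employee_ranks))  # prints "Casey"
```

Here Avery, Casey, and Gray survive both lists, and Casey wins with a combined rank of 3; if no name survived both lists, the function would return None, corresponding to AAA appointing from other members of its panel.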
If no agreement is reached on any of the names, AAA makes the appointment from other members of the panel. In seven policies we reviewed, the employer and employee alternate striking names from a list. One policy rather vaguely calls for selection “based on the parties’ preferences.” In two policies, the employer selects the names on the list, but the employee is involved in selecting the arbitrator. In one of the remaining four policies, the employer unilaterally selects the arbitrator, while the other three do not discuss arbitrator selection. According to the Commission, employees should have the opportunity to gather the relevant information they need to support their legal claims. Employees pursuing a discrimination complaint, for example, should be granted access to their personnel files. Broader access to personnel files should also be available to employees bringing systemic discrimination claims. During arbitration, an employee with a complaint should be allowed to take at least one deposition of a company official of the employee’s choosing. The arbitrator should be empowered to expand discovery (pretrial or prehearing procedure by which one party gains information held by the other) to include any material he or she finds valuable for resolving the dispute. Only three policies we reviewed discuss access to information. One policy states that discovery will be allowed and governed under the discovery rules of the state code of civil procedure unless otherwise agreed to by the parties; one policy provides for 2 days of depositions; and the remaining policy limits the taking of depositions to one company representative, two other persons, and one expert witness named by the company but also allows requests for documents related to the complaint. To ensure impartiality of the arbitrator, the Commission proposes that both the employee and the employer contribute to the arbitrator’s fee. 
Ideally, the employee contribution should be capped in proportion to the employee’s salary to avoid discouraging claims by low-wage workers. Seven policies do not address cost sharing. In four policies, the employer pays for all arbitration costs; costs are to be shared equally in nine policies; and the employee share is either capped or limited to less than half the costs in the remaining six policies. For example, one employer pays all costs in excess of $50. Another firm pays 80 percent of the arbitration costs, while the employee is responsible for 20 percent. According to the Commission, both employers and employees agree that fairness requires the right of independent representation if the employee wants it. AAA rules state that “any party may be represented by counsel or by any other representative.” Twenty-one of the policies we reviewed permit the employee to be represented by an attorney during arbitration. Four policies do not address representation. Only one policy specifically states that representation by an attorney will not be permitted. The Commission states that the introduction of a workplace arbitration system should not curb substantive employee protections. This means that private arbitration should offer employees the same array of remedies available in court. Arbitrators should be allowed to award whatever relief—including reinstatement, back pay, additional economic damages, punitive awards, injunctive relief, and attorney’s fees—would be available in court under the law in question. Eighteen of the 26 policies do not address legal remedies—such as monetary compensation—available to the arbitrator. Of the eight remaining policies, seven state that the arbitrator can use any remedy available under law, while one policy prohibits the arbitrator from assessing damages beyond those required to compensate for actual losses. 
The Commission states that the arbitrator should issue a written opinion that states the findings of fact and reasons that led to his or her decision. This opinion need not correspond in style or length to a court opinion. However, it should set out, in understandable terms, the basis for the arbitrator’s ruling. Ten policies do not address the form of the arbitrator’s decision. The remaining 16 policies require the arbitrator to provide a written ruling, but specific provisions of these policies vary considerably. For example, one policy requires the decision to “contain findings of fact and conclusions of law supporting the decision and the award,” while another states that the written opinion should not include findings of fact and conclusions of law unless requested by both the employer and the employee. According to the Commission, judicial review of an arbitrator’s ruling must ensure that the ruling reflects an appropriate understanding and interpretation of the relevant legal doctrines. A reviewing court should defer to an arbitrator’s findings of fact as long as they have a substantial evidentiary basis. However, the reviewing court’s authoritative interpretation of the law should bind arbitrators much as it now binds administrative agencies and lower courts. For example, if an arbitration decision on a sexual harassment complaint disregards the standard set for such claims by the Supreme Court, the reviewing court should have the power to overturn the arbitration decision as inconsistent with current law. No policies require that the arbitration decision reflect an appropriate understanding and interpretation of relevant legal doctrines and be reviewable by a court on that basis. Sixteen policies call for the arbitration results to be “final and binding.” However, none of these policies specifically provide for judicial review. The remaining 10 policies do not address reviewing the arbitrator’s opinion. 
Almost all employers that had more than 100 employees and filed EEO reports with the EEOC in 1992 have established some sort of grievance procedure using one or more ADR approaches. However, relatively few use arbitration, and even fewer make it mandatory for employees. Existing arbitration policies vary greatly, and most would not conform with all of the fairness criteria recently proposed by the Commission on the Future of Worker-Management Relations. This is especially true of the criteria for an employee’s opportunity to obtain information, for empowering the arbitrator to use remedies equal to those available under law, and for providing that the arbitrator’s decision be subject to judicial review concerning the arbitrator’s interpretation of relevant legal doctrines. We are sending copies of this report to interested congressional committees, the Chairman of the Equal Employment Opportunity Commission, and other interested parties. Please call Cornelia Blanchette, Associate Director, at (202) 512-7014, or me if you or your staff have any questions. Other major contributors to this report are listed in appendix III. We designed a questionnaire to obtain information on the use of alternative dispute resolution (ADR) approaches by private-sector businesses to resolve discrimination complaints brought by employees not covered by collective bargaining agreements. We discussed development of this questionnaire with the Equal Employment Advisory Council, a nonprofit association of employers; Chorda Conflict Management, Inc., an Austin, Texas, consulting firm that helps employers design dispute resolution systems; and the National Task Force on Civil Liberties in the Workplace of the American Civil Liberties Union. Before mailing our questionnaire, we pretested it with officials of five employers. Results of the pretests indicated that questions, terms, and definitions were generally familiar, clear, and free from confusion. 
During the face-to-face pretest, officials completed the questionnaire as if they had received it in the mail. Our staff recorded the time necessary to complete the survey and any difficulties that respondents experienced. Once the questionnaire was completed, we used a standardized series of questions to gain feedback on difficulties and questions encountered with each item. We surveyed a nationally representative sample of businesses with more than 100 employees in 1992, the most recent year for which data were available. To determine our universe, we used the 1992 EEO-1 data file maintained by the EEOC. This file consists of reports required to be filed by all businesses with more than 100 employees during the reporting period, as well as certain firms with fewer than 100 employees if they are government contractors. We deleted consolidated reports and reports from businesses that reported having fewer than 100 employees. This yielded a universe of about 87,500 businesses. We sent the survey to a sample of 2,000 businesses. The sample was selected from three different strata by size: 100 to 499 employees, 500 to 999 employees, and 1,000 or more employees. We sent questionnaires to random samples of businesses in each of the three strata. We obtained an overall response rate of 75.0 percent. Response rates for individual strata ranged from 63.6 percent to 80.0 percent. Table I.1 shows the universe of potential establishments, the sample size, and the number of establishments for which questionnaires were received by strata. As agreed with the requesters’ offices, we pledged that businesses’ responses would be kept confidential. A sample questionnaire showing aggregate responses and percentages appears in appendix II. We calculated sampling errors for estimates from this survey at the 95-percent confidence level. 
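The stratified weighting and sampling-error calculation described here can be sketched as follows. The stratum universes, respondent counts, and "yes" tallies below are invented placeholders (only the roughly 87,500-establishment universe echoes the text), so the printed numbers are illustrative, not survey results:

```python
import math

# stratum label: (universe size N, respondents n, respondents answering "yes")
# All counts are hypothetical placeholders, not the survey's actual figures.
strata = {
    "100-499 employees": (70000, 800, 80),
    "500-999 employees": (10000, 300, 36),
    "1000+ employees":   ( 7500, 400, 60),
}

total_universe = sum(N for N, n, y in strata.values())

# Each respondent stands for N/n establishments in its stratum, which is
# how differing sampling rates and response rates are weighted out.
p_hat = sum(N * (y / n) for N, n, y in strata.values()) / total_universe

# Stratified variance of the weighted proportion, with a finite
# population correction for each stratum.
var = sum(
    (N / total_universe) ** 2 * (1 - n / N) * (y / n) * (1 - y / n) / (n - 1)
    for N, n, y in strata.values()
)
sampling_error = 1.96 * math.sqrt(var)  # 1.96 = normal multiplier at 95 percent

print(f"weighted estimate: {p_hat:.3f} plus or minus {sampling_error:.3f}")
```

The 1.96 multiplier corresponds to the 95-percent confidence level the report uses.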
This means the chances are about 19 out of 20 that the actual percentage being estimated falls within the range covered by our estimate, plus or minus the sampling error. Sampling errors for estimates discussed in this report are shown in table I.2. We weighted the data to account for different sampling rates and varying response rates among the strata. Therefore, our data reflect national estimates for businesses with more than 100 employees and are based on the assumption that the nonrespondents are similar to the respondents. To obtain more detailed information on dispute resolution policies, we then telephoned the 132 respondents that reported using arbitration to resolve discrimination complaints brought by workers not covered by a collective bargaining agreement. As shown in table I.3, we eventually received and analyzed 26 policies. The Congress has asked the U.S. General Accounting Office (GAO) to conduct a study of employers’ personnel policies, including arbitration, for resolving disputes that arise under federal equal employment statutes for employees not covered under collective bargaining agreements. These disputes arise from allegations of discrimination because of race, sex, religion, country of origin, age, or disability. 1. If yes, in responding to the following questions, please answer only for this establishment, that is, the corporate level of your organization. As part of our study, we are sending this questionnaire to a random sample of business establishments to collect information on the policies and practices they use to resolve employment discrimination disputes. This questionnaire should take about 15 minutes to complete. Most of the questions can be answered quickly and easily by checking boxes. If no, please respond only for this establishment even if it is part of a larger organization. 2. We will keep your responses to the questionnaire strictly confidential. 
Only those responsible for the analysis of the survey data will know how you have responded. When GAO reports the results of this survey, no questionnaire response will be attributed to any specific establishment. Your responses will be combined with those of other respondents and reported in the aggregate. About how many employees (full-time and part-time) does this business establishment currently employ? (Enter number)(n=1365) - 100-499 - 500-999 - 1000+ - 13.0% 63.5% 10.5% 13.0% 3. Are the employees at this business establishment covered or not covered by collective bargaining agreements? (Check one)(n=1496) For the purposes of this survey, we would like you to respond for this business establishment alone, even if it is part of a larger organization. If this establishment is a corporate headquarters, please respond only for the corporate headquarters level of the organization. If you have any questions about this questionnaire, please call Mr. Bob Sampson collect at (202) 512-7251. Thank you for your help. About what proportion of employees at this establishment are NOT covered by a collective bargaining agreement? (Enter percentage) 7. For the remainder of the questionnaire, please consider your employee discrimination complaint resolution policies and practices as they relate to only those employees who are NOT covered by a collective bargaining agreement. 2. 1.0% Only those in certain positions or certain 8. 9. For those employees who were hired before that date, does this policy begin to apply to them when their status changes (for example, promotion, transfer, position change)?(Check one)(n=5) Questions in this section are about negotiation. By "negotiation," we mean a discussion of a complaint by the parties and, if appropriate, their counsel with the goal of setting the terms of a resolution. Negotiation does not require involvement of a neutral party. Negotiation could include an "open door" policy. 1. 65.6% Yes 2. 34.4% No 10. 
Does this establishment have a policy to use negotiation as a method to resolve discrimination complaints that arise under federal equal employment statutes? (Check One)(n=1448) Is this establishment considering instituting a policy to use negotiation to resolve discrimination complaints? (Check one)(n=301) 15. For this section, our questions are about fact finding. By "fact finding," we mean having a neutral party (either someone within the company or external to the company) investigate a complaint and develop findings that may form the basis for resolution. This would not include formal complaint investigations by government agencies, such as the Equal Employment Opportunity Commission (EEOC). Does this establishment have a policy to use fact finding as a method to resolve discrimination complaints that arise under federal equal employment statutes? (Check one)(n=1446) 16. Is the establishment considering instituting a policy to use fact finding to resolve discrimination complaints? (Check one)(n=241) 17. For this section, our questions are about peer review. By "peer review," we mean a panel of employees or employees and managers working together to resolve employment complaints. Does this policy apply to all those not covered by a collective bargaining agreement or only to those in certain positions or located in certain divisions or departments?(Check one)(n=1199) Does this establishment have a policy to use peer review as a method to resolve discrimination complaints?(Check one)(n=1447) 1. 99.2% All 2. 80.1% No 2. 0.8% Only those in certain positions or certain 18. 
Is this establishment considering instituting a policy to use peer review to resolve discrimination complaints?(Check one)(n=1136) Does this policy apply to only those who were hired on or after a certain date?(Check one) (n=1196) Does this policy apply to all those not covered by a collective bargaining agreement or only to those in certain positions or located in certain divisions or departments?(Check one)(n=305) 23. Questions in this section are about internal mediation. By "internal mediation," we mean a process for resolving disputes in which a neutral party--trained in mediation techniques--from within the company helps the disputing parties negotiate a mutually acceptable agreement. This process does not involve an imposed solution. Does this policy apply to only those who were hired (n=305) on or after a certain date?(Check one) Does this establishment have a policy to use internal mediation as a method to resolve these discrimination complaints?(Check one)(n=1448) For those employees who were hired before that date, does this policy begin to apply to them when their status changes (for example, promotion, transfer, position change)?(Check one)(n=4) 24. Is this establishment considering instituting a policy to use internal mediation to resolve discrimination complaints?(Check one)(n=908) 2. 0% No 25. Is peer review voluntary for everyone it applies to, voluntary for some that it applies to, or mandatory for everyone that it applies to?(Check one)(n=300) Does this policy apply to all those not covered by a collective bargaining agreement or only to those in certain positions or located in certain divisions or departments?(Check one)(n=524) 1. 69.2% Voluntary for all 1. 99.1% All 2. 5.0% Voluntary for some 2. 0.9% Only those in certain positions or certain 3. 25.7% Mandatory for all 26. 
For those employees who were hired before that date, does this policy begin to apply to them when their status changes (for example, promotion, transfer, position change)? (Check one) (n=5)

30. Is this establishment considering instituting a policy to use external mediation to resolve discrimination complaints? (Check one) (n=1336) 2. 13.7% No

31. Is internal mediation voluntary for everyone it applies to, voluntary for some that it applies to, or mandatory for everyone that it applies to? (Check one) (n=520) 1. 75.0% Voluntary for all; 2. 1.9% Voluntary for some; 3. 23.1% Mandatory for all

Does this policy apply to all those not covered by a collective bargaining agreement or only to those in certain positions or located in certain divisions or departments? (Check one) (n=101) 1. 84.6% All; 2. 15.4% Only those in certain positions or certain divisions or departments

32. For this section, our questions are about external mediation. By "external mediation," we mean a process for resolving disputes in which a neutral party--trained in mediation techniques--external to the company helps the disputing parties negotiate a mutually acceptable agreement. This process does not involve an imposed solution.

33. For those employees who were hired before that date, does this policy begin to apply to them when their status changes (for example, promotion, transfer, position change)? (Check one) (n=4)

Does this establishment have a policy to use external mediation as a method to resolve discrimination complaints? (Check one) (n=1448)

39. Questions in this section are about arbitration. By "arbitration," we mean having a neutral party (an arbitrator external to the company) decide how the complaint is to be resolved. The arbitrator's decision is usually binding on both parties.

Does this establishment have a policy to use arbitration as a method to resolve discrimination complaints? (Check one) (n=1448)
Is this policy to use arbitration voluntary for everyone it applies to, voluntary for some that it applies to, or mandatory for everyone that it applies to? (Check one) (n=126)

Is this establishment considering instituting a policy to use arbitration to resolve discrimination complaints? (Check one) (n=1307)

41. Does this business establishment use any other dispute resolution methods to resolve discrimination complaints? (Check one) (n=1344)

Does this policy apply to all those not covered by a collective bargaining agreement or only to those in certain positions or located in certain divisions or departments? (Check one) (n=130)

Of the discrimination complaints resolved in the past year, about what proportion of these complaints were ultimately resolved by each method listed below? (Enter percentage) (n=846) (n=846) (n=850) (n=847) (n=851) (n=851) (n=847)

Listed below are various methods this establishment may have used during the resolution process for those employees who are not covered by collective bargaining agreements. Once again, consider those disputes resolved in the past year. In about what proportion of these cases were each of these methods used during the resolution process? (Check one for each method) Cases in which the method was used (n=673) 16.8% 12.8% 10.7% 6.8% 12.5% (n=769) (n=521) (n=577) (n=496) (n=492) (n=381)

If you have any additional comments about employment dispute resolution methods or any questions asked in the questionnaire, please write them in the space provided below. (n=178)

Please provide the following information about the person we should call if additional information or clarification is needed.

Thank you for participating in this study.
Name of person to call:

HEHS/SL/7-94 (205272)

In addition to those named above, the following individuals made important contributions to this report: Susan Poling provided legal advice and analyzed the policies we received; Catherine Baltzell reviewed the technical sections of the report and wrote the technical appendix; Susan Lawes designed and pretested the survey questionnaire; Joel Grossman designed the telephone survey of employers who reported using arbitration; Patricia Bundy managed the questionnaire responses and the telephone survey; Joan Vogel analyzed the questionnaire responses; and Linda Stokes assisted with the telephone survey.

Pursuant to a congressional request, GAO reviewed the: (1) extent to which private-sector employers use alternative dispute resolution (ADR) approaches in resolving discrimination complaints of employees not covered by collective bargaining agreements; and (2) fairness of private-sector employers' arbitration policies.
GAO found that: (1) in fiscal year 1994, the Equal Employment Opportunity Commission (EEOC) received over 90,000 discrimination complaints from employees; (2) ADR approaches include negotiation, fact finding, peer review, internal mediation, external mediation, and arbitration; (3) almost all employers with more than 100 employees use one or more ADR approaches to resolve discrimination complaints; (4) some employers' arbitration policies do not meet the fairness standards proposed by the Commission on the Future of Worker-Management Relations; (5) almost 40 percent of private-sector employers use a trained mediator from within the company to help resolve disputes, and only 10 percent of these employers use arbitration; (6) firms that have some workers covered by collective bargaining agreements are more likely to use arbitration; and (7) arbitration is usually the final step in a grievance policy, which includes other ADR approaches.
Established in 1800, the Library of Congress is the nation’s oldest federal cultural institution and serves as the research arm of Congress. Its mission is to support Congress in fulfilling its constitutional duties and to further the progress of knowledge and creativity for the benefit of the American people. The Library of Congress is the largest library in the world, with more than 158 million items on approximately 838 miles of bookshelves. The collections include more than 36 million books and other print materials, 3.5 million recordings, 13.7 million photographs, 5.5 million maps, 6.7 million pieces of sheet music, and 69 million manuscripts. The Library receives some 15,000 items each working day and adds approximately 12,000 items to its collections daily. These items are received through a variety of sources, including the copyright registration process, as the Library is home to the U.S. Copyright Office. Materials are also acquired through gift, purchase, other government agencies (state, local, and federal), Cataloging in Publication (a pre-publication arrangement with publishers), and exchange with libraries in the United States and abroad. Items not selected for the collections or other internal purposes are used in the Library’s national and international exchange programs. Through these exchanges the Library acquires material that would not be available otherwise. The remaining items not selected for collections or exchange programs are made available to other federal agencies and are then available for donation to educational institutions, public bodies, and nonprofit tax-exempt organizations in the United States. The Library collaborates with external communities nationally and internationally through, among other things, activities relating to preservation, research, and education. For example, the Library collects, preserves, and makes accessible first-hand accounts of U.S. veterans so that future generations may hear directly from veterans. 
Additionally, in collaboration with the United Nations Educational, Scientific and Cultural Organization, as well as partner libraries and cultural institutions from around the world, the Library established the World Digital Library. This effort makes available on the Internet, free of charge, and in multilingual format significant primary materials from many countries and cultures. Further, the Library maintains Congress.gov, which is the official website for U.S. federal legislative information. Positioned within the legislative branch, the Library is led by the Librarian of Congress, who is nominated by the President and confirmed by the Senate. There have been 13 Librarians of Congress since the founding of the Library. The Deputy Librarian shares with the Librarian the overall responsibility for governing the Library and has the delegated authority to act on behalf of the Librarian. The Library encompasses several service and support units, including the following: Office of the Librarian: The Office of the Librarian has overall management responsibility for the Library and carries out certain executive functions. It includes the Office of the Chief Financial Officer, the Office of the General Counsel, the Congressional Relations Office, the Office of Communications, the Development Office, the Office of Contracts and Grants Management, and the Office of Special Events and Public Programs. Congressional Research Service (CRS): Established by statute in 1914, CRS is responsible for providing Congress with nonpartisan legislative research and analysis services. CRS is led by a Director, who is appointed by the Librarian in consultation with the Joint Committee on the Library and serves under the general direction of the Librarian of Congress.
United States Copyright Office: Established by statute in 1897, the Copyright Office is responsible for administering the Copyright Act, including copyright registration, recordation, mandatory deposit, and certain statutory licenses. The office is led by the Register of Copyrights, who is appointed by and serves under the general direction of the Librarian of Congress. Law Library: Congress established its Law Library in 1832 to provide ready access to reliable legal materials. Library Services: Library Services develops and preserves the Library’s collections, which document the history and creativity of the American people in almost all media and formats and record the world’s knowledge in some 470 languages. Library Services also includes the National Library Service for the Blind and Physically Handicapped (NLS), which directs the production of books and magazines in Braille and recorded formats as well as specially designed audio playback equipment. Further, Library Services administers the Library’s six overseas offices—located in Brazil, Egypt, India, Indonesia, Kenya, and Pakistan. These offices are tasked with acquiring, cataloging, and preserving collections from developing countries. Office of Strategic Initiatives (OSI): The mission of OSI is to support the Library’s vision and strategy by directing the overall digital strategic planning for the Library and the national program for long-term preservation of digital cultural assets. This office includes ITS, which is to support the Library’s IT systems and infrastructure. Office of Support Operations (OSO): OSO is made up of several offices that provide essential infrastructure services to the entire Library. These include the Office of Opportunity, Inclusiveness, and Compliance; Integrated Support Services; Human Resource Services; and the Office of Security and Emergency Preparedness (OSEP).
Figure 1 provides a simplified depiction of the Library’s organization. An Executive Committee, made up of the heads of the major service units of the Library and chaired by the Librarian, sets overall Library policy and practices, and advises the Librarian. For fiscal year 2014, the Library was appropriated $618,776,000 for its operations and was authorized to maintain 3,746 full-time equivalents. For fiscal year 2015, the Library was appropriated $630,853,000, and, for fiscal year 2016, the Library requested $666,629,000. Like other federal agencies, the Library relies on a host of IT systems to carry out its mission. These include standard hardware (e.g., desktop and laptop computers, printers, and servers) and software (e.g., e-mail, standard office productivity programs such as word processing and spreadsheet programs, and Internet resources) that Library employees use to carry out their day-to-day work. It also makes use of administrative and business systems, such as accounting, financial planning and budgeting, and human resources systems. A number of IT systems support Library-wide activities. For example: ITS Library of Congress Data Network: The ITS Library of Congress Data Network provides network connectivity for Library personnel at Washington, D.C., metropolitan area facilities, with the exception of personnel that rely on the OSEP Physical Security Network. OSEP Physical Security Network: The OSEP Physical Security Network is the technical infrastructure used for the systems that protect facilities, collections, assets, staff, and visitors. These systems include intrusion alarms, card readers for access control, closed- circuit video cameras, monitors, and recorders. ITS Application Hosting Environment: The ITS Application Hosting Environment is the technical infrastructure used to support service units’ business systems, with the exception of financial business systems and systems used by CRS and OSEP. 
ITS Library of Congress Office Automation System: The ITS Library of Congress Office Automation System is the technical infrastructure used to support file and print services, as well as office automation tools, for Library personnel, with the exception of CRS. Office of the Chief Financial Officer Momentum: Momentum is the Library’s central financial management system. The U.S. Capitol Police, Congressional Budget Office, Office of Compliance, and Open World Leadership Center also use this system to record and maintain their financial information. This system is hosted on the ITS Financial Hosting Environment. In addition, the Library’s service units have systems that support their various specific missions. For example: Copyright Electronic Copyright Office (eCO): Members of the public (e.g., authors and other copyright owners) use the eCO system to register basic claims to copyright. The Copyright Office also uses the system to manage the registration process. The ITS Application Hosting Environment hosts the eCO system. CRS Enterprise Infrastructure General Support System: The CRS Enterprise Infrastructure General Support System is the technical infrastructure (e.g., servers and network devices) used to support CRS applications (e.g., the system used to develop CRS reports), as well as file and print services and office automation tools (e.g., e-mail, word processing, and collaboration tools) for CRS personnel. Library Services System Management Information Network II (SYMIN II): Library Services uses SYMIN II to manage accounting transactions for the Federal Library and Information Network (FEDLINK) program. This system is hosted on the ITS Financial Hosting Environment, which is used to support financial systems. 
NLS Production Information & Control System/NLS Integrated Operations Support System (PICS/NIOSS): NLS uses PICS/NIOSS to manage the process of producing, distributing, and maintaining audiobooks (i.e., the electronic files used to present print information to a reader in audio format). This system is hosted on the ITS Application Hosting environment. Much of the responsibility for the Library’s IT rests with OSI. The office is headed by the Associate Librarian for Strategic Initiatives, who also serves as the Library’s CIO. The CIO’s responsibilities include coordination of key IT management areas, such as investment management, enterprise architecture, and information security. Within OSI, ITS has various responsibilities for supporting the Library’s IT infrastructure. These include supporting the service units by planning, designing, developing, and maintaining systems and the infrastructure supporting those systems. As of September 2014, the Library had at least 380 staff dedicated to various IT functions. Most of these (about 250) were in OSI, while the rest were distributed throughout the rest of the organization, with Library Services and CRS having the most IT staff among the other service units. In addition, the Library relies on contractors to fill certain skill gaps, where necessary. Table 1 shows the number of IT staff—excluding contractors—across the agency. The Library obligated at least $119 million for IT during fiscal year 2014. Of that, about $46 million was obligated for IT staff salaries, and the other $73 million was for non-pay obligations (e.g., goods and services). Although OSI accounts for most of the Library’s IT spending, other service units also make investments in IT that collectively represent a little less than half of the organization’s IT spending. Table 2 shows IT spending across the Library. 
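The fiscal year 2014 obligations described above reduce to simple arithmetic. The following sketch uses the report's approximate totals (about $46 million in pay and $73 million in non-pay obligations); the percentage split is computed here purely for illustration and does not appear in the report:

```python
# Sketch of the Library's fiscal year 2014 IT obligations described above.
# Dollar figures are the report's approximate totals; the share calculation
# is illustrative only.
it_obligations = {
    "staff salaries (pay)": 46_000_000,
    "goods and services (non-pay)": 73_000_000,
}

# Total obligations: matches the report's "at least $119 million" figure.
total = sum(it_obligations.values())

# Each category's share of total IT obligations, as a rounded percentage.
shares = {k: round(100 * v / total, 1) for k, v in it_obligations.items()}

print(f"total: ${total:,}")
for category, pct in shares.items():
    print(f"{category}: {pct}%")
```

Run as written, this shows non-pay obligations accounting for a little over 60 percent of the Library's IT spending that year.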
Examples of major investments in IT at the Library include the following: Office of the Chief Financial Officer Momentum Upgrade and Migration: As previously mentioned, Momentum is the Library’s financial system. The Library is making additional investments in this system in order to move Momentum to a cloud-based environment. After this effort is completed, the Library plans to migrate the Architect of the Capitol’s financial management system into the Library’s Momentum environment. OSO Facility and Asset Management Enterprise (FAME): FAME is an existing Library business system used to perform facility management functions (e.g., asset, space, and facility management). The system relies on commercial off-the-shelf software. The Library is investing in additional modules of the underlying software relating to the management of work orders, keys, reservations, event support, and customer service. OSI and Library Services Twitter Research Access: The Library plans to develop a pilot for making a collection of “tweets” (i.e., brief messages of up to 140 characters in length) from the online social networking service Twitter available for research access. The Library and Twitter signed an agreement that gave the Library, under specific terms and conditions, all public tweets that were made from 2006 through April 2010. The Library and Twitter also agreed that Twitter would provide all public tweets on an ongoing basis under the same terms. Control over the Library’s IT spending lies primarily with each of the individual service units—some of which have their own IT organizations and CIOs. For example: CRS: The Information Technology and Management Office, which is led by the CRS CIO, is responsible for managing the majority of the IT systems used by CRS staff. Copyright Office: The Copyright Office of the CIO, which is led by the Copyright CIO, is responsible for maintaining the Copyright Office IT systems.
Library Services: The Automation Planning & Liaison Office within Library Services is responsible for procuring IT hardware, software, and services; managing IT assets; and coordinating with other Library IT organizations. These organizations are accountable to the heads of their respective service units. For example, the CIO for CRS reports to the Director of CRS, not to the Library of Congress CIO. As GAO and others have highlighted in several reports, the Library has faced long-standing challenges in effectively managing its IT. In 1996, we issued a report on a management review of the Library, covering six major issue areas, including its use of IT. Among other things, the review found that (1) the Library lacked a sufficient strategic focus on information resources management that was linked to its mission objectives; (2) its existing technology infrastructure was not integrated across the Library at a level appropriate to reduce interfaces between systems, lessen the need for maintenance resources, and minimize redundant data; (3) technology programs and projects were not managed as investments, with insufficient attention paid to program and project costs, priorities, and performance; and (4) the Library had not decided whether it should continue to build new systems in-house or whether it would be more cost-effective to acquire these capabilities elsewhere. The report recommended a number of actions the Library could take to improve its management of IT in these areas. In commenting on the report, the Library acknowledged the need to link information resources to its mission objectives and re-focus its infrastructure to reflect changes in the technology environment. The findings in this report were echoed in a review conducted by the National Academy of Sciences and in several reports from the Library’s Inspector General (IG). 
In 2000, the National Academy of Sciences released a report, commissioned by the Library, that examined the need for the Library to develop a digital strategy to cope with the fact that content was increasingly being produced in digital forms. The study found that all Library service units spent money on IT and that this spending was not fully coordinated across the Library. It was unable to quantify this spending because the Library had not established financial accounting for IT. The study concluded that “[s]hadow systems and duplication are the inevitable outcome of such arrangements.” Additionally, the study found that strategic direction for IT must come from the office of the Librarian, but that the most senior members of that office—the Librarian of Congress, Deputy Librarian, and Chief of Staff—did not have any specific background or expertise in IT. Further, the report identified a number of findings relating to information security, referring to this issue as “[b]y far the most serious infrastructure problem” at the Library. The study made a number of recommendations, including that the Library (1) establish a Library-wide committee tasked with, among other things, approving significant IT investments; (2) appoint a second Deputy Librarian in order to provide strategic direction for the Library’s IT; and (3) address its information security findings. In March 2009, the Library’s IG reported on the agency’s IT strategic planning efforts since the issuance of the National Academy report, including the extent to which the Library had implemented the report’s recommendations.
The report noted that the Library had made many technology improvements, including migrating from mainframe systems, updating the storage architecture, building an alternate computing facility that provides backup for its data centers, building a secure financial hosting environment, and developing a National Institute of Standards and Technology-compliant certification and accreditation process. The report further noted that the Library had standardized internal and external websites, developed digital collections containing more than 300 terabytes of data, and built a network of national and international digital partners. However, the IG also reported that the strategic planning process at the Library was not well integrated with essential planning components and not instituted Library-wide. Specifically, strategic planning for IT was not linked directly to the overall Library strategic plan and did not have a “forward-looking” view; strategic planning was not linked to the IT investment process; the organizational structure of the ITS directorate did not foster strategic planning and good IT governance; areas of overlap existed in support services and systems, including a number of service units that maintained their own technology offices and help desk functions; the Library was missing an enterprise architecture program, which should be coupled with a strategy for implementing future technology; and ITS customer service needed improvement, to include the use of service-level agreements. The IG stated that these findings were in large part the result of an unclear sense of how IT planning fits into the Library’s mission and the roles and responsibilities of its employees, as well as a lack of linkage between IT strategic planning processes and actual performance. The IG made a number of recommendations to address these weaknesses, and Library management agreed with the majority of the report’s findings and recommendations.
The Library’s IG issued a follow-up report in December 2011, in which it found that the Library had made progress toward implementing the recommendations made in its prior report, but not as much as expected. Specifically, it reported that the Library needed to (1) develop an updated OSI strategic plan, (2) improve data for IT investments, (3) separate the IT function from OSI and establish an Office of the Chief Information Officer, (4) develop a structured procedure to continuously identify and prevent duplicative IT costs throughout the Library by consolidating IT services, (5) increase oversight of the Library’s enterprise architecture, and (6) strengthen customer service to the service units. Library management concurred with 17 of the 21 recommendations the IG made in its report. More recently, the IG has reported on challenges relating to (1) procurement of IT workstations, (2) oversight of the National Library Catalog Project, and (3) certification and accreditation. In September 2012, the IG reported that a lack of inventory controls had resulted in unnecessary purchases and an aging IT inventory. Specifically, the IG found that the Library’s logistics directorate and ITS did not effectively coordinate with service units, which resulted in unnecessary purchases, such as 484 24-inch, flat-panel monitors that had sat undistributed at the Library’s warehouse since 2008, and 224 24-inch, flat-panel monitors that were purchased in 2010 but also sat in the warehouse undistributed. The IG recommended, among other things, that ITS improve its communications and transparency with service units. The Library concurred with this recommendation.
In September 2013, the IG reported that the Library did not provide effective oversight of the National Library Catalog Project. Specifically, the IG found that the Library’s IT Steering Committee (ITSC)—the committee responsible for reviewing and analyzing IT investments—did not review Library Services’ now-terminated $2.2 million National Library Catalog project despite its meeting the cost criterion requiring oversight by the committee (i.e., 3-year costs exceeding $1 million). The IG stated that the ITSC did not review this investment because it was in development prior to the formation of the committee. The IG recommended that the ITSC review any other investments in development that met criteria requiring its oversight. The Library agreed with the recommendation. In October 2014, the IG reported that governance and management oversight of the Library’s certification and accreditation process needed to be strengthened. Specifically, the IG found that security assessments and remedial action plans were not always completed in a timely manner. The IG recommended, among other things, that the Library ensure that the security assessments and plans be completed in accordance with Library policy and establish an enforcement mechanism to ensure that remedial action plans are addressed. The Library concurred with these recommendations. Congress has also recognized the Library’s IT management challenges. For example, in its report accompanying the fiscal year 2012 legislative branch appropriations bill, the House Appropriations Committee directed the Librarian of Congress to consider managing within the Office of the Librarian all Library IT planning and resource allocations to ensure that IT requirements are properly prioritized and resources are effectively used. GAO has identified a set of essential and complementary management disciplines that provide a sound foundation for IT management.
These include the following: Strategic planning: Strategic planning defines what an organization seeks to accomplish and identifies the strategies it will use to achieve desired results. A defined strategic planning process allows an agency to clearly articulate its strategic direction and to establish linkages among planning elements such as goals, objectives, and strategies. A well-defined IT strategic planning process helps ensure that an agency’s IT goals are aligned with its strategic goals. Also as part of their strategic planning efforts, organizations should develop an enterprise architecture, which is an important tool to help guide an organization toward achieving the goals and objectives in its IT strategic plan, and implement human capital management practices to sustain a workforce with the skills necessary to execute the organization’s strategic plan. Library policy also recognizes the importance of IT strategic planning, enterprise architecture, and sustaining a workforce that is aligned with the strategic plan. IT investment management: IT projects can significantly improve an organization’s performance, but they can also become costly, risky, and unproductive. Agencies can maximize the value of IT investments and minimize the risks of IT acquisitions by having an effective and efficient IT investment management and governance process, as described in GAO’s guide to effective IT investment management. Recognizing the importance of IT investment management, in 1996 Congress passed the Clinger-Cohen Act, which requires executive branch agencies to establish a process for selecting, managing, and evaluating IT investments in order to maximize the value and assess and manage the risks of IT acquisitions. Although not required to do so, the Library has embraced this requirement. System acquisition and development: Agencies should follow disciplined processes for developing or acquiring IT systems. 
These include requirements development, risk management, and cost estimating and scheduling, among others. Best practices in these areas have been identified by organizations such as Carnegie Mellon University’s Software Engineering Institute (SEI) and GAO. Information security and privacy: Federal agencies rely extensively on IT systems and electronic data to carry out their missions. Effective security for these systems and data is essential to prevent data tampering, disruptions in critical operations, fraud, and inappropriate disclosure of sensitive information, including personal information entrusted to the government by members of the American public. Recognizing the importance of information security and privacy, Congress enacted the Federal Information Security Management Act of 2002 (FISMA), which requires executive branch agencies to develop, document, and implement an agency-wide information security program. Additionally, in order to help agencies develop such a program, the National Institute of Standards and Technology (NIST) has developed guidance for information security and privacy. Although it is not subject to FISMA, the Library has embraced the law’s requirements as well as NIST guidance for information security and privacy. Service management: Agencies should develop and implement a process for ensuring that IT services are aligned with the business needs of an organization and actively support them. The Information Technology Infrastructure Library practices are a widely accepted approach to IT service management.ITS, the Library has adopted these practices for managing ITS’s services. According to the Director of IT leadership: Effective leadership, such as that of a CIO, can drive change, provide oversight, and ensure accountability for results. Congress has also recognized the importance of having a strong agency CIO. 
For example, as part of the Clinger-Cohen Act, Congress required executive branch agencies to establish the position of agency CIO. The act also gave these officials responsibility and accountability for IT investments, including IT acquisitions, monitoring the performance of IT programs, and advising the agency head whether to continue, modify, or terminate such programs. More recently, in December 2014, Congress passed federal information technology acquisition reform legislation (commonly referred to as FITARA), which strengthened the role that agency CIOs are to play in managing IT. For instance, the law required executive branch agencies to ensure that the CIO had a significant role in the decision process for IT budgeting, as well as the management, governance, and oversight processes related to IT. As previously mentioned, although not required to do so, the Library has established a CIO position and has made this official responsible for, among other things, overseeing the Library's enterprise architecture and IT investment management processes.

Comprehensive strategic planning is essential for an organization to define what it seeks to accomplish, identify strategies to efficiently achieve the desired results, and effectively guide its efforts. Key elements of IT strategic planning include an IT strategic plan and an enterprise architecture that together outline the agency's IT goals, measures, and timelines. In addition, effective human capital management is critical to sustaining an IT workforce with the necessary skills to execute a range of management functions that support the agency's mission and goals. However, the Library has not completed an IT strategic plan. An IT strategic plan has been drafted, but it does not identify strategies for achieving defined goals or interdependencies among projects.
Regarding enterprise architecture, the Library has developed an architecture intended to reflect the current state of its IT systems and operations, but, according to the official who served as acting CIO from April 2014 to January 2015, the architecture is not reliable. Further, the Library has not developed a target architecture that defines its desired state or a plan for achieving this state. Senior Library officials noted that the agency had not made IT strategic planning or enterprise architecture a priority. At the conclusion of our review in January 2015, the Library’s Chief of Staff stated that the agency plans to draft a new IT strategic plan within 90 days. Further, the Library has not performed an organization-wide assessment of IT skills or future needs. Instead, each service unit is responsible for undertaking this assessment on its own. Until it fully implements key elements of IT strategic planning, the Library cannot be assured that its IT investments will match its strategic direction and effectively position the agency to cope with future challenges. As we have previously reported, an IT strategic plan serves as an agency’s vision or road map and helps align its information resources with its business strategies and investment decisions. Key elements of an IT strategic plan include, among other things, (1) alignment with the agency’s overall strategic plan, (2) results-oriented goals and performance measures that permit it to determine whether it is succeeding, (3) strategies it will use to achieve desired results, and (4) descriptions of interdependencies within and across projects so that these can be understood and managed. Further, Library policy states that OSI has primary responsibility for setting the Library’s IT strategic direction. In 2010, the Library developed its most recent overall strategic plan for fiscal years 2011 through 2016. 
The plan included five strategic goals and strategies to achieve those goals, including strategies involving IT. For example, strategies for achieving the goal of managing proactively for demonstrable results included implementing an enterprise architecture program and improving IT governance and investment management processes. As another example, one strategy for achieving the goal of sustaining an effective national copyright system was to improve processes and IT infrastructure to ensure timeliness of copyright registration. However, the Library has not completed an IT strategic plan. The official who served as Deputy Librarian from June 2012 to December 2014 explained that, during his tenure, he provided the Librarian with draft versions of agency-wide and IT-specific strategic plans that he had developed. The draft IT plan covered fiscal years 2015 to 2020 and addressed some, but not all, key IT strategic planning elements. Specifically, the plan included five goals: (1) use a shared services approach, (2) establish the most effective IT organization and governance, (3) apply outside consultation and guidance where applicable to meet library needs, (4) align Library staff skills with its IT needs, and (5) ensure high levels of information security and preservation. However, the draft IT plan did not identify what strategies the Library would use to achieve these goals and related performance measures. Additionally, the plan did not describe interdependencies between projects, which would help further define the relationships between projects and shared services. The former Deputy Librarian explained that the IT strategic plan would be followed by an IT support plan, which would include initiatives, projects, milestones, and timelines for implementing the IT strategic plan. Further, the date for completing the Library's IT strategic plan slipped twice.
Specifically, during a hearing on the Library's fiscal year 2015 budget in March 2014, the former Deputy Librarian first committed to delivering the IT strategic plan by the end of August 2014. Subsequently, that date slipped to January 2015, and then was delayed again to September 2015. Moreover, we were told by the Librarian in December 2014 that the draft IT plan was merely a starting point for the Library's IT strategic planning efforts and was not the agency's official draft. In January 2015, at the conclusion of our review, the Chief of Staff stated that the Library plans to draft a new IT strategic plan within 90 days. The Librarian stated that the Library intends to finalize the plan by September 2015. If the Library finalizes an IT strategic plan that sets forth a long-term vision and the intermediate steps that are needed to guide the agency, it will be better positioned to effectively prioritize investments and use the best mix of limited resources to move toward its longer-term, agency-wide goals. Like an IT strategic plan, an enterprise architecture is an important tool to help guide an organization's IT investments by ensuring that the planning and implementation of those investments take full account of the business and technology environment in which the systems are to operate. According to our research, a well-defined enterprise architecture thoroughly describes the current and target states of an organization's IT systems and business operations and identifies the gaps and specific intermediate steps it plans to take to achieve the target state. Additionally, in order to enable institutional commitment to an enterprise architecture, agencies should, among other things, develop an organizational policy for enterprise architecture and establish an executive committee representing the enterprise that is responsible and accountable for enterprise architecture.
To its credit, the Library has established a policy and executive committee for enterprise architecture. This policy describes roles and responsibilities for developing, maintaining, and using the enterprise architecture. For example, the agency's chief architect is to report to the CIO and is responsible for, among other things, coordinating and overseeing business and IT planning and advising key stakeholders in business and IT planning. Additionally, the policy makes the Library's Executive Committee responsible for ensuring that the architect assumes responsibility for the Library's enterprise architecture. However, the Library of Congress has not fully developed its enterprise architecture. The agency has an enterprise architect who developed an architecture that describes the current state of the Library's IT systems and operations, to include performance, business, data, services, and technology. However, management has raised concerns about the architecture's reliability. For example, according to the former acting CIO, data for the architecture were not gathered from management-validated stakeholders (i.e., individuals identified by their respective service unit as being knowledgeable about the current and target states of the unit's IT systems and business operations). Instead, the enterprise architect gathered information for the architecture by interviewing over 500 employees across the Library. Additionally, the architecture does not reflect the target state of the Library's IT systems and business operations, or the gaps and specific steps that the Library should take to achieve the target state. The lack of progress in developing the enterprise architecture was enabled, in large part, by limited oversight from the Library's CIO. According to the former acting CIO, developing the Library's enterprise architecture was not a priority for the previous CIOs. She also told us that the previous CIOs did not effectively oversee the enterprise architect.
In the absence of appropriate oversight, according to the acting CIO, the enterprise architect has taken an isolated, self-directed approach to developing the architecture, which has not met the organization’s needs. The Library has taken initial steps toward improving its architecture. According to the former acting CIO, the three individuals who have recently served as acting CIO on a rotating basis collectively decided to improve the management of the enterprise architect. That official also stated that, in order to improve the reliability of the data collected by the enterprise architect, that individual is now required to collect data from stakeholders in each service unit who have been identified by the ITSC. Additionally, at the conclusion of our review, the former acting CIO stated that the enterprise architect has been detailed to work under the direction of the Deputy Director of ITS until April 2015 so that his work can be integrated with other architecture work in ITS. Further, she stated that an independent, expert reviewer will assess the enterprise architect’s work and determine how the Library can move its architecture to the next level of maturity. That official also stated that strategic direction for the enterprise architecture program will be integrated with the Library’s IT strategic plan. Until the Library establishes and implements an approach to developing a well-defined enterprise architecture—to include providing adequate oversight of the work performed by the enterprise architect—there is increased risk that organizational operations and supporting technology infrastructures and systems will be duplicative, poorly integrated, unnecessarily costly to maintain, and unable to respond quickly to shifting environmental factors. 
Key to an agency’s success in managing its IT systems is sustaining a workforce with the necessary knowledge, skills, and abilities to execute a range of management functions that support the agency’s mission and goals. Achieving such a workforce depends on having effective human capital management, which includes assessing current and future agency skill needs by, for example, analyzing the gaps between current skills and future needs, and developing strategies for filling the gaps. Taking such steps is consistent with activities outlined in human capital management models that we and the Office of Personnel Management have developed. Although its human capital plan calls for the organization to assess gaps in current and anticipated skills across all employees within the Library, such an assessment has not been performed for IT skills. Additionally, although identifying skills and competencies that are clearly linked to an agency’s mission and longer-term IT goals is essential—especially in an organization like the Library, which has IT staff in every service unit—the Library’s IT human capital plan does not provide information about future IT human resource needs. The former acting CIO acknowledged that the Library has not performed an organization-wide assessment of skills or future needs. Instead, according to the acting CIO, each service unit is responsible for managing its own human capital skills. For example, that official told us that, with respect to OSI, skills and competencies are identified when an individual leaves the organization, or when OSI plans to hire additional staff. However, this approach does not provide the CIO with visibility into the service units’ IT human capital efforts. We have previously reported that CIOs at executive branch agencies without sufficient influence over the hiring of IT staff were limited in their ability to ensure appropriate IT staff were being hired to meet mission needs.
The Library has taken initial steps to assess the needs of its IT workforce. According to the Director of Human Resources Services, the Library’s Human Capital Planning Board conducted a pilot initiative in the Acquisitions and Bibliographic Access Directorate within Library Services to identify competencies and skills, including those relating to IT. According to the Director of Human Resources Services, the Library plans to identify skills and competencies, including those relating to IT, to be used initially in order to assess the skills for three succession planning groups: senior-level executives, managers/supervisors, and succession target occupations. This official stated that this effort is expected to extend into fiscal year 2016, and that the Library plans to institutionalize this approach in other Library offices. However, the Library has yet to establish a date for completing the effort. Until the Library ensures that its human capital planning and analysis address the specific competencies and skills critical to meeting its future IT needs, the agency jeopardizes its ability to deliver IT support. Additionally, without an organization-wide approach to assessing needed IT skills, the Library is at risk of developing a workforce in each service unit with overlapping competencies. Ensuring that investments in IT meet the needs of the organization and are being effectively managed is important for any federal agency. Congress has recognized the importance of effective IT investment management by requiring agencies in the executive branch to establish an investment management process. Although not required by law to do so, the Library has also begun to establish such a process. 
Specifically, the Library’s Information Resource Management Policy and Responsibilities calls for the Library to align IT investments with its strategic goals and to connect strategic planning, enterprise architecture, and IT investment management in order to design and leverage Library resources to meet the needs of Congress and the public. Since 2010, the Library has taken steps to build a foundation for managing its IT investments, including instituting an investment board and establishing elements of a process for selecting investments. However, the Library has not implemented an IT investment management process that fully addresses key practices. In particular, its investment board has not always operated as intended. Further, the Library’s process for selecting IT investments is not aligned with decisions to fund investments, and no process has been established for reselecting ongoing investments. Moreover, the Library, including its service units, did not always follow its own policy for including major investments in its agency-wide investment review process. Regarding investment oversight, the Library established a process for overseeing the performance of selected investments, but the data informing this process were not always complete. Moreover, the Library does not have a comprehensive process for tracking its IT spending and does not have an accurate inventory of its IT assets. Consequently, the Library does not know how much it spends annually on IT or what kinds of equipment it is currently using. Finally, the Library is not managing its IT as a portfolio to determine that capabilities, once implemented, are delivering intended value and that the agency is identifying the appropriate mix of IT projects that best meet its mission needs. These weaknesses can be attributed, in part, to unclear or incomplete policies as well as inconsistent implementation of the policies that have been developed. 
Until the Library addresses these weaknesses, it will not have the investment structure and processes needed to effectively manage its IT projects, systems, and assets. GAO’s IT investment management framework is composed of five progressive stages of maturity that mark an agency’s level of sophistication with regard to its IT investment management capabilities. Such capabilities are essential to the governance of an organization’s IT investments. At the Stage 2 level of maturity, an organization lays the foundation for sound IT investment processes that help it attain successful, predictable, and repeatable investment control processes at the project level. These processes focus on the agency’s ability to select, oversee, and review IT projects. According to the framework, Stage 2 critical processes include the following:

Instituting the investment board: As part of this process, an agency is to establish an enterprise-wide investment review board to be responsible for defining and implementing the IT investment management governance process.

Selecting investments that meet business needs: As part of this process, an agency is to establish and implement policies and procedures for selecting and reselecting IT investments that meet the agency’s needs.

Providing investment oversight: This process includes establishing and implementing policies and procedures for overseeing IT projects and ensuring that they align with the agency’s business needs.

Capturing investment information: This process includes establishing and implementing policies and procedures for developing and maintaining a comprehensive repository of information on IT investments and assets.

The establishment of decision-making bodies or boards is a foundational component of effective IT investment management.
According to the IT investment management framework developed by GAO, an organization should, among other things, establish an enterprise-wide investment review board to be responsible for defining and implementing IT investment governance policies and procedures. In order for the IT investment management process to function effectively, an investment board must operate within its assigned authority and responsibility so that investments are properly aligned with the organization’s objectives and are reviewed by those with the authority to make IT management decisions. Additionally, the organization’s IT investment management process should describe how these processes are coordinated with other organizational plans, processes, and documents, including, at a minimum, the IT strategic plan and enterprise architecture. The Library established an investment board that is responsible for defining and implementing IT investment governance policies and procedures. Specifically, the Library’s policy on information resource management established the ITSC, an investment board made up of senior officials from across the Library’s various service units. The policy requires the board to review major investments in IT that meet at least one of several agency-defined criteria. These include those investments that are high risk, have high visibility (internally or externally), or have estimated 3-year costs exceeding $1 million. The Library’s information resource management policy also gives the ITSC responsibility for formalizing the policies and procedures for selecting and managing IT investments. Library policy also describes IT management responsibilities of the Executive Committee—the Library’s most senior governance board. For example, this committee is to provide strategic mission and priority guidance to the ITSC. However, the Library has not clearly defined the division of responsibilities between the two bodies. 
Specifically, although Library policy gives IT investment selection decision-making authority to both the ITSC and the Executive Committee, it does not clearly specify when the ITSC should make a decision and when circumstances require an Executive Committee decision. Since the establishment of the ITSC in 2010, the Executive Committee has not made any decisions regarding the selection of IT investments; instead, the ITSC has made all such decisions. In March 2014, the ITSC developed a process for determining when investments are to be reviewed by the Executive Committee; however, this process has not yet resulted in any decisions being escalated to the Executive Committee. Moreover, the Director of ITS, who also chaired the ITSC from July 2013 to January 2015, stated that this process had not been approved by the Librarian or the Executive Committee. According to this individual, he plans to submit this revision to the Office of the Librarian as part of a Library-wide effort to streamline and centralize Library policies. Additionally, the Library’s investment management process is not fully coordinated with its IT strategic plan and enterprise architecture. Specifically, as previously mentioned, the Library does not have an IT strategic plan or a complete enterprise architecture to guide its IT investment decisions and ensure that those decisions meet the organization’s business needs. Coordination between the Library’s investment management process and its efforts to improve its strategic plan and enterprise architecture could help ensure that investments support the Library’s strategic goals and do not duplicate existing investments. Until Library policy is updated to clearly define the roles and responsibilities of the ITSC and Executive Committee and these bodies operate according to their designated authority and responsibilities, the Library cannot ensure that investments are properly aligned with the business needs of the entire organization. 
In addition, without a strategic plan, enterprise architecture, and a process for linking these areas to the investment management process, the ITSC and Executive Committee will not have a roadmap needed to make investment decisions that best meet the needs of the Library. According to our IT investment management framework, to support well- informed decisions, organizations should establish and implement policies and procedures for selecting and reselecting IT investments that meet the agency’s needs, and these policies should integrate funding and selection decisions. Documenting and implementing these processes are basic steps toward realizing increased maturity in how the organization selects its IT projects. To its credit, the Library developed policies and procedures that outline how IT investments are to move through the selection process, from initial proposal to final approval, with steps for evaluating and prioritizing the investments based on their alignment with business needs. As previously mentioned, Library policy requires the ITSC to review investments that are high risk, have high visibility, or have estimated 3-year costs exceeding $1 million. Before an investment is selected, the ITSC is to assign it a score based on quantifying its risk factors (e.g., high cost, length of development cycle, lack of clear and measurable objectives) and then evaluating those factors along with the significance of its program benefits (e.g., how it will contribute to organizational performance or how it will respond to user needs). The ITSC is then to use the score to determine whether it will select the investment for project development. However, the Library has not developed policies or procedures for reselecting investments that are already operational for continued funding. This is important because, according to the former acting ITSC chair, operational investments account for the majority of the Library’s IT spending. 
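The risk/benefit scoring step described above can be illustrated with a simple weighted tally. This sketch is purely hypothetical: the report does not disclose the ITSC's actual factors, weights, or cutoff, so the factor names and threshold below are assumptions for illustration only.

```python
# Illustrative sketch of a risk/benefit selection score.
# Factors, ratings, and the threshold are hypothetical; the report
# does not specify the ITSC's actual scoring formula.

def selection_score(risk_factors, benefit_factors):
    """Higher benefits raise the score; higher risks lower it.
    Each factor is rated 1 (low) to 5 (high)."""
    benefit = sum(benefit_factors.values())
    risk = sum(risk_factors.values())
    return benefit - risk

# A hypothetical investment proposal, rated on risk factors such as
# high cost or unclear objectives, and on expected program benefits.
proposal = {
    "risk": {"cost": 4, "development_cycle_length": 3,
             "unclear_objectives": 2},
    "benefit": {"organizational_performance": 5, "user_needs": 4},
}

score = selection_score(proposal["risk"], proposal["benefit"])
SELECTION_THRESHOLD = -2  # hypothetical cutoff
selected = score >= SELECTION_THRESHOLD
print(score, selected)  # prints "0 True"
```

In a real selection process, each factor would be rated against documented criteria and the result weighed by the board alongside qualitative considerations; the point of the sketch is only that risk and benefit ratings are combined into a single score that is comparable across proposals.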
The former acting ITSC chair also stated that the Library decided not to review, as part of its investment management process, investments that were either in development or already operational prior to the establishment of the ITSC in February 2010. For instance, in September 2013, the Library’s IG reported that the ITSC did not review Library Services’ now-terminated $2.2 million National Library Catalog project because it was in development prior to the ITSC’s formation. In October 2013, the former acting ITSC chair directed the members to bring before the committee any projects that (1) met the Library’s definition of major investments in IT that are to be reviewed by the ITSC, (2) were still in development, and (3) were in development prior to the establishment of the ITSC in February 2010. While this was a positive step, this decision did not address investments that were operational. Additionally, because the decision was made 3 years after the creation of the ITSC, there were likely some IT investments that were developed and completed during this time that did not receive ITSC review. According to the former acting ITSC chair, the Library will consider whether to review additional investments in the future. In addition, the Library does not have policies and procedures for integrating funding and selection decisions. In fact, according to the former acting ITSC chair, ITSC selection does not affect decisions to allocate funding for investments, because, in some instances, the service units secure funding for their investments before the selection process begins. The former acting ITSC chair added that, as a compensating control, the ITSC could request that ITS not devote its own resources to an investment until the committee’s concerns are resolved. However, this process would not affect the investment if the service unit proposing it decides not to use ITS resources.
Until the Library fully integrates IT investment selection with funding decisions, selection decisions may not reflect an organization-wide perspective on what IT investments may best meet the Library’s needs. Further, the Library, including its service units, did not always follow its process for selecting new investments. Specifically, the ITSC does not review all major IT investments that, according to its policy, should be reviewed. For example, as discussed in more detail in our report on the Copyright Office, the office did not present four of its recent IT initiatives to the ITSC, despite each having estimated 1-year costs exceeding $1 million. In addition, the ITSC did not review the Twitter Research Access investment while it was in the planning stages because it was approved by the Executive Committee without going through the Library’s selection process. However, by not going through the Library’s selection process, investments are not subject to the selection reviews, which evaluate, among other things, cost, schedule, scope, strategic impact, and customer needs on an enterprise-wide level. For example, as discussed in more detail later in the report, the Twitter Research Access investment did not create a reliable cost estimate nor did it develop a schedule. Going through the selection process would help ensure that investments are better planned and better aligned with agency needs from the outset, which could eliminate or reduce the severity of future problems during system development. Until it establishes and implements a complete selection process for all major IT investments that links its IT investment selection decisions with agency funding decisions, the Library is at greater risk of not selecting the appropriate mix of IT investments that would best meet its organizational and technological needs as well as support its priorities for improvement. 
As with investment selection, organizations should have a documented, well-defined process for overseeing investments once they have been selected to ensure that they continue to align with the agency’s business needs. Effective investment oversight and evaluation involves, among other things, developing policies and procedures for reviewing the progress ongoing projects have made in meeting cost, schedule, and risk expectations. The ITSC established procedures to assess the progress of investments in development. These reviews center on quarterly reports submitted to the ITSC by the investment teams, which update the board on, among other things, cost and schedule variances, as well as how the teams are managing key risks. However, for the three selected investments that we reviewed, cost, schedule, and risk data in the quarterly performance reports were not always complete or reliable. Regarding cost information, one investment—Momentum Upgrade and Migration—included initial and current cost estimates and the variances between the two figures in its July 2014 quarterly report. However, the other two investments—FAME and Twitter Research Access—did not provide all cost information in their July 2014 submissions. For the FAME investment, although OSO provided cost information for the investment as a whole, it listed the costs needed to achieve its next key milestone in September 2014 as $0. Finally, the Twitter Research Access investment did not provide any cost information in its quarterly report submitted in July 2014. In a written response to our findings, the Library acknowledged that this information was omitted for the Twitter Research Access investment, and stated that this information was included in the subsequent report. Moreover, the cost information provided in these reports is not fully reliable. Specifically, as discussed in more detail later in the report, the initial cost estimates for all three investments were not comprehensive.
With respect to schedule information, two of the three investments—Momentum Upgrade and Migration and Twitter Research Access—submitted all schedule information as part of their quarterly performance reports submitted in July 2014. Regarding FAME, although the Library identified the planned start and completion dates for the investment, the investment did not provide any meaningful information regarding its next quarter. Rather, the relevant section of the quarterly performance report simply stated “9/30/2014,” which was the last day of the relevant quarter. Moreover, the schedule information provided in the reports was not fully reliable. Specifically, as explained in more detail later in this report, one investment—Twitter Research Access—did not develop a project schedule, and the other investments’ schedules were not well-constructed. Regarding risk, as discussed in more detail later in this report, the three investments did not always document the context and consequences of occurrence for all risks, and one of the investments—FAME—did not describe mitigation plans for all risks. Library officials recognized the need to make improvements to these data and recently revised the quarterly performance report template in order to help facilitate improvements. Until its oversight processes are informed by complete and accurate investment information, the Library cannot ensure that its investments are meeting expectations related to cost, schedule, and risk. Without this information, the Library may not be able to see the early warning signs that indicate the need for corrective action, resulting in failed investments or investments that do not adequately support business processes, meet user needs, or provide a successful return on investment. To make informed decisions regarding IT investments, an organization must be able to acquire, store, and retrieve pertinent information about each investment to be used in future investment management decisions.
As we have reported, for this critical area, the organization should establish and implement a process for maintaining a full and accurate accounting of IT-related expenditures. In addition, the organization should track information on the organization's IT assets, including, for instance, the physical location and owner of each resource. According to the GAO IT investment management framework, effectively capturing investment information requires using a standard, documented procedure for developing and maintaining IT data that are not only useful for decision-making, but are also timely, sufficient, complete, and comparable.

The Library has not fully established and implemented a process for maintaining a full accounting of IT-related expenditures. Instead, it only collects information on the investments reviewed by the ITSC, which includes investment charters, cost-benefit analyses, and performance reports. Consequently, the Library does not know how much it spends on IT. In the absence of this information, we estimated that the Library obligated at least $119 million on IT for fiscal year 2014. We based this estimate on data from the Library's accounting and human resources systems. This allowed us to identify spending on IT equipment and services as well as salary information for staff performing IT-related functions. However, as discussed in more detail in appendix I, this $119 million does not reflect all of the Library's IT spending. At the conclusion of our review in December 2014, the Library's Chief Financial Officer told us that the Library has required that service units indicate, for planned fiscal year 2015 expenditures, whether the expenditures relate to IT. A senior advisor to the Chief Financial Officer estimated that the Library would be able to provide a reliable IT spending figure by March 2015. However, the Library has not established guidance to assist service units in classifying planned IT expenditures.
The cost estimates for the investments reviewed by the ITSC in fiscal year 2014 collectively totaled approximately $12.5 million, which is a small percentage of the Library's overall IT spending. With regard to capturing IT asset information, the Library's primary asset inventory system is highly inaccurate. Integrated Support Services, a division of OSO, has developed a system to track and manage the Library's assets, including those assets related to IT. However, many of the IT assets listed in this system are no longer in use. For example, the system lists over 18,000 “active” personal computers, even though, according to Library officials, the Library actually has fewer than 6,500 personal computers in use. The list of “active” personal computers includes over 12,000 computers from the manufacturer from which the Library primarily purchases computers and more than 5,000 computers from four other manufacturers. Without an accurate inventory, there is increased risk of undetected theft and loss.

In a written response, the Library acknowledged that its primary inventory system does not have reliable information on IT hardware. The Library cited multiple reasons for this weakness, including that its primary inventory system contains legacy data from an obsolete, decommissioned system. Additionally, the Library stated that a comprehensive inventory of non-capitalized assets has not been conducted in several years. Further, the Library said that its current policy on inventory management does not require non-capitalized assets to be identified in its primary inventory system. The Library also noted that ITS and CRS maintain other systems with accurate and reliable information about the majority of hardware connected to the Library's network. It added that the items in the ITS and CRS systems make up the vast majority of the IT inventory in the Library. However, while these systems may have reliable information on most Library IT hardware currently in use, they do not have information on the hardware in the primary inventory system that is no longer in use. Without this information, the Library cannot provide the disposition of the hardware that is no longer being used. Additionally, maintaining this information in separate systems can increase the risk of unnecessary purchases of items already on hand, which, as discussed later in this report, has occurred at the Library. Further, the ITS and CRS systems do not include information on IT not connected to the Library's network. At the conclusion of our review, the Library outlined steps it plans to take to address the reliability of its primary inventory. These include revising its policy in early 2015 to require key non-capitalized IT hardware assets to be identified in the Library's primary inventory system and populating the Library's primary inventory system with data from the ITS and CRS systems. The Library added that, because it has not yet developed a process for this, it will not have an accurate inventory in its primary system until March 2016. Without fully developing and implementing a process for maintaining a full accounting of IT-related expenditures, the Library will not have the information needed to make informed decisions. Further, until it ensures that its primary inventory system has accurate information on IT hardware, there is increased risk of unnecessary purchases of items already on hand.

Once an agency attains Stage 2 maturity, it needs to implement critical processes for managing its investments as a portfolio to move on to Stage 3. An IT investment portfolio is the combination of an organization's IT assets, resources, and investments (including those in production). Taking an agency-wide perspective enables an organization to consider new proposals along with previously funded investments, identifying the appropriate mix of IT projects that best meet mission needs, organizational needs, technology needs, and priorities for improvement.
According to GAO's IT investment management framework, Stage 3 critical processes include, among others, (1) conducting post-implementation reviews to compare actual investment results with decision makers' expectations for cost, schedule, performance, and mission improvement outcomes; (2) defining the portfolio criteria; (3) creating the portfolio; and (4) evaluating the portfolio. Although the Library has established procedures for conducting post-implementation reviews, it has yet to apply these procedures to all operational investments. Specifically, in August 2014 and September 2014, the ITSC developed templates and instructions for conducting the reviews. At the conclusion of our review in March 2015, the interim CIO stated that the ITSC performed the first four post-implementation reviews. Although this is a positive step, the Library has yet to perform these reviews on all of its operational investments. Until such reviews are consistently performed on all operational investments, the Library will not be able to learn from all past investments and evaluate the effectiveness of its investment management process. With respect to defining portfolio criteria and creating and evaluating the portfolio, according to the former acting ITSC chair, the agency has not concentrated on implementing these Stage 3 key practices because it has focused its resources on establishing the Stage 2 practices associated with building the IT investment management foundation at the level of the individual investment. Full implementation of the Stage 3 critical processes associated with portfolio management will provide the Library with the capability to determine whether it is selecting the mix of products that best meet the agency's business needs.

The Software Engineering Institute (SEI) at Carnegie Mellon University, GAO, and others have developed and identified best practices to help guide organizations to effectively plan and manage their acquisitions of major IT systems.
Our prior reviews have shown that proper implementation of such practices can significantly increase the likelihood of delivering promised system capabilities on time and within budget. These practices include, among others, risk management, requirements development, cost estimating, and scheduling. However, the Library has not developed policies in these areas that address key practices. Partly because the Library does not have organization-wide policies in these areas, the selected investments that we examined did not fully implement key practices for risk management, requirements development, cost estimating, and scheduling. Until the Library establishes and implements these practices, there is increased risk that its investments will incur cost overruns and schedule slippages and fail to deliver capabilities needed to meet the mission of the Library.

Risk management is a process for anticipating problems and taking appropriate steps to mitigate risks and minimize their impact on program commitments. According to leading industry guidance, risk management includes the following elements: developing a risk management strategy; identifying and documenting risks; evaluating, categorizing, and prioritizing risks; developing risk mitigation plans; and monitoring the status of each risk periodically and implementing the risk mitigation plans as appropriate. Organizations should establish a risk management policy that calls for these elements to be addressed by individual investments. The Library of Congress has not established an organization-wide policy for IT risk management. Instead, only one directorate within a service unit—OSI's ITS—has developed guidance in this area. This guidance includes templates for a risk management strategy and a risk register. The risk register provides a mechanism for investments to identify, document, evaluate, categorize, and prioritize risks; develop risk mitigation plans; and monitor the status of risks and implement risk mitigation plans as needed.
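The risk register described above is, at bottom, a small data structure. The following sketch is illustrative only; the field names and the 1-to-5 scoring scale are assumptions, not the ITS template:

```python
from dataclasses import dataclass

# Illustrative risk-register entry carrying the elements leading guidance
# calls for: context, consequence, category, evaluation, and mitigation.
# Field names and the 1-5 scoring scale are assumptions for this sketch.

@dataclass
class Risk:
    title: str
    context: str          # conditions under which the risk could occur
    consequence: str      # impact if the risk occurs
    category: str         # defined category, e.g. "cost" or "schedule"
    probability: int      # 1 (unlikely) to 5 (near certain)
    impact: int           # 1 (negligible) to 5 (severe)
    mitigation: str = ""  # mitigation plan; empty means none documented
    status: str = "open"

    @property
    def exposure(self) -> int:
        """Simple probability-times-impact score used to prioritize."""
        return self.probability * self.impact

def prioritize(register):
    """Rank highest-exposure risks first; flag any lacking a mitigation plan."""
    ranked = sorted(register, key=lambda r: r.exposure, reverse=True)
    unmitigated = [r.title for r in ranked if not r.mitigation]
    return ranked, unmitigated

register = [
    Risk("Project funding", "Budget may be cut mid-year", "Work stops",
         "cost", probability=3, impact=5, mitigation="Phase the work"),
    Risk("Technical support resources", "Key staff may leave",
         "Schedule slips", "schedule", probability=4, impact=3),
]
ranked, unmitigated = prioritize(register)
```

Because each entry carries its context and consequence alongside the evaluation fields, a reviewer can see at a glance both what a risk means and whether a mitigation plan exists, which is precisely the information the selected investments' charters omitted.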
However, this guidance is not mandatory for any projects in the Library—including those that are managed by ITS.

Develop a risk management strategy: The risk management strategy addresses the specific actions and management approach used to apply and control the risk management program. It also includes identifying and involving relevant stakeholders in the risk management process. However, the Library did not establish a strategy for the FAME and Twitter Research Access investments.

Identify and document risks to include the context, conditions, and consequences of risk occurrence: Although each of the three selected investments identified risks, they did not always document the context and consequences of risk occurrence. For example, the FAME investment charter identified four risks—business requirements and process, technical support resources, project funding, and contract. However, the charter did not include any descriptions of these risks that would provide additional context and the consequences of risk occurrence. As another example, the Twitter Research Access investment charter included a risk regarding limited institutional support and prioritization, but did not provide the context needed for management to fully understand this risk. As a final example, the Momentum Upgrade and Migration acquisition plan included a risk that was described as “resources.” Although the description of the risk mitigation plan provides some context for this risk, the acquisition plan does not describe the consequences of this risk's occurrence.

Evaluate, categorize, and prioritize risks using defined risk categories and parameters: The three selected investments did not always evaluate, categorize, and prioritize risks using defined risk categories and parameters. Specifically, for the Twitter Research Access and FAME investments, the Library evaluated, categorized, and prioritized all of the identified risks; however, it did not do so using defined risk categories.
With respect to the Momentum Upgrade and Migration investment, the Library evaluated, categorized, and prioritized risks identified in the risk register, but did not do so for additional risks identified in its acquisition plan.

Develop risk mitigation plans in accordance with the risk management strategy: The Library of Congress did not develop risk mitigation plans for all risks identified for the selected investments, and the plans that were developed were not prepared in accordance with the investments' risk management strategies. Specifically, for the Twitter Research Access and Momentum Upgrade and Migration investments, although the Library developed risk mitigation plans for all identified risks, it did not develop them in accordance with risk management strategies for those investments. With regard to the FAME investment, the Library did not develop risk mitigation plans for any of the identified risks.

Monitor the status of each risk periodically and implement the risk mitigation plans as appropriate: Although the Library's IT investment management process requires quarterly reports on, among other things, risks identified in the investments' charters, the reports for the three investments did not fully address this information. Specifically, for the Twitter Research Access investment, although the July 2014 quarterly report included the identified risks and their associated mitigation plans, it did not fully document the context of risk occurrence. With respect to the Momentum Upgrade and Migration and FAME investments, the risk sections of the July 2014 quarterly reports did not identify any risks or risk mitigation plans. The incomplete implementation of risk management can be attributed to the lack of an organization-wide policy, as noted previously.
At the conclusion of our review, the Library acknowledged that it has not established an organization-wide policy for risk management and stated that it would establish a policy that requires all IT acquisitions valued at $100,000 or more to follow a Library-wide risk management policy and process. The Library stated that this new risk management policy and process will be established for fiscal year 2016. Until the Library establishes and implements organization-wide risk management policies and procedures, officials will not have assurance that risks facing IT investments are being adequately addressed. Requirements establish what the system is to do, how it is to do it, and how it is to interact with other systems. According to leading practices, effective requirements development includes eliciting stakeholder needs, developing customer requirements, and prioritizing customer requirements. In order to enable consistent implementation, processes for requirements development should be established in organizational policy. The Library of Congress has not established an organization-wide policy for requirements development. Although ITS has developed requirements management guidance that ITS investments are required to follow, this guidance does not apply to other service units’ investments. This guidance includes a template for documenting requirements, including those developed by customers. Additionally, while the Library implemented key requirements development practices for two of the three selected investments, it did not consistently do so for the third. For two of the investments—Twitter Research Access and Momentum Upgrade and Migration—the Library elicited stakeholder needs and developed prioritized customer requirements. For example, for the Twitter Research Access investment, Library Services convened a group of stakeholders, referred to as the Twitter Access Group, to develop functional requirements to support research access to the Twitter archive. 
Based on these efforts, that group developed customer functional requirements and prioritized them by placing them into three priority categories. However, the FAME investment did not fully implement key requirements development practices. Specifically, the Library elicited customer needs and developed customer requirements for only one of the three components of the FAME investment and did not prioritize them. In a written response, Integrated Support Services acknowledged that it had not elicited customer needs for all three components of the FAME investment, stating that it would do so as the components are implemented. It explained that this is appropriate because FAME uses commercial, off-the-shelf software with “out-of-the-box” functionality. However, according to leading practices, while requirements will evolve as more is learned about the selected product, some stakeholder needs should be elicited and developed prior to the selection of a commercial, off-the-shelf solution to ensure that the solution meets those needs. The incomplete implementation of requirements development practices can also be attributed to the lack of an organization-wide policy, as discussed previously. At the conclusion of our review, the Library acknowledged that it has not established an organization-wide policy for requirements development and stated that it would do so, consistent with the industry guidance cited in this report. The former acting CIO stated that the Library intends to finalize this policy by September 2015. Until the Library establishes and implements a consistent requirements development process across the organization, it will not have assurance that its IT investments will meet stakeholder and customer needs. Reliable cost estimates are critical for successfully delivering IT investments. Such estimates provide the basis for informed investment decision making, realistic budget formulation, meaningful progress measurement, and accountability for results. 
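One foundation of a comprehensive cost estimate is a work breakdown structure, in which leaf elements carry cost estimates and parent elements sum their children, making omitted or double-counted costs easier to spot. A minimal roll-up sketch, with hypothetical element names and figures:

```python
# Illustrative work-breakdown-structure roll-up. A WBS element is either
# a leaf with its own cost estimate or a parent whose cost is the sum of
# its children. Element names and dollar figures are hypothetical.

def wbs_cost(element):
    """Recursively roll up costs from leaf estimates to the project total."""
    children = element.get("children", [])
    if not children:
        return element["cost"]
    return sum(wbs_cost(child) for child in children)

project = {
    "name": "1.0 System upgrade",
    "children": [
        {"name": "1.1 Software licenses", "cost": 250_000},
        {"name": "1.2 Data migration", "children": [
            {"name": "1.2.1 Contractor labor", "cost": 400_000},
            {"name": "1.2.2 Government staff", "cost": 150_000},
        ]},
        {"name": "1.3 Testing", "cost": 100_000},
    ],
}
total = wbs_cost(project)  # 900_000
```

Because every dollar in the total traces to exactly one leaf, the structure itself guards against the omissions and double counting that an unstructured list of costs invites.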
GAO's Cost Estimating and Assessment Guide defines 12 leading practices related to four characteristics—comprehensive, well-documented, accurate, and credible—that are important to developing high-quality, reliable estimates. To institutionalize cost estimating best practices, organizations should establish policies that require cost estimates to demonstrate these four characteristics. The Library of Congress has not established an organization-wide policy for cost estimating. Instead, only one directorate within a service unit—ITS—has developed guidance in this area that ITS projects are largely required to follow. Moreover, ITS's cost estimating guidance does not substantially address the leading practices relating to developing high-quality, reliable estimates. Specifically, the guidance partially addresses the well-documented characteristic and minimally addresses the comprehensive, accurate, and credible characteristics. Table 3 shows the extent to which the guidance addressed the four characteristics.

The weaknesses in the Library's cost estimating policies and guidance are reflected in the estimates for the selected investments. To its credit, the Library developed cost estimates for all three selected investments. However, none of the estimates fully met the comprehensive characteristic, which is necessary for the estimate to fully address the other three characteristics. Specifically, one of the three selected investments' estimates—the estimate for the Momentum Upgrade and Migration investment—partially met the comprehensive characteristic, and the other two estimates minimally met the comprehensive characteristic. Regarding Momentum Upgrade and Migration, to its credit, the Library developed documentation that defined several components of the investment, including its assumptions about the investment, key risks, and a schedule. However, the cost documentation does not include enough detail to ensure that all costs are included.
For example, costs for government staff are not clearly described in the documentation. Additionally, the estimating documentation was not always structured in sufficient detail to ensure that costs are neither omitted nor double counted. Regarding the estimates for the Twitter Research Access and FAME investments, although the estimates included some costs, they did not include enough detail to confirm that all costs were included. For example, neither estimate included a work breakdown structure—which is the cornerstone of every project because it defines in detail the work necessary to accomplish a project's objectives. Additionally, the estimates did not include all cost-influencing assumptions. At the conclusion of our review, the Library acknowledged that it had not established an organization-wide policy for cost estimating and stated that it would establish, by December 2015, cost estimating guidance for all IT projects with an initial investment over $1 million. Until the Library establishes and implements an effective cost estimating process, there is increased risk that cost estimates may not be reliable—thereby impairing its ability to make well-informed funding decisions and affecting how it allocates resources across competing investments.

The success of an IT investment depends in part on having an integrated and reliable master schedule that defines when the investment's set of work activities and milestone events are to occur, how long they will take, and how they are related to one another. Among other things, a reliable schedule provides a road map for systematic execution of an IT investment and the means by which to gauge progress, identify and address potential problems, and promote accountability. GAO's Schedule Assessment Guide defines 10 leading practices related to four characteristics—comprehensive, well-constructed, credible, and controlled—that are vital to having an integrated and reliable master schedule.
To institutionalize sound scheduling practices, organizations should establish policies that require schedules to demonstrate these four characteristics. The Library of Congress has not established an organization-wide policy for scheduling. As with cost estimating, ITS has developed some guidance in this area that ITS projects are largely required to follow. However, the guidance does not substantially address the leading practices related to developing integrated and reliable master schedules. Specifically, ITS's guidance minimally addresses the credible and controlled characteristics and does not address the comprehensive and well-constructed characteristics. Table 4 shows the extent to which ITS's scheduling guidance addresses the four characteristics.

The weaknesses in the Library's scheduling policies and guidance are reflected in the schedules for the three selected investments. One investment—Twitter Research Access—did not develop a schedule, and the other investments' schedules did not substantially address the well-constructed characteristic, which relates to the foundational practices for a high-quality, reliable schedule. The FAME investment developed two schedules—each of which relates to one of the investment's projects—that, considered together, partially addressed the well-constructed characteristic. Specifically, the activities in each schedule were largely sequenced with straightforward logic. However, the schedules did not include valid critical paths, and their float values do not always accurately represent schedule flexibility. At the conclusion of our review in January 2015, the Director of Integrated Support Services stated that the schedules we reviewed were immature because they were created when project management activities were just beginning. He also provided two updated schedules, stating that they were more robust and addressed the weaknesses we identified.
We reviewed one of the two schedules and found that the schedule partially addressed the well-constructed characteristic. Similar to the previous schedules, it did not include a valid critical path and the float values did not always accurately represent schedule flexibility. In contrast to the other schedules, however, the updated schedule was not sequenced with straightforward logic. For example, 25 (20.2 percent) of the remaining 119 activities did not have successor activities. Not linking related activities can cause problems because changes to the durations of these activities will not accurately change the dates for related activities. Additionally, 40 (32.3 percent) of the remaining activities were constrained by “finish no earlier than” dates, which is significant because it means that these activities would not be allowed to finish earlier, even if their respective predecessor activities have been completed. The Momentum Upgrade and Migration investment developed a schedule that minimally addressed the well-constructed characteristic. For example, 400 (33.9 percent) of the remaining 1,129 activities did not have successor activities. Additionally, the schedule did not have a valid critical path, and its float values did not always accurately represent schedule flexibility. At the conclusion of our review, the Library acknowledged that it had not established an organization-wide policy for scheduling and stated that it will develop a schedule management process based on GAO’s and other best practices and will establish a Library-wide policy that requires all investments to follow the new schedule management process. The Library stated that the new policy and process will be established for fiscal year 2016. However, the Library has yet to establish a date for completing the effort. Until the Library establishes and implements a process for effectively managing its schedules, there is increased risk of schedule slippages and cost overruns. 
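The float and critical-path concepts at issue here come from a standard forward and backward pass over the activity network. The following generic critical-path-method sketch uses hypothetical activities, not the Library's schedules; it also shows why an activity with no successors is pinned to the project end date, which is how missing successor links inflate float values and hide the true critical path:

```python
# Generic critical-path-method sketch with hypothetical activities.
# Each activity has a duration and a list of predecessors.

activities = {
    "A": {"dur": 3, "preds": []},
    "B": {"dur": 5, "preds": ["A"]},
    "C": {"dur": 2, "preds": ["A"]},
    "D": {"dur": 4, "preds": ["B", "C"]},
}

# Forward pass: earliest start (ES) and earliest finish (EF).
es, ef = {}, {}
for name in activities:  # insertion order here is already topological
    a = activities[name]
    es[name] = max((ef[p] for p in a["preds"]), default=0)
    ef[name] = es[name] + a["dur"]

project_end = max(ef.values())

# Backward pass: latest finish (LF) and latest start (LS). An activity
# with no successors defaults to the project end date, which is why a
# missing successor link gives an activity more apparent float than it
# really has.
succs = {n: [m for m, a in activities.items() if n in a["preds"]]
         for n in activities}
lf, ls = {}, {}
for name in reversed(list(activities)):
    lf[name] = min((ls[s] for s in succs[name]), default=project_end)
    ls[name] = lf[name] - activities[name]["dur"]

# Total float is the slack between earliest and latest start; the
# critical path is the chain of zero-float activities.
total_float = {n: ls[n] - es[n] for n in activities}
critical_path = [n for n in activities if total_float[n] == 0]
```

In this example the critical path is A, B, D, and activity C carries 3 days of float; changing C's duration by up to 3 days leaves the finish date alone, while any change to A, B, or D moves it directly. A "finish no earlier than" constraint, by contrast, fixes an activity's finish date regardless of how its predecessors actually progress, which is why heavy use of such constraints undermines a schedule's ability to forecast.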
Additionally, it will be difficult for the Library to obtain meaningful measurement and oversight of investment status and progress, as well as accountability for results.

Protecting its data and information systems is a key objective for any federal agency. This is essential not only to defend an agency's operations against disruption by cyber attacks, but also to protect sensitive information entrusted to it by members of the public. To protect their systems and information, agencies should establish information security and privacy programs and effectively implement management and technical security and privacy controls. Toward this end, NIST has developed guidance to assist federal agencies in developing and implementing information security and privacy programs. Consistent with NIST guidance, the Library established security and privacy programs by delineating roles and responsibilities and developing policies and procedures. However, it has not fully implemented management controls to ensure the protection of its systems and information. Specifically, while the Library did establish and implement a process for handling security incidents, it did not (1) have a complete inventory of its systems for purposes of monitoring security controls, (2) fully outline security controls in system security plans, (3) conduct complete security testing of its systems, (4) develop and complete in a timely fashion plans for remediating identified security weaknesses, (5) establish contingency plans for its systems, (6) fully document security training policies or ensure that all users had taken required training, (7) include security-related requirements in all applicable contracts for IT services, or (8) fully assess risks to the privacy of personal information in its systems. Further, we identified numerous weaknesses in technical security controls at the Library related to preventing unauthorized access to and securely configuring systems.
Until it addresses these weaknesses, the Library's systems and the information they contain will be at increased risk of compromise.

NIST guidance calls for agencies to develop, document, and implement programs for securing information systems, and protecting the privacy of personal information in those systems. With respect to information security, such a program should include risk-based policies and procedures that cost-effectively reduce information security risks to an acceptable level and ensure that information security is addressed throughout an information system's life cycle. Additionally, information security programs should include a process for planning, implementing, evaluating, and documenting remedial actions to address any deficiencies in information security policies, procedures, and practices. Regarding privacy, according to NIST guidance, organizations should implement a broad set of controls to ensure the appropriate use and protection of personal information maintained by the organization. This includes establishing an organization-wide program overseen by a chief privacy officer, conducting privacy impact assessments to assess the privacy risks associated with collecting and using personal information, and providing an organized and effective response to privacy incidents.

The Library took steps to establish protections for its systems as part of its information security program. It assigned overall responsibility for securing the agency's information and systems to appropriate officials, including, among others, the Librarian, who is responsible for ensuring that the Library's security program is being implemented; the Deputy Librarian, who is responsible for enforcing the Library's security program; the CIO, who is responsible for overseeing the Library's program; and the Chief Information Security Officer (CISO), who is to act as the single point of contact for all information security activities.
Additionally, Library business owners are responsible for ensuring that the systems they are responsible for are developed in accordance with, and comply with, Library information security policies. The Library also documented information security policies and procedures to safeguard its information and systems and to reduce the risk and minimize the effects of security incidents. For example, the Information Technology Security Policy of the Library of Congress establishes the agency's overall information security program and sets the ground rules under which it is to operate and safeguard its information and information systems to reduce the risk and minimize the effect of security incidents. In addition, the Library's General Information Technology Security Directive identifies specific IT control requirements for all information systems, including measures and controls designed to respond to any incidents that occur and to recover information resources in the event of a disaster. The Library recently updated this directive to align with the latest revision to NIST's guidelines for building effective security plans, which according to NIST outlines expanded security and privacy controls and provides a more holistic approach to information security. The Library has also taken steps to protect the privacy of data processed by its systems. It designated the Library's General Counsel as the agency's Chief Privacy Officer, a role that includes overall responsibility for managing the protection of personally identifiable information (PII) maintained by the Library's systems. The agency also documented a policy for protecting PII and responding to reports of unauthorized access or improper disclosure of PII.
Specifically, the Library regulation Protection and Disclosure of Personally Identifiable Information establishes, among other things, the following requirements for the Chief Privacy Officer and service units:

Incident handling: Any known or suspected unauthorized access to or improper disclosure of PII must be reported by the impacted service unit immediately to both the Chief Privacy Officer and the IG, who are to coordinate a response to minimize any harm. The Library has also developed guidance for responding to privacy incidents relating to information systems.

PII training: The Chief Privacy Officer and service units are responsible for the provision of PII training to Library employees.

Assessment of privacy risks: Service units are required to identify PII and the purposes for which it is used, assess the sensitivity of the information, and determine appropriate levels of protection. Separately, the Library's General Information Technology Security Directive requires system owners to conduct privacy impact assessments for all systems in order to mitigate privacy risks.

Oversight of privacy activities: The Chief Privacy Officer has overall responsibility for all of the Library's privacy information activities, including the assurance of privacy policy compliance.

Although the Library established security and privacy programs for information systems, it did not fully implement management controls associated with these processes. Even strong controls may not block all intrusions and misuse, but organizations can reduce the risks associated with such events if they take steps to promptly report and respond to them before significant damage is done. In addition, analyzing security incidents allows organizations to gain a better understanding of the threats to their information and the costs of their security-related problems. Such analyses can pinpoint vulnerabilities that need to be eliminated so that they will not be exploited again.
Incident reports can be used to provide valuable input for risk assessments, help in prioritizing security improvement efforts, and illustrate risks and related trends for senior management. NIST guidance recommends that agency information security programs include, among other things, procedures for reporting and responding to security incidents. The Library has established and implemented an incident handling process. Specifically, it developed procedures for reporting and responding to security incidents. Additionally, for the 22 selected incidents we reviewed, the Library followed these procedures, including documenting, analyzing, halting the spread of or limiting the damage caused by, and recovering from the incidents, as appropriate. As a result, the Library has increased assurance that it will be able to promptly report and respond to intrusions and misuse before significant damage is done. According to NIST guidelines, agencies should develop and maintain an inventory of their information systems. A complete and accurate inventory of major information systems is a key element of managing the agency's IT resources, including the security of those resources. The inventory can be used to track agency systems for purposes such as periodic security testing and evaluation, patch management, contingency planning, and identifying system interconnections. Further, ITS policy requires the CISO to develop an inventory that includes all general support systems and major applications. However, the Library's inventory of its information systems was not complete and accurate. In particular, an inventory maintained by the CISO did not include systems identified in inventories maintained by Library Services. For example, the list maintained by the CISO had 30 Library Services systems, but the list provided to us by Library Services in May 2014 identified 46 systems. 
After we raised the discrepancy with Library Services, officials from that service unit provided us with a revised list of 70 systems. Moreover, none of the lists maintained by the CISO or Library Services included the networks used by the overseas offices, and the Chief of the Library Services Automation and Planning Office acknowledged that these systems have not been certified and accredited. The CISO did not know about some of the missing systems until we brought them to his attention, and he noted that there were a few systems that needed to be included in the system list. He added that in fiscal year 2015 the Library plans to implement a new system for managing its security program, which will include scanning the Library's network to identify its systems. In addition, the CISO stated that many of the systems not included in the various inventories are legacy systems that have been exempted from key information security management processes. Specifically, according to Library policy, all low-impact operational IT systems implemented prior to August 20, 2004, that have not undergone a major change (e.g., a software or hardware upgrade) are not required to undergo a triennial certification and accreditation. However, without a complete inventory that includes all legacy systems, there is increased risk that the Library will not be able to track legacy systems that undergo a major change and ensure that appropriate controls are put in place to protect them. Further, the CISO stated that the Library is taking steps to mitigate risks at its overseas offices. For example, according to the CISO, the Library is performing vulnerability scans of the offices, upgrading the operating system of their workstations, and routing their e-mails through the Library's firewalls. 
Although these steps should improve the security of these offices, the Library will not be able to identify all risks associated with the overseas offices—and thus ensure that controls have been implemented to appropriately mitigate those risks—without certifying and accrediting the networks for those sites. Until the Library has a complete and accurate inventory of its systems, it cannot ensure that the appropriate security controls have been implemented to protect these systems. The objective of system security planning is to improve the protection of IT resources. A system security plan is to provide a complete and up-to-date overview of the system's security requirements and describe the controls that are in place—or planned—to meet those requirements. According to NIST guidelines, system security plans should include descriptions of how security controls are implemented and, for controls recommended by NIST but not implemented, a justification for why the controls were deemed not necessary for the system. Further, to the extent that a system relies on controls established for another system, known as inherited or common controls, NIST guidelines call for describing those controls, noting that organizations should assess how effective they are for the new system being planned and identify compensating or supplementary controls as needed. Library policy does not fully address NIST guidelines. Specifically, although Library policy calls for system security plans to describe or reference security controls that fulfill the security requirements of the system, it does not explicitly require plans to describe common controls. This weakness in Library policy was reflected in the system security plans for most of the nine selected systems that we reviewed: only two of the nine plans—the plans for the Application Hosting Environment and the OSEP Physical Security Network—described all of the common controls on which those systems relied. 
By contrast, the security plan for the Enterprise Infrastructure General Support System did not always describe controls that were inherited; instead, the plan said that the controls were inherited from the information security program without identifying what system was responsible for implementing them. Additionally, the security plan for eCO did not always describe controls inherited from the Library of Congress Data Network and the Application Hosting Environment. In addition, the plan for one system did not always include descriptions of how security controls were implemented. Specifically, for PICS/NIOSS, the system security plan identified controls that were implemented but did not include associated descriptions, and the CISO acknowledged that the description of these controls in the plan was not acceptable. (As previously mentioned, the nine systems we reviewed were the ITS Library of Congress Data Network, OSEP Physical Security Network, CRS Enterprise Infrastructure General Support System, ITS Application Hosting Environment, ITS Library of Congress Office Automation System, Copyright Electronic Copyright Office (eCO), Library Services System Management Information Network II (SYMIN II), NLS Production Information Control System/NLS Integrated Operations Support System (PICS/NIOSS), and Office of the Chief Financial Officer Momentum.) More broadly, the CISO acknowledged the weaknesses with the plans and stated that, until recently, he did not have the resources needed to audit these plans. He added that the Library recently hired an IT specialist with previous experience reviewing certification and accreditation packages, including security plans, and that this specialist has started to audit these packages. 
Without complete system security plans, it will be difficult for agency officials to make fully informed judgments regarding the risks involved in operating those systems, increasing the risk that the confidentiality, integrity, or availability of the systems could be compromised. A key element of an information security program is regular testing and evaluation to ensure that systems are in compliance with policies and that the policies and controls are both appropriate and effective. Such testing demonstrates management's commitment to the security program, reminds employees of their roles and responsibilities, and identifies areas of noncompliance and ineffectiveness requiring remediation. NIST guidance emphasizes that agencies should regularly test the implementation of security controls to determine the extent to which they are implemented correctly, are operating as intended, and meet security requirements. NIST also notes that security testing should assess both the controls implemented by a system and those inherited from other systems. The Library has taken steps to establish a policy on security testing that is consistent with NIST guidance, but has not finalized guidance on how the policy is to be implemented. Until recently, Library policy required that security testing of all controls for a particular system be conducted as part of the system's triennial certification and accreditation process. However, in November 2014, the Library revised its policy to require near-real-time testing—an approach commonly referred to as continuous monitoring. Specifically, the Library now requires service units to assess the risk associated with each security control through a continuous monitoring program and perform testing as frequently as needed in order to appropriately mitigate the risks. According to the CISO, this policy is to be implemented by service units when the certification and accreditation of their systems are due to be renewed. 
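The risk-based testing cadence that the revised policy describes can be sketched as follows. This is a minimal illustration, not the Library's actual process: the testing intervals, control identifiers, and dates are hypothetical assumptions.

```python
from datetime import date, timedelta

# Hypothetical testing intervals by assessed risk level; actual intervals
# would be set by each service unit's continuous monitoring program.
TEST_INTERVAL_DAYS = {"high": 30, "moderate": 90, "low": 365}

def controls_due_for_testing(controls, today):
    """Return the IDs of controls whose last test is older than the
    interval implied by their assessed risk level."""
    due = []
    for ctrl in controls:
        interval = timedelta(days=TEST_INTERVAL_DAYS[ctrl["risk"]])
        if today - ctrl["last_tested"] >= interval:
            due.append(ctrl["id"])
    return due

controls = [
    {"id": "AC-2", "risk": "high", "last_tested": date(2014, 11, 1)},
    {"id": "CP-9", "risk": "low", "last_tested": date(2014, 6, 1)},
    {"id": "IA-2", "risk": "moderate", "last_tested": date(2014, 9, 15)},
]
print(controls_due_for_testing(controls, today=date(2015, 1, 15)))
# -> ['AC-2', 'IA-2']
```

The point of the sketch is that, under continuous monitoring, a high-risk control is tested far more often than a low-risk one, rather than every control waiting for a triennial cycle.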
Although the Library has established policy for continuous monitoring, its guidance on how service units are to carry out this policy has not been finalized. Additionally, the Library did not always follow its policy. In particular, each of the nine selected systems inherited security controls relating to the Library's information security program, but, according to the CISO, the Library has not assessed these inherited controls to ensure that they have been appropriately implemented. The CISO acknowledged that these controls should be tested periodically and stated that the Library plans to do so as part of the implementation of a new system for managing its information security program, which is to occur in fiscal year 2015. Additionally, the Library's security testing did not always identify control weaknesses. For example:

- Although all nine selected systems' most recent security testing documentation reported that appropriate background investigations had been performed, we identified seven individuals with elevated privileges to three systems—Library of Congress Data Network, PICS/NIOSS, and OSEP Physical Security Network—for whom the Library did not have a record of a background investigation.
- Although seven systems' most recent security evaluations reported that privacy impact assessments had been developed as appropriate, we found that four of these systems—Library of Congress Office Automation System, eCO, SYMIN II, and Momentum—had never completed such an assessment.

Further, the Library did not complete security assessments in a timely manner for three systems. As of January 2015, three systems—SYMIN II, Library of Congress Office Automation System, and Library of Congress Data Network—had not completed security assessments consistent with Library policy, which requires such assessments to be performed at least every 3 years. 
With respect to SYMIN II, the CISO stated that ITS opened a remedial action plan that tasks Library Services with completing this assessment. Regarding the Library of Congress Office Automation System and Library of Congress Data Network, the ITS Assistant Director for Operations signed waivers for the systems that extended the deadline for completing the security assessments to October 2015 and July 2015, respectively. This was because the contractor to be used to perform the testing was not available, as it was performing testing on the Library's financial hosting environment. Although, to its credit, the Library analyzed and accepted the risk of not performing testing as scheduled, such lapses between testing can significantly increase the risk that exploitable weaknesses will not be identified and addressed in a timely manner. Without comprehensive and effective testing, the Library does not have reasonable assurance that its security controls for the selected systems are working as intended, increasing the risk that attackers could compromise the confidentiality, integrity, or availability of the systems. When a security weakness is identified as part of security testing, agencies should develop a remedial action plan, also known as a plan of action and milestones (POA&M), to address the issue. Such a plan assists agencies in identifying, assessing, prioritizing, and monitoring progress in correcting security weaknesses that are found in information systems. NIST guidance emphasizes the use of such plans in order to document the organization's planned actions to correct identified weaknesses. The Library has established a policy for developing and monitoring remedial action plans. According to Library policy, when weaknesses are discovered during a security assessment, a POA&M must be produced that includes a schedule for implementing any mitigation. However, the Library did not always follow its policy. 
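The tracking that such a policy implies, flagging open POA&M items whose scheduled completion dates have passed, can be sketched as below. The systems, weaknesses, and dates are hypothetical; this is not the Library's actual POA&M data or tooling.

```python
from datetime import date

# Hypothetical POA&M records; each item carries the schedule that
# Library policy requires for implementing its mitigation.
poams = [
    {"system": "System A", "weakness": "Missing audit logging",
     "scheduled_completion": date(2014, 9, 30), "status": "open"},
    {"system": "System B", "weakness": "Unpatched web server",
     "scheduled_completion": date(2015, 3, 31), "status": "open"},
    {"system": "System A", "weakness": "Weak password policy",
     "scheduled_completion": date(2014, 1, 15), "status": "closed"},
]

def delayed_items(items, as_of):
    """Open POA&M items whose scheduled completion date has passed."""
    return [i for i in items
            if i["status"] == "open" and i["scheduled_completion"] < as_of]

for item in delayed_items(poams, as_of=date(2014, 12, 1)):
    overdue = (date(2014, 12, 1) - item["scheduled_completion"]).days
    print(f'{item["system"]}: {item["weakness"]} ({overdue} days overdue)')
```

Run against real POA&M data, a report like this would surface items such as the multi-year-old entries described below, which remained open long past their scheduled dates.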
Specifically, eight of the nine systems that we reviewed had POA&Ms that were delayed and, in many cases, POA&M items were over a year past their expected completion date. For one system—the OSEP Physical Security Network—although OSEP's 14 open POA&M items from its security testing in September 2013 were to be completed by September 2014, according to the CISO, OSEP has not reported any updates for these items since they were opened in September 2013. Additionally, as of December 2014, of the 229 items included in the POA&Ms for the other eight selected systems, 49 had a status of "delayed." Of particular concern are the 28 POA&M items for PICS/NIOSS that were identified in 2011 and have yet to be completed. Table 5 shows the number of delayed POA&M items for the other eight selected systems. The CISO acknowledged that POA&M closure has been a known issue for some time, noting that some items have been open for multiple years. As previously mentioned, Library business owners are responsible for ensuring that their systems are in compliance with Library information security policies. At the conclusion of our review in March 2015, the interim CIO stated that she received briefings on the status of POA&Ms in February and March 2015 and will meet with the heads of service units to review older POA&M items and discuss their resolution. Until the weaknesses in the Library's remediation of vulnerabilities are resolved, they will continue to compromise the agency's ability to track, assess, and accurately report the status of its information security program. Under NIST guidance, after testing is completed, organizations are to compile an authorization package—composed of the security plan, testing report, and POA&M items—for the system's authorizing official to review. The authorizing official is a senior official or executive with the authority to formally assume responsibility for operating an information system at an acceptable level of risk. 
According to NIST guidance, if the authorizing official, after reviewing the authorization package, deems that the risks (e.g., unaddressed vulnerabilities) are acceptable, an authorization to operate is issued for the information system. The information system is authorized to operate for a specified time period in accordance with the terms and conditions established by the authorizing official. Additionally, NIST guidance states that authorizing officials can also deny authorization to operate for an information system or, if the system is already operational, halt operations if unacceptable risks exist. To its credit, the Library's policy is consistent with NIST guidance; specifically, it requires authorization packages to be created prior to receiving authorization to operate. Until recently, Library policy required that systems be reauthorized every 3 years as part of the certification and accreditation process. In November 2014, as part of the Library's adoption of continuous monitoring, the Library revised its policy to require that, after the initial authorization to operate is in place, systems be reauthorized only in the event of a major change (e.g., a software or hardware upgrade). According to draft Library guidance on implementing continuous monitoring, in place of the 3-year reauthorization cycle, authorizing officials will review the reported security status on an ongoing basis to form a continuous authorization decision. Additionally, Library policy (Library of Congress, Information Technology Security Directive 01) states that, until a system has authorization to operate, it cannot be deployed as an operational system. However, the Library did not consistently implement its policy. As of January 2015, four systems—SYMIN II, eCO, Library of Congress Office Automation System, and Library of Congress Data Network—were operating without a current authorization. 
With respect to SYMIN II, it did not have this authorization because, as previously mentioned, it had not completed its security testing. Regarding eCO, the Deputy Director of the Copyright Office’s Technology Office signed a waiver moving the date for the authorization package to be completed from May 2014 to May 2015. The waiver cited multiple upgrades that were to occur between May 2014 and May 2015. Regarding the Library of Congress Office Automation System and the Library of Congress Data Network, the ITS Assistant Director for Operations signed waivers for the systems that allowed them to postpone their authorization to operate to October 2015 and July 2015, respectively. The waivers cited the need to complete testing, which was delayed because the contractor used to perform that testing was engaged in security testing for another system. Additionally, for two systems—the OSEP Physical Security Network and the Application Hosting Environment—the Library did not ensure that authorizations to operate were signed in a timely manner. With respect to the OSEP Physical Security Network, although the system has been operational since 2003, the Library did not authorize the system to operate until February 2015. This is particularly concerning because the Library has classified this system as high impact—that is, it has determined that the loss of the system’s confidentiality, integrity, or availability could be expected to have a catastrophic effect on organizational operations, organizational assets, or individuals. In a written response, the Library stated that, although OSEP completed an authorization package for this system in September 2013, the authorization was not completed until February 2015 because of a lack of program oversight. Regarding the Application Hosting Environment, although ITS signed the authorization to operate for the system in October 2014, this was 4 months later than allowed by Library policy. 
During this time, the Application Hosting Environment continued to operate. Similar to eCO, the Director of ITS signed a 4-month waiver that extended the authorization to operate, citing the need for additional time to finalize the authorization package. The CISO acknowledged that these systems did not have authority to operate, but noted that, instead of just extending the authorization, the Library requires service units to sign a waiver reflecting a risk-based decision to continue operating the systems without authorization. Although extending the time required to obtain authorization to operate may occasionally be valid, the Library’s persistent use of these waivers increases the probability that risks, such as unaddressed vulnerabilities, are not being communicated to management. This concern is heightened by the sometimes extended length of time associated with these delays. The CISO also added that these weaknesses should not recur once the Library implements its continuous monitoring program, because service units will not need to reauthorize systems at the end of each certification and accreditation cycle. Instead, the Library’s continuous monitoring program will allow authorizing officials to review the reported security status on an ongoing basis to make continuous authorization decisions. If the Library fully establishes and implements its continuous monitoring program, it will be better positioned to ensure that continuous authorization decisions are fully informed. However, as previously mentioned, although the Library has established policy for continuous monitoring, its guidance on how service units are to carry out this policy has not been finalized. 
Until its approach to continuous monitoring is fully implemented, the Library will not have assurance that appropriate officials have been informed of system risks and that these officials have either accepted these risks and assumed responsibility for them, or halted system operations until the risks are acceptable. Contingency planning controls are intended to provide assurance that, when unexpected events occur, essential operations can continue without interruption or can be promptly resumed and that sensitive data are protected. Losing the capability to process, retrieve, and protect electronically maintained information can significantly affect an entity’s ability to accomplish its mission. If contingency planning controls are inadequate, even relatively minor interruptions can result in lost or incorrectly processed data, which can cause financial losses, expensive recovery efforts, and inaccurate or incomplete information. According to NIST guidelines, agencies should develop contingency plans for their information systems that, among other things, provide established procedures for assessment and recovery of systems following a system disruption. The Library has developed a policy on contingency planning that requires system owners to ensure, for each IT system under their purview, the development and maintenance of a contingency plan. These plans should include, among other things, procedures for ensuring that the systems are successfully recovered. However, only three of the nine selected systems—Enterprise Infrastructure General Support System, eCO, and Momentum—had a contingency plan that addressed NIST guidance and Library policy. 
For three of the systems—the Application Hosting Environment, Library of Congress Data Network, and the Library of Congress Office Automation System—their security plans indicated that contingency planning was to be addressed in the Library’s Information Technology Continuity of Operations Plan; however, this plan does not include specific procedures for recovering these systems. Additionally, three systems—OSEP Physical Security Network, SYMIN II, and PICS/NIOSS—have not established contingency plans. The CISO acknowledged that these systems did not have contingency plans and stated that he will open POA&Ms for them to be created. Until it develops contingency plans for its key information systems, the Library may have delays in recovering systems or may be unable to recover systems entirely in the event of a large disaster. According to NIST guidelines, agencies should provide basic security awareness training to all information system users as part of initial training for new users, and regular refresher training to all users on an agency-defined basis. This training should inform personnel, including contractors and other users of information systems supporting the operations and assets of an agency, of information security risks associated with their activities and their roles and responsibilities to effectively implement the practices that are designed to reduce these risks. In addition, NIST guidelines call for organizations to administer basic privacy training on a regular basis. The Library has established policies and procedures that generally address NIST guidelines on security awareness training. Specifically, Library policy requires all personnel, including staff, contractors, and volunteers with access to Library of Congress IT systems, to complete the IT security awareness training on an annual basis. 
The Chief Privacy Officer and CISO told us that privacy is also covered in the Library's annual security awareness training, and the CISO provided a copy of the fiscal year 2014 training, which addresses employee responsibilities for handling PII. However, the Library did not ensure that all required users completed security awareness and privacy training. Of the personnel tracked in its database of record, the Library estimated that 4,131 of 4,145 users (99.7 percent) completed the required awareness training in fiscal year 2014. However, we identified 1,345 user accounts—204 from OSEP, 42 from CRS, and 1,099 from the rest of the Library—with access to Library IT systems that were not tracked in the database. Library officials were unable to provide comprehensive information on how many of the additional personnel had completed the required awareness training. Regarding the 204 accounts in OSEP, the Library reviewed 47 accounts that we identified. Of those, the Library found 8 in its database of record, and only 3 of those individuals reportedly took the training in fiscal year 2014. According to the Director of Workforce Performance and Development, many of these accounts are Capitol Police personnel who have access to OSEP's Physical Security Network, but who are not tracked in the Library's database of record. The CISO stated that the Library does not provide security awareness training to these users; instead, it relies on the Capitol Police to provide adequate training. Additionally, OSEP stated that, although the Capitol Police provide training to their staff, the Library does not ensure that all Capitol Police users of Library systems have completed the training. Of the 42 accounts from CRS, the Library found 28 names in its database of record—only 12 of which reportedly took the training in 2014. With respect to the 1,099 accounts for the rest of the Library, the Library reviewed 103 accounts. 
Of those, the Library found 55 in the database of record, and only 14 of those individuals reportedly took the training in 2014. The Library cited multiple reasons for the differences between the accounts we identified and the training database of record. According to the Director of the Office of Workforce Performance and Development, some of the accounts that we identified appeared to be associated with personnel who no longer work for the Library. Additionally, that official stated that, in some cases, the user name for an individual was different in the training database of record and the database used to authorize access to Library systems. At the conclusion of our review, the Assistant Director for Human Resource Operations stated that the Human Resources Services, ITS, and other appropriate offices will assemble a complete and accurate list of staff, contractors, and others with access to Library networks in fiscal year 2014. That official added that Human Resources Services will also implement a process for obtaining a complete and accurate listing of staff for the next cycle of training. Until the Library ensures that all personnel with access to its network take security awareness training, it will have less assurance that they have a basic awareness of information security issues and agency security and privacy policies and procedures. The Library relies on the services of contractors to operate and secure its computer systems on its behalf. While contractor personnel who operate systems and provide services to federal agencies can provide significant benefits, they, as with government employees, can also introduce risks to agency information and systems, such as the unauthorized access, use, disclosure, and modification of federal data. 
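The reconciliation described above, comparing accounts that have system access against the training database of record, is essentially a set difference. The sketch below uses hypothetical account names; as the Library's experience suggests, a real reconciliation would first have to normalize user names that differ between databases.

```python
# Hypothetical account lists; the Library's databases are not assumed
# to expose this interface.
system_accounts = {"asmith", "bjones", "cdoe", "uscp-officer1", "dlee"}
training_records = {"asmith", "cdoe", "dlee", "eturner"}

# Accounts with system access but no record of completing the required
# awareness training; these are the accounts to investigate.
untracked = sorted(system_accounts - training_records)
print(untracked)  # -> ['bjones', 'uscp-officer1']
```

In practice the comparison is complicated by the issues the report notes: departed personnel whose accounts were never removed, and the same individual appearing under different user names in the two databases.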
In order to ensure that contractors meet information security and privacy requirements, NIST recommends that organizations include information security and privacy requirements in their contracts for IT systems and services. Toward this end, Library policy calls for all IT contracts to require contractors to comply with Library security and privacy requirements. Additionally, the Library has developed standard sections addressing NIST guidelines that are required in all IT solicitations. However, contracts for eight of the nine selected systems we reviewed did not fully address Library security and privacy requirements for IT system and services contracts. Specifically, only one contract—for PICS/NIOSS—included the standard sections that Library policy requires. In a written response, the Library agreed that contracts for eight systems did not address Library requirements and explained that this occurred because internal reviewers did not consistently identify the missing information. The Library also has made draft revisions to its contractual security requirements because officials determined that the prior requirements were overly broad. These revisions are consistent with NIST guidelines. For example, the standard sections require the contractor’s work to be conducted in accordance with the latest version of NIST’s information security and privacy controls. The Library told us that the Office of General Counsel is to review these requirements for promulgation in fiscal year 2015. However, the Library has yet to establish a date for finalizing these requirements. In the interim, the Library stated that service units are to review all current contracts for IT systems and services to ensure that the current requirements have been incorporated. 
The Library added that the Office of Contracts and Grants Management, with support from the Office of General Counsel, will continue to review statements of work for IT systems and services and identify any potential gaps in IT security requirements prior to contract award. Until the Library finalizes its standard contract sections for information security and privacy and ensures that contracts for IT systems and services include these provisions, it increases the risk that meeting enterprise-wide security requirements could require costly contract modifications or that these requirements will not be implemented according to Library policy. According to NIST guidelines, agencies should assess privacy risks of an information system when developing a privacy impact assessment. These risk assessments are intended to help program managers and system owners identify privacy risks and techniques to reduce those risks. Library policy is consistent with NIST guidance. Specifically, it calls for privacy impact assessments to be performed for all Library systems. However, the Library conducted privacy impact assessments for only two of the nine selected systems we reviewed—the Enterprise Infrastructure General Support System and PICS/NIOSS. As previously mentioned, the security tests for four systems—Library of Congress Office Automation System, eCO, SYMIN II, and Momentum—reported that privacy impact assessments had been developed as appropriate; however, when asked for copies of these assessments, Library officials responsible for these systems stated that privacy impact assessments had not been performed. According to the CISO, POA&M items have been opened for privacy impact assessments to be performed on these systems. Security testing for the OSEP Physical Security Network stated that the system did not have a privacy impact assessment, and this has been an open POA&M item since September 2013. 
Security testing for the Application Hosting Environment determined that the system did not have a privacy impact assessment. In describing the risk associated with this weakness, the test report stated that, although there may be systems hosted on the Application Hosting Environment that collect, process, or store PII, these systems have their own privacy impact assessments. However, as previously noted, eCO, which is hosted on the Application Hosting Environment, did not have a privacy impact assessment. The testing report also recommended that the Library conduct a privacy impact assessment to verify that the Application Hosting Environment does not collect, process, or store PII independent of the systems that it hosts. Regarding the Library of Congress Data Network, its security plan states that this control is not applicable because the system does not collect, maintain, or disseminate information—it only transfers data from one place to another. However, NIST guidance calls for privacy impact assessments to be performed to assess risks resulting not only from the collection, storing, or use of PII, but also the transmission of such data. One reason for the inconsistent performance of privacy impact assessments is the lack of oversight to ensure compliance with Library-wide privacy policy and requirements. According to the Library's General Counsel, who also serves as the Chief Privacy Officer, the Office of General Counsel does not review the Library's privacy program because it is not required to do so. Rather, that official told us that she relies on the service units to carry out their responsibilities. However, according to Library policy, the Chief Privacy Officer has overall responsibility for all of the Library's privacy information activities, including ensuring PII policy compliance. Additionally, the policy states that the Chief Privacy Officer shall have overall responsibilities for managing the protection of PII maintained in the Library's systems and files.
Until the Chief Privacy Officer establishes and implements a process for reviewing the Library's privacy program, including ensuring that privacy impact assessments have been conducted for all IT systems, PII collected by the Library will be at increased risk of compromise. A basic management objective for any agency is to protect the resources that support its critical operations and assets from unauthorized access. An agency can accomplish this by designing and implementing controls that are intended to prevent, limit, and detect unauthorized access to computer resources (e.g., data, programs, equipment, and facilities), and securely configure information systems, thereby protecting them from unauthorized disclosure, modification, and loss. Controls relating to these areas include policies, procedures, and protections regarding authorization, identification and authentication, cryptography, background investigations, and environmental safety. Weaknesses in these areas increase the risk of unauthorized use, disclosure, modification, or loss of sensitive information and information systems supporting the Library's mission. The Library did not effectively implement or securely configure key security tools and devices on the nine selected systems to sufficiently protect users and information from threats to confidentiality, integrity, and availability. Specifically, weaknesses existed in the following control categories: Authorization: The Library did not always establish and implement a process for documenting approvals for elevated permissions to selected systems. NIST guidance and Library policy call for such a process in order to ensure that only authorized users can access a system. Specifically, only one of the nine selected systems—Momentum—provided records documenting who approved accounts with elevated privileges and why those accounts were created.
At the conclusion of our review, the Library acknowledged in a written response that it had not fully established and implemented such a process. The Library stated that the IT Security Group has requested account creation procedures from all information system security officers and will create a POA&M for all systems without these procedures. Until the Library establishes and implements a process for documenting elevated permissions, in the event of an incident the Library may not be able to determine if an account was appropriately created or had been accidentally or maliciously assigned inappropriate permissions. Identification and authentication: The Library did not always require two-factor authentication for access to sensitive Library resources. NIST recommends using multifactor authentication for users to access network resources. Until the Library consistently uses two-factor authentication, there is increased risk that its systems will not limit access appropriately. Cryptography: The Library did not always ensure that sensitive information transmitted across its network was being adequately encrypted. NIST recommends that organizations employ cryptographic mechanisms to prevent unauthorized disclosure of information during transmission. In a written response at the conclusion of our review, the Library acknowledged this weakness and opened a POA&M to address it, with a scheduled completion date of July 2015. Until the Library addresses this weakness, there is increased risk that an individual could capture information, such as user credentials or other sensitive data, and use the information to gain unauthorized access to data and system resources. Background investigations: As previously mentioned, the Library did not perform background investigations for seven individuals with elevated privileges to three systems—Library of Congress Data Network, PICS/NIOSS, and OSEP Physical Security Network.
NIST guidance and Library policy call for personnel to undergo background screening commensurate with their level of access to Library systems in order to ensure that they are trustworthy and meet established security criteria. At the conclusion of our review, the Library said in a written response that it would take action to address this by performing background investigations for six of the individuals. The Deputy Chief of OSO stated that the office removed elevated privileges to the OSEP Physical Security Network for the remaining individual. Environmental safety: The Library did not ensure that an annual inspection of the fire suppression system for the primary data center was performed in a timely manner. NIST recommends that organizations employ and maintain fire suppression and detection systems for the information systems and regularly inspect those systems for deficiencies. After we informed the Library of this issue, an inspection of the system was performed. In addition to the above weaknesses, we identified other security weaknesses in controls related to authorization, configuration management, boundary protection, patch management, and physical security that limit the effectiveness of the security controls on the selected systems and unnecessarily place sensitive information at risk of unauthorized disclosure, modification, or exfiltration. We intend to issue a separate report with limited distribution to describe in greater detail the control weaknesses we identified during this review. Recognized industry best practices call for ensuring that an organization's IT services are aligned with and actively support its business needs. As the central IT organization within the Library, the Office of Strategic Initiatives' Information Technology Services (ITS) directorate is responsible for providing an array of IT services to other units within the Library. However, ITS has not ensured that its services support the business needs of the Library.
While it has developed a catalog that identifies the services it provides to other units within the agency, it has not established service-level agreements for all these services that include agreed-upon performance targets. The Library has drafted a new policy for such service-level agreements, but it has yet to be finalized. Further, our survey of service units within the Library revealed that they were often not satisfied with the services provided by ITS. Although ITS has begun to conduct customer satisfaction surveys, it has not developed a plan for improving satisfaction with its services Library-wide. Moreover, inconsistent satisfaction with the services provided by the Library's central IT office has likely contributed to duplicative or overlapping efforts across the Library. Specifically, units across the Library performed many of the same functions as ITS, including maintaining their own networks and servers, purchasing duplicate copies of desktop software, and maintaining duplicative security solutions. The development and implementation of Information Technology Infrastructure Library practices are widely recognized as hallmarks of successful public and private IT organizations. These include key practices to ensure that IT services are aligned with the business needs of an organization and actively support them. For example: A service catalog identifies all current IT services delivered by the service provider to its customers. A service-level agreement (SLA) establishes an agreement between an IT service provider and a customer to describe the IT services, specify the responsibilities of both parties, and document the expected service-level targets. Organizations should define how these agreements are to be structured such that IT services and customers are covered in a manner best suited to the organization's needs (e.g., one agreement for each service or one for each customer).
As the central IT organization within the Library, ITS provides services to the various service units. According to Library policy, ITS management and staff are to work to satisfy customer requirements, provide outstanding customer service, and represent customer interests. The Director of ITS stated that ITS has adopted Information Technology Infrastructure Library practices in order to meet customer needs. Although ITS has developed a service catalog, it has not fully established SLAs. Service Catalog: To its credit, ITS developed a service catalog that captures its current IT services. Specifically, the catalog identified 31 administrative, management, and technical IT services that are available to ITS customers. For example, ITS provides services that cover service desks (e.g., problem management), backup and recovery, and network services (e.g., design, construction, security and maintenance). SLAs: The Library has not defined a structure for ensuring that IT services and customers are covered by SLAs in a manner that meets the service units' needs. In the absence of such a structure, ITS has established 19 SLAs with individual service units, each of which describes the services that ITS will provide and the roles and responsibilities of each party. For example, ITS established an SLA with the Copyright Office for the management, operation, maintenance, and security of eCO. The SLA identifies six services—including services relating to disaster recovery and database management—and describes the roles and responsibilities for both ITS and Copyright. However, the 19 SLAs do not fully address all IT services and customers, or always establish expected service-level targets. Specifically: The SLAs do not address all of the services in ITS's service catalog. Specifically, the agreements collectively only address 14 of the 31 services ITS provides to the Library of Congress.
For example, ITS does not have an SLA that addresses the services it provides to CRS for its Enterprise Infrastructure General Support System. The SLAs do not include service-level targets for all services. Specifically, only 9 of the 19 SLAs contained such targets, covering 3 of the 31 services. For example, according to the SLA governing the management of eCO, in the event of a disaster that affects IT operations in the main data center, ITS is to recover the eCO system at its alternate computing facility within 24 hours of the disaster. However, ITS did not establish targets for any of the other 28 services in its service catalog, such as the amount of time systems are available as part of its hosting service. According to the ITS Assistant Director for Operations, ITS recently established service-level targets for one additional service: the service desk. These targets pertained to how quickly problems should be resolved, depending on their severity. He added that these targets reflect ITS's intent to provide the same level of service to all of its customers. However, the service units have not agreed to these targets. According to the ITS Assistant Director for Operations, he briefed the service units on these targets and received feedback, but acknowledged that ITS did not establish formal agreements with the service units. Without agreement from the service units, it is unclear whether these service-level targets will meet the unique business needs of the Library's service units. The ITS Assistant Director for Operations further stated that ITS has drafted a policy documenting its approach to developing SLAs. If approved, the policy will call for ITS to continue to develop two types of SLAs—(1) those with individual service units to define a unique service, and (2) SLAs with the Library of Congress as a whole. Although the draft policy calls for the use of service-level targets for Library-wide SLAs, it does not do so for SLAs with individual service units.
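Severity-based targets of the kind ITS briefed to the service units lend themselves to mechanical monitoring. The following is a minimal sketch, assuming hypothetical target values and an illustrative ticket layout (the report does not state ITS's actual figures):

```python
# Minimal sketch of checking help-desk tickets against severity-based
# service-level targets. Target hours and ticket fields are illustrative
# assumptions, not ITS's actual values.

RESOLUTION_TARGET_HOURS = {
    "critical": 4,    # hypothetical: resolve critical problems within 4 hours
    "high": 24,
    "medium": 72,
    "low": 120,
}

def missed_targets(tickets):
    """Return the IDs of tickets resolved later than their severity's target."""
    return [
        t["id"] for t in tickets
        if t["resolved_hours"] > RESOLUTION_TARGET_HOURS[t["severity"]]
    ]

sample_tickets = [
    {"id": 101, "severity": "critical", "resolved_hours": 3},
    {"id": 102, "severity": "high", "resolved_hours": 30},   # over the 24-hour target
    {"id": 103, "severity": "low", "resolved_hours": 100},
]
```

A report of this kind, run against formally agreed-upon targets, would give ITS and the service units a shared, measurable basis for the service-level discussions described above.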
Moreover, given that the policy is to govern ITS, it is unclear whether the policy will reflect an SLA structure that covers the IT services and customers in a manner best suited to meet the needs of both ITS and its customers. For example, the policy would no longer allow ITS to enter into agreements with individual service groups within a service unit—such as NLS. At the conclusion of our review, the ITS Assistant Director for Operations told us that ITS submitted the policy to the Library's interim CIO in January 2015. He also stated that the ITSC and Executive Committee will complete their reviews by May 2015 and that SLAs under the new policy will be completed by September 2015. Until the Library establishes and implements an SLA structure—to include the use of service-level targets—that meets the needs of the organization, there is increased risk that ITS will not provide services that meet the needs of its customers. In the case of the Copyright Office, this risk has been realized. For example, according to the Copyright CIO, the Library controls when eCO is shut down for maintenance and these shutdowns have, at times, been scheduled during periods of heavy traffic from the office's external users. The weaknesses in ITS's implementation of service-level management practices were reflected by inconsistent satisfaction with the services that it provides. To be successful, IT organizations should measure the satisfaction of their users and take steps to improve it. In this regard, effectively managing activities to improve user satisfaction requires planning and executing such activities in a disciplined fashion. The Software Engineering Institute's IDEAL model is a recognized approach for managing efforts to make system improvements.
According to this model, user satisfaction improvement efforts should include a written plan that serves as the foundation and basis for guiding improvement activities, including obtaining management commitment to and funding for the activities, establishing a baseline of commitments and expectations against which to measure progress, prioritizing and executing activities and initiatives, determining success, and identifying and applying lessons learned. Through such a structured and disciplined approach, improvement resources can be invested in a manner that produces optimal results. However, ITS has not demonstrated that user satisfaction improvement efforts are being guided by a documented plan that defines prioritized improvement projects and associated resource requirements, schedules, and measurable goals and outcomes. Instead, efforts that the office undertook to improve user satisfaction were ad hoc and did not meet with success. Specifically, ITS only measures user satisfaction for 1 of the 31 services it provides to other service units (help desk services). In the absence of comprehensive data on ITS customer satisfaction, we surveyed the heads of the Library’s seven service units, as well as the head of NLS, about the extent to which they were satisfied with the IT services provided by ITS. The results showed that ITS’s customers—the Library’s service units—vary in the extent to which they are satisfied with the services provided by ITS, but collectively these customers are generally not satisfied. Specifically, the average score for all IT services provided by ITS was 3.17 (on a 5-point satisfaction scale, where 1 is very dissatisfied and 5 is very satisfied), and the scores for each service ranged from a low of 2.33 to a high of 4.40. More specifically, only 2 of the 29 services had an average score above 4, indicating that service units were generally very satisfied or somewhat satisfied with these services. 
The majority of the services—19 of the 29 services—ranged from 3.75 to 3.0, indicating that service units were generally neither satisfied nor dissatisfied or somewhat satisfied with these services. Lastly, a little more than a quarter of the services (8 of 29) had an average score below 3, which indicates that service units were generally neither satisfied nor dissatisfied or somewhat dissatisfied. Table 6 shows the average customer satisfaction score for each of the 29 IT services that ITS provides. In addition to providing scores, the survey respondents also provided written comments. Five factors were cited by two or more respondents as contributing to their dissatisfaction with the services provided by ITS: Lack of transparency: Six of the eight respondents cited a lack of transparency from ITS as a source of dissatisfaction with its services. For example, two respondents discussed transparency issues with the Library’s IT Continuity of Operations Planning: one cited issues with testing, while the other discussed a need for more transparency with respect to the decision making on the priority of systems to be recovered in the event of a disaster. Further, one respondent stated that OSI/ITS develops the IT strategic direction for the organization without consulting with the senior leadership of the respondent’s service unit. Poor quality of service: Five respondents cited the poor quality of service provided by ITS as a key source of dissatisfaction. For example, three respondents described problems with the Library’s telework infrastructure, noting that the service is frequently unavailable. Additionally, two respondents stated that the configuration of the software used to prevent access to certain websites results in overbroad restrictions. One respondent also noted that the implementation of the process used to gain access to restricted sites is uneven—some requests are resolved in a timely manner, while others are not. 
In addition, one respondent stated that during recent data center emergencies, data were lost by ITS. Similarly, according to that respondent, during a power outage in 2009, ITS could not maintain power to the data center because of known problems with one of its emergency power devices, resulting in unplanned outages and disruptions to day-to-day operations. Inconsistent implementation of IT management processes: Five respondents cited inconsistent implementation of IT management processes as one of the reasons for their dissatisfaction with some of the services provided by ITS. For example, two respondents described weaknesses in ITS’s implementation of project management practices, noting that ITS needs more experienced project managers and improvements in cost estimating. Additionally, four respondents cited problems with ITS’s change management practices: two respondents stated that ITS frequently does not follow its established process, one said that the process takes months to review applications, and the other said that more consistent configurations and settings are needed. Furthermore, two respondents stated that the Library has not conducted testing needed for its continuity of operations and disaster recovery process. Additionally, three respondents described challenges relating to the certification and accreditation process, citing problems with cost, timeliness, and inconsistent implementation. Inconsistent communication: Four respondents cited inconsistent communication as a reason for dissatisfaction with some of the services provided by ITS. For example, three respondents stated that ITS did not always effectively communicate when outages in IT systems were to occur. One respondent described instances where they were informed of outages affecting public-facing systems from external customers before they were notified by ITS. In addition, one respondent described instances where they received conflicting information and direction from ITS. 
Use of outdated technology: Four respondents cited outdated technology used by ITS as a reason for dissatisfaction with its services. For example, two respondents stated that ITS needs to invest in a modern infrastructure; one respondent explained that, without such an infrastructure, the Library will not be able to meet the technical requirements for acquiring and stewarding digital collections. According to ITS, it has recently taken steps to measure and improve user satisfaction. For example, in September 2014, ITS began conducting surveys of customers that used its service desk, and reported that it received very positive feedback on this service. However, ITS has not demonstrated that user satisfaction improvement efforts are being guided by a documented plan that defines prioritized improvement projects and associated resource requirements, schedules, and measurable goals and outcomes. Given the Library’s reliance on IT services provided by ITS, as well as the results of our survey, it is critical that ITS identify and implement improvements in a disciplined and structured fashion. For example, as discussed later in this report, continued dissatisfaction with ITS services may have led customers to perform the services themselves in order to improve their IT performance and decrease their reliance on ITS. Without a documented improvement plan, efforts to improve user satisfaction may be reduced to trial and error, and ITS will not be able to adequately ensure that it is effectively investing resources on improvement efforts that will satisfy users. The lack of an enterprise-wide approach to managing IT, in combination with dissatisfaction with the services provided by ITS, has contributed to other service units independently performing duplicative or overlapping activities in support of their business needs. 
For example, although ITS is responsible for the Library's primary IT service desk (which provides IT troubleshooting service to all Library personnel), CRS and Library Services maintain separate service desks for their personnel. According to the CRS CIO and the Chief of Library Services Automation Planning & Liaison Office, although the service desks maintained by CRS and Library Services perform some functions that ITS's service desk performs (e.g., resetting passwords), their service desks also perform unique functions. However, because they perform some overlapping functions, the Library may be spending more than it needs to on these service desks. As another example, as previously mentioned, according to the official who served as acting CIO from April 2014 to January 2015, each service unit is responsible for managing its own human capital skills. For example, that official told us that, for its own staff needs, OSI identifies skills and competencies when an individual leaves the organization or when OSI plans to hire additional staff. Additionally, although the service units vary in the extent to which they purchase IT, all of them have purchased commodity IT in the past 3 years. For example, most of the service units have purchased desktops, laptops, and workstation software. Table 7 identifies commodity IT purchased by Library service units and NLS in the past 3 years. As a result, there is increased risk that the service units will make duplicative investments in commodity IT. In the case of monitors and workstation software, this risk has been realized. Because, as previously discussed, the Library did not have an accurate inventory of its non-capitalized IT assets, we visited the Library's warehouse in Landover, Maryland. At that facility, we observed that, as of December 2014, ITS had approximately 100 24-inch monitors that were purchased in 2010.
However, instead of using the monitors purchased by ITS, Library Services purchased 82 additional 24-inch monitors between June 2013 and July 2014. Of particular concern is that Library Services purchased all of these monitors after the Library’s IG issued a report noting this surplus of monitors. According to Library Services, at the time that it purchased the monitors, it was not aware that ITS had 24-inch monitors. It added that ITS previously maintained a “PC Store” from which Library Services acquired computers, monitors, printers, and scanners; however, this service was discontinued about 3 years ago. Since that time, according to Library Services, its attempts to purchase equipment have met with mixed success, and it has often needed to acquire equipment independently. Finally, Library Services stated that it recently became aware of monitors available through ITS and is working with them to obtain as many as possible to meet its needs. The Library has also made duplicative investments in desktop software, which has led the Library to purchase too many licenses. For example, collectively, the Copyright Office, Library Services, and ITS purchased 459 licenses to Microsoft’s Visio 2010 Professional, but as of November 2014 were only using 227. According to the ITS Assistant Director for Operations, service units are responsible for tracking the usage of licenses that they procure. He noted, however, that ITS is implementing a new system to be used to track and analyze data on the usage of software licenses. Additionally, according to the ITS Assistant Director for Operations, in some cases the Library has decided to purchase more licenses than are currently required in anticipation of additional Library employees, existing employees who will be reassigned to roles requiring the licenses, and new contractors who will need the licenses. However, this does not explain why the Library is not using almost half of the licenses it purchased for this application. 
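Underutilization of this kind is straightforward to surface once purchase and usage counts are tracked centrally. As a hedged sketch of the kind of analysis the license-tracking system ITS says it is implementing might perform (the function and data layout are illustrative assumptions; the Visio 2010 Professional counts come from this report):

```python
# Sketch: flag software products whose purchased licenses substantially
# exceed the number in use. The data layout and threshold are illustrative
# assumptions; the Visio 2010 Professional counts are from this report.

def license_surplus(products, threshold=0.25):
    """Return {product: unused_count} for products where more than
    `threshold` of purchased licenses sit unused."""
    surplus = {}
    for name, (purchased, in_use) in products.items():
        unused = purchased - in_use
        if purchased and unused / purchased > threshold:
            surplus[name] = unused
    return surplus

inventory = {
    # purchased, in use as of November 2014
    "Visio 2010 Professional": (459, 227),
}
```

On the figures above, 232 of 459 licenses (roughly half) were unused, well past any reasonable surplus threshold.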
According to the Copyright Office Chief of Operations, the Copyright Office needs software that can allow access to the many and varied digital formats submitted by registration customers. He added that, in recent years, ITS has increasingly required the Copyright Office to purchase its own licenses for applications that were previously centrally funded and that the Copyright Office does not consistently receive information from ITS regarding what licenses the Library has or how many users are on each license. In addition to purchasing commodity IT, each of the service units and NLS perform many of the same types of IT activities. For example, CRS, NLS, OSO, and OSI manage and support servers. Table 8 identifies key IT activities independently performed by the service units. In performing these activities, the Library has made potentially duplicative investments in IT for four of the eight IT activities identified above. Specifically: Server management and support: ITS and CRS each operate and maintain separate environments for the same server virtualization solution: VMware. As another example, OSEP and CRS maintain, separate from ITS's Application Hosting Environment, their own technical infrastructures for hosting their organizations' systems. Network management: OSEP's Physical Security Network is completely separate from the rest of the Library's network. Consequently, OSEP acquires and maintains network devices, many of which would not be needed if its systems were hosted on the Library of Congress Data Network. Additionally, although more integrated with the Library of Congress data network than OSEP's network, CRS's Enterprise Infrastructure General Support System also includes a number of network devices that CRS purchases and manages largely independent of ITS. Directory services management: ITS, CRS, and OSEP maintain separate environments for authenticating and authorizing users and computers. These three organizations also maintain separate e-mail environments.
Additionally, ITS and OSEP both operate and maintain different solutions for performing two-factor authentication. Internet and web management: Although OSI has a Web Services division, which is responsible for developing strategies, plans, standards, and policies to guide the Library's web initiatives, the Copyright Office updated its website in July 2014 independently of OSI's Web Services division. The Special Assistant to the Register of Copyrights stated that this was because OSI did not understand the office's requirements and needs. However, the Chief of the Web Services division stated that he had met with Copyright staff and attempted to reach agreement on updating the website collaboratively. In addition, although the Copyright Office only performs two of the above-mentioned IT activities, officials have recently expressed their intent to make additional investments in IT that could be duplicative of activities performed by ITS. In particular, the Copyright Office has requested funding for its own software application development environment, as well as a "digital repository" for deposits of works (e.g., films, books, music, photographs, and software) for which copyright owners are asserting ownership and seeking protection. However, ITS has a software application development environment, and currently works with the Copyright Office to maintain digital deposits. Another consequence of potentially duplicative IT activities is that the Library may be spending more than it needs to on IT-related staff. As previously mentioned, the IT activities performed outside of OSI and ITS are performed and led by the 134 IT staff that work for other service units. In fiscal year 2014, the Library spent about $15 million on the salaries for these staff. Table 9 identifies the amount that each service unit spent on salaries for IT staff in fiscal year 2014.
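A consolidation review typically begins by mapping which units perform which IT activities and flagging those performed by more than one unit. The following is a minimal sketch of that first step; the activity lists are illustrative examples drawn from this report, not a complete inventory:

```python
# Sketch: identify IT activities performed by more than one unit, as a
# first step toward assessing consolidation opportunities. The activity
# lists below are illustrative examples, not a complete inventory.

ACTIVITIES = {
    "ITS":  {"server management", "network management", "directory services"},
    "CRS":  {"server management", "network management", "directory services"},
    "OSEP": {"network management", "directory services"},
    "NLS":  {"server management"},
}

def overlapping_activities(activities):
    """Map each activity to the set of units performing it, keeping only
    activities performed by two or more units."""
    by_activity = {}
    for unit, acts in activities.items():
        for act in acts:
            by_activity.setdefault(act, set()).add(unit)
    return {a: units for a, units in by_activity.items() if len(units) > 1}
```

An inventory built this way would let the Library weigh, for each overlapping activity, the costs of the separate environments against the business reasons the units cite for maintaining them.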
As described in more detail below, officials from CRS and OSEP offered various reasons for why they needed to manage much of their IT independent of ITS, and the Copyright Office described reasons why the structure used to manage its IT systems is not adequate. According to the Director of CRS, CRS tries to leverage ITS resources whenever possible, and the Director described the division between ITS and CRS as a “division of labor.” However, the Director also stated that CRS needs to maintain independence when managing its IT because of its unique mission in support of Congress. In particular, the Director stated that CRS must be able to (1) provide information to Congress quickly and (2) keep its information confidential. Regarding the timeliness of CRS’s responses to Congress, a senior advisor to the CRS Director noted that CRS is directed by law to provide efficient and effective service to Congress. With respect to confidentiality, the Deputy Director of CRS told us that CRS considers its information gathering to be covered under the “Speech or Debate Clause” of the U.S. Constitution. Accordingly, that individual stressed the importance of keeping CRS’s information confidential, and expressed concern about storing CRS data with the rest of the Library’s data. To this end, the Director of CRS stated that it must have separate IT because its IT group has a better understanding of what CRS and Congress need than ITS. A senior advisor to the CRS Director added that CRS is directed by law to have “the maximum practicable administrative independence” in performing its duties to Congress. According to this individual, the division between ITS and CRS has evolved over the years as a result of this administrative independence. Regarding OSEP, a senior electronic security engineer explained that it maintains its systems independent of ITS because it has always done so. 
In particular, that official stated that previous iterations of its camera and physical intrusion detection systems were not integrated with IT and, therefore, OSEP did not need the assistance of ITS. He noted, however, that these systems are now integrated with IT. The OSEP Director said the office is open to a solution that involves ITS but that ITS had expressed a lack of knowledge of security systems. The Director added that OSEP coordinated with ITS on a statement of work for an assessment of staffing, technology improvements, and best practices for its IT. With respect to the Copyright Office, its General Counsel stated in a memo prepared for GAO that the “current IT regime impedes the Copyright Office’s ability to carry out its legal responsibilities.” Among other things, the General Counsel stated that the existing Library IT infrastructure cannot ensure the security or integrity of digital deposits. For example, she explained that the Library has decided to host Copyright IT systems in the same environment as other Library systems, with the result that ITS staff—not Copyright Office staff—are responsible for administering many of the security controls. Additionally, the General Counsel stated that, despite the requirement to be able to retain published works for 75 years and unpublished works for the full term of the copyright, the Copyright Office’s eCO system does not have the capability to validate the integrity of these works. Further, the Library does not have any systems that are capable of storing digital deposits for 75 years or more. These concerns are understandable, given that service units were often not satisfied with the services provided by ITS. Accordingly, service units may believe that pursuing IT solutions and commodity IT independent of ITS is their only viable alternative. Nevertheless, allowing service units to do so likely increases costs and inefficiencies. 
Our research on reducing duplicative IT investments in the executive branch has found that through the Office of Management and Budget’s PortfolioStat initiative—a process where agencies gather information on their IT investments and develop plans for consolidation and increased use of shared-service delivery models—agencies can avoid duplicative, wasteful, and low-value investments. Congress also recently recognized the value of these reviews when it required executive branch agencies to complete them annually. However, the Library has not performed such a review. Service units’ independent pursuit of IT activities presents an opportunity for the Library to both explore the costs and benefits of the existing duplicative or overlapping IT activities and identify areas for consolidating or eliminating services where appropriate. The individual who served as the Deputy Librarian from June 2012 until December 2014 acknowledged that service units perform IT activities that are duplicative of those performed by ITS. The former Deputy Librarian also noted that one goal of the draft IT strategic plan whose development he led was to use shared services to collaboratively establish IT systems that meet common requirements across organizations. He described the following actions that he took to consolidate IT management during his tenure:

Web Governance Board: According to the former Deputy Librarian, in 2010, he established the Web Governance Board in order to ensure that the Library’s web presence is coordinated across the service units. The Deputy Librarian chaired this board from January 2010 until December 2014. He added that, prior to the development of the board, many of the service units developed their websites independently. 
Additionally, the former Deputy Librarian stated that he led the development of the Library’s web strategy, which identified three core areas for transforming the Library’s web presence: (1) Congress, (2) National Library, and (3) Copyright. However, as previously mentioned, the Copyright Office updated its website in July 2014 independent of OSI’s Web Services division.

Geospatial information systems: The former Deputy Librarian told us that the Law Library, CRS, and Library Services previously pursued geospatial information system solutions independently. However, he tasked these service units with collaboratively implementing a geospatial hosting environment that will enable Library of Congress staff and patrons, as well as Congress, to perform research and analysis using geospatial datasets acquired by the Library.

Mobile devices: According to the former Deputy Librarian, in the past, service units independently acquired cell phones for managers. He told us that in 2014, as part of a program to upgrade the Library’s aging cell phones, he required the service units to acquire cell phones using one contract.

Although these activities could improve coordination and thus reduce overlap in IT activities throughout the Library, in the absence of a PortfolioStat-type assessment of the costs and benefits of consolidating IT activities, the service units will continue to spend money on IT that may not constitute an efficient use of Library resources. Until such an assessment is completed, the Library will not be able to justify whether its IT spending provides the appropriate balance of meeting business needs and saving taxpayer dollars. Our research and experience at federal agencies indicate that agencies should have a CIO with responsibility for managing their IT—including commodity IT—and clearly define responsibilities between the CIO and officials responsible for IT management at component organizations. 
Congress has also recognized the need for strong CIOs, and recent legislation has reaffirmed this by strengthening the CIO position in executive branch agencies. However, the Library does not have the leadership needed to address the IT management weaknesses identified in this report. Specifically, the Library’s CIO does not have adequate responsibility for the agency’s IT—in particular, authority over commodity IT and oversight of investments in mission-specific systems made by other service units. Further, responsibilities and authorities of the CIO and personnel responsible for IT management at the service unit level have not been clearly defined. These challenges have been exacerbated by the fact that the Library has had five temporary CIOs since 2012 and by the recent reassignment of the Deputy Librarian, who, in the absence of a CIO, had led a number of IT efforts. After we shared our preliminary results with the Library, the Librarian announced plans to hire a permanent CIO and Deputy CIO. According to the Chief of Staff, the Library plans to appoint these officials by September 2015. While appointing a permanent CIO could potentially address the Library’s gap in IT leadership, the details of this position have yet to be fully defined. Until it establishes strong IT leadership, the Library will continue to face difficulties in addressing its numerous IT management weaknesses. According to our research and experience at federal agencies, leading organizations adopt and use an enterprise-wide approach to managing IT, under the leadership of a CIO, that includes the following elements:

Responsibility for commodity IT: The CIO should have the responsibility and authority, including budgetary and spending control, for commodity IT across the entity. Consolidating commodity IT under a CIO can help to reduce duplicative services and make it easier for an organization to effectively negotiate with vendors for volume discounts and improved service levels. 
We have previously reported that, according to CIOs, more control over component-level IT funding, including commodity IT, could help ensure greater visibility into and influence on the effective acquisition and use of IT.

Oversight of mission-specific systems: The CIO should have the ability to adequately oversee mission-specific systems to ensure that funds being spent on component agency investments will fulfill mission needs. We previously reported on the importance of agency CIOs having adequate oversight to ensure that funds being spent on IT component agency investments, including mission-specific systems, are aligned with the needs of the organization.

Clear relationships between CIO and components: The responsibilities and authorities governing the relationships between the CIO and component organizations should be defined. We have previously reported that the effectiveness of agency CIOs depends in large measure on having clear roles and authorities.

Congress has also recognized the importance of having a strong agency CIO. In 1996, Congress passed the Clinger-Cohen Act, which established the position of agency CIO for executive branch agencies and gave these officials responsibility and accountability for IT investments, including IT acquisitions, monitoring the performance of IT programs, and advising the agency head whether to continue, modify, or terminate such programs. More recently, Congress enacted information technology acquisition reforms, which required most executive branch agencies to ensure that the CIO had a significant role in the decision process for IT budgeting, as well as the management, governance, and oversight processes related to IT. This legislation also required that CIOs review and approve (1) all contracts for IT or IT services prior to executing them and (2) the appointment of any other employee with the title of CIO, or who functions in the capacity of a CIO, for any component organization within the agency. 
Although these laws are not applicable to the Library, they demonstrate that Congress recognizes the importance of strong CIOs in federal agencies. (See Clinger-Cohen Act of 1996, Pub. L. No. 104-106 (Feb. 10, 1996), §§ 5122 & 5125; 40 U.S.C. § 11315 and 44 U.S.C. § 3506(a).) However, the Library’s approach to IT management falls short of these elements in several respects.

Commodity IT: The Library’s CIO does not have responsibility for the Library’s entire commodity IT even though a significant portion of the Library’s IT funding is allocated and spent at the service unit level on commodity IT systems. As previously mentioned, each service unit has independently purchased commodity IT in the past 3 years, and, in some cases, this has led to wasteful spending.

Mission-specific systems: The Library’s CIO does not have the ability to adequately oversee mission-specific systems to ensure that funds being spent on component agency investments will fulfill mission needs. As previously mentioned, although the Library has established elements of an investment management process, the ITSC, which is to be chaired by the CIO, does not review all major IT investments. Additionally, as noted previously, the former acting ITSC chair told us that ITSC approvals do not affect decisions to allocate funding for investments, as service units have already secured funding for the investments before the selection process begins. Until the Library gives its CIO adequate visibility into mission-specific systems, there is increased risk that these investments will experience significant cost and schedule overruns, with questionable mission-related achievements.

Relationships between CIO and component IT leadership: The Library has not clearly defined the responsibilities and authorities governing the relationships of the CIO and component organizations. In particular, although each service unit performs IT management activities to varying extents, the Library has not defined the relationships between the CIO and those in the service units responsible for those functions. 
Of particular concern is the lack of defined relationships between the Library’s CIO and the other two CIO positions that exist in the Library—one at the Copyright Office and the other at CRS. Until the responsibilities and authorities governing the relationships between the Library CIO and service unit IT leadership are clearly defined, the Library CIO may not be able to effectively manage and oversee component IT spending. Compounding the lack of CIO authority, the Library has lacked consistent leadership in this position. We have previously noted that one element that influences the likely success of an agency CIO is the length of time the individual in the position has to implement change. For example, in our prior work on agency CIOs, we reported that CIOs and former agency IT executives believed it was necessary for a CIO to stay in office for 3 to 5 years to be effective and 5 to 7 years to fully implement major change initiatives in large public sector organizations. However, since the departure of the most recent permanent CIO in 2012, four individuals have served as acting CIO, and another was recently appointed to serve in an interim capacity until a permanent CIO is found. Upon the last permanent CIO’s departure in June 2012, the Deputy CIO served as acting CIO until August 2013. Subsequently, three senior officials within OSI took turns serving as CIO, with the first two serving in that role for 4 months each, and the third from April 2014 to January 2015. The most recent former acting CIO noted that she was originally only assigned to serve in the position for 4 months. However, her tenure as acting CIO was extended twice: once in August 2014, when it was extended until December 2014, and again in December 2014, when it was extended until March 2015. 
Finally, in January 2015, a new interim CIO was appointed when the Librarian detailed the Director of the Office of Public Records and Repositories at the Copyright Office to that position until a permanent CIO is appointed. According to the official who served as Deputy Librarian from June 2012 until December 2014, he did not advocate for hiring a CIO during his tenure for two reasons. First, he stated that the Library needed to develop an IT strategic plan before appointing a permanent CIO in order to provide that individual with priorities. Second, the former Deputy Librarian explained that he did not want to hire a CIO to oversee, among other things, the IT activities performed by CRS and the Copyright Office, when he had not been empowered by the Librarian with the authority to manage these offices’ IT activities. In the absence of a CIO, the former Deputy Librarian managed many of the Library’s recent IT efforts. For example, as previously mentioned, the former Deputy Librarian (1) drafted an IT strategic plan, (2) chaired the Web Governance Board, and (3) led the Library’s efforts to consolidate mobile phone contracts and geospatial information systems. However, in December 2014, the Librarian reassigned the individual serving as Deputy Librarian to be a senior advisor to the Librarian. Subsequently, in January 2015, the Librarian appointed the Law Librarian to be the new Deputy Librarian. After we shared the preliminary results of our review with the Library in January 2015, the Librarian announced plans to hire a permanent CIO and deputy CIO; according to the Chief of Staff, the Library plans to do so by September 2015. While this search is conducted, the interim CIO will be responsible for drafting the Library’s IT strategic plan, chairing the Web Governance Board, and leading the Library’s efforts to consolidate mobile phone contracts and geospatial information systems. 
Although appointing a permanent CIO could potentially address the Library’s gap in IT leadership, the details of this position have yet to be fully defined. If the Library hires a permanent CIO with responsibility for IT, sufficient authority, and clearly defined responsibilities, it will be better positioned to effectively acquire, operate, and maintain its IT in support of its mission. As information is increasingly created, shared, and preserved digitally, effectively managing its IT resources will be even more critical for the Library to carry out its mission of preserving and making available the knowledge and creative output of the American people. To its credit, although not required to do so, the Library has embraced standards and practices set forth in laws that require executive branch agencies to develop processes for investment management, information security, and privacy. However, widespread weaknesses in implementing these processes and several other IT management disciplines call into question whether the Library is well positioned to meet the challenges of making the most efficient and productive use of its technology resources. Just as important, the Library’s lack of strong, consistent leadership in these areas has hampered its ability to make needed improvements in the face of long-standing challenges. Specifically, without an up-to-date IT strategic plan that is linked to the overall agency strategic plan and includes goals, performance measures, strategies, and interdependencies among projects, the Library will lack a clear definition of what it wants to accomplish with IT and strategies for achieving those results. This challenge is compounded by the lack of a complete and reliable enterprise architecture that accurately captures the Library’s current IT environment, describes its target environment, and outlines a strategy for transitioning from one to the other. 
Additionally, the Library will be hindered in carrying out an IT strategy without an organization-wide assessment of its human capital needs, and plans for addressing any gaps. Further impeding the Library’s ability to make strategic decisions is an incompletely implemented process for managing the selection and oversight of its IT investments. Specifically, the lack of clearly defined roles and responsibilities and other gaps in policies have resulted in an inconsistent approach to reviewing and selecting investments for the Library’s portfolio. As a result, there is less assurance that proposed investments are receiving adequate scrutiny and that the Library is expending its resources on the appropriate mix of systems that will effectively and efficiently meet its needs. Moreover, by not applying adequate oversight to investments that have already been selected, the Library is not in a position to ensure that they are meeting cost, schedule, and performance goals and delivering the capabilities the agency needs to carry out its mission. More fundamentally, because the Library does not have accurate data on what it spends on IT each year or an accurate inventory of IT assets, it is limited in its ability to make informed investment decisions or ensure that it does not waste money on IT. Concerns about the Library’s ability to acquire IT systems that meet its needs are further raised by the absence of organization-wide policies to ensure that its systems acquisition process follows disciplined practices in the areas of risk management, requirements development, cost estimating, and schedule development. The lack of such policies has led to the incomplete implementation of these practices among the investments we reviewed. Without following such key practices, the Library will be challenged in ensuring that systems are delivered on time and within budget and that they deliver the capabilities needed by its users. 
Another significant area of concern is the Library’s inconsistent implementation of agency-wide information security and privacy programs. While, to its credit, the Library has established roles and responsibilities and policies and procedures for information security and privacy, significant weaknesses in implementing key security management controls call into serious question whether the information and systems at the Library are being adequately protected. These weaknesses, in areas such as documenting security controls, conducting security testing, developing remedial action plans, establishing contingency plans, carrying out security training, ensuring that contracts address security requirements, and assessing risks to privacy, could provide opportunities for either intentional or inadvertent compromise of the Library’s systems, resulting in unauthorized access, modification, or loss of sensitive information, or disruption to the Library’s operations. These issues are further highlighted by a number of weaknesses we found in technical security controls that are intended to limit unauthorized access to the Library’s systems and ensure their integrity. While ITS—as the central IT organization within the Library—is responsible for providing IT-related services to the Library’s other units, the lack of satisfaction with these services has contributed to the other units pursuing their own IT activities, potentially resulting in duplicative investments and wasted resources. Although the reasons units provided for managing much of their IT independently are understandable given inconsistent satisfaction with the services provided by ITS, allowing service units to do so likely increases costs and inefficiencies. 
Without a plan for improving the units’ satisfaction with ITS services and an organization-wide evaluation of the costs and benefits of the Library’s fragmented approach to carrying out IT activities, the agency may be missing opportunities to eliminate duplication, improve the efficiency of its delivery of IT services, and save taxpayer dollars. Key to all these shortcomings, the Library has lacked consistent, effective leadership for its IT efforts. Because the Library’s CIO position lacks adequate authority and oversight, the agency has diminished assurance that investments in IT are being coordinated organization-wide and that they provide an appropriate mix of capabilities that support the Library’s mission while avoiding unnecessary duplication. The Library’s intention to appoint a permanent CIO is a positive development, but it will be important to clearly define this position and ensure that this official has sufficient authority to address the many challenges facing the Library’s IT management. If it follows through on plans to appoint such an official and invests the position with the appropriate authority, the Library will be in a stronger position to address the IT management challenges we have identified and make a more effective and efficient use of technology to support its mission. To provide stable, consistent, and effective leadership for addressing the weaknesses identified in this report, as well as for improving the organization’s management of IT, we recommend that the Librarian expeditiously hire a permanent chief information officer responsible for managing the Library’s IT and ensure that this official has clearly defined responsibilities and adequate authority, consistent with the role of a chief information officer as defined by best practices. 
This should include, among other things, (1) responsibility for commodity IT; (2) oversight of mission-specific systems, through the ITSC or another oversight mechanism; and (3) clarification of responsibilities and authorities between the Library CIO and service unit IT leadership.

To provide strategic direction for the Library’s use of its IT resources, we recommend that the Librarian of Congress take the following three actions:

Complete an IT strategic plan within the time frame the Library has established for doing so. The plan, at a minimum, should (1) align with the agency’s overall strategic plan, (2) provide results-oriented goals and performance measures, (3) identify the strategies for achieving the desired results, and (4) describe interdependencies among projects.

Establish a time frame for developing a complete and reliable enterprise architecture that accurately captures the Library’s current IT environment, describes its target environment, and outlines a strategy for transitioning from one to the other, and develop the architecture within the established time frame.

Establish a time frame for implementing a Library-wide assessment of IT human capital needs and complete the assessment within the established time frame. This assessment should, at a minimum, analyze any gaps between current skills and future needs, and include a strategy for closing any identified gaps.

To provide a framework for effective IT investment management and ensure that the Library has accurate information to support its decisions, we recommend that the Librarian take the following 10 actions:

Clarify investment management policy to identify which governance bodies are responsible for making investment decisions, and under what conditions.

Establish and implement a process for linking IT strategic planning, enterprise architecture, and IT investment management.

Establish and implement policies and procedures for reselecting investments that are already operational. 
Establish and implement policies and procedures for ensuring that investment selection decisions have an impact on decisions to fund investments.

Ensure that appropriate governance bodies review all investments that meet defined criteria.

Require investments in development to submit complete investment data (i.e., cost and schedule variances and risk management data) in quarterly reports submitted to the ITSC.

Fully establish and implement policies, to include guidance for service units on classifying expenditures as IT, for maintaining a full accounting of the Library’s IT-related expenditures.

Fully establish and implement policies for developing a comprehensive inventory of IT assets.

Implement policies and procedures for conducting post-implementation reviews of investments.

Fully establish and implement policies and procedures consistent with the key practices on portfolio management, including (1) defining the portfolio criteria, (2) creating the portfolio, and (3) evaluating the portfolio.

To effectively plan and manage its acquisitions of IT systems and increase the likelihood of delivering promised system capabilities on time and within budget, we recommend that the Librarian take the following four actions:

Complete and implement an organization-wide policy for risk management that includes key practices as discussed in this report, and within the time frame the Library established for doing so.

Establish and implement an organization-wide policy for requirements development that includes key practices as discussed in this report.

Establish and implement an organization-wide policy for developing cost estimates that includes key practices as discussed in this report.

Establish a time frame for finalizing and implementing an organization-wide policy for developing and maintaining project schedules that includes key practices as discussed in this report, and finalize and implement the policy within the established time frame. 
To better protect IT systems and reduce the risk that the information they contain will be compromised, we recommend that the Librarian take the following 10 actions:

Develop a complete and accurate inventory of the agency’s information systems.

Revise information security policy to require system security plans to describe common controls, and implement the policy.

Ensure that all system security plans are complete, including descriptions of how security controls are implemented and justifications for why controls are not applied.

Conduct comprehensive and effective security testing for all systems within the time frames called for by Library policy, to include assessing security controls that are inherited from the Library’s information security program.

Ensure that remedial action plans for identified security weaknesses are consistently documented, tracked, and completed in a timely manner.

Finalize and implement guidance on continuous monitoring to ensure that officials are informed when making authorization decisions about the risks associated with the operations of the Library’s systems.

Develop contingency plans for all systems that address key elements.

Establish and implement a process for comprehensively identifying and tracking whether all personnel with access to Library systems have taken required security and privacy training.

Establish a time frame for finalizing and implementing the Library’s standard contract sections for information security and privacy requirements, and finalize and implement the requirements within that time frame.

Require the chief privacy officer to establish and implement a process for reviewing the Library’s privacy program, to include ensuring that privacy impact assessments are conducted for all information systems. 
To help ensure that services provided by ITS meet the needs of the Library’s service units, we recommend that the Librarian take the following two actions:

Finalize and implement a Library-wide policy for developing service-level agreements that (1) includes service-level targets for agreements with individual service units and (2) covers services in a way that best meets the need of both ITS and its customers, including individual service units.

Document and execute a plan for improving customer satisfaction with ITS services that includes prioritized improvement projects and associated resource requirements, schedules, and measurable goals and outcomes.

In addition, to help ensure an efficient and effective allocation of the agency’s IT resources, we recommend that the Librarian take the following action:

Conduct a review of the Library’s IT portfolio to identify duplicative or overlapping activities and investments, including those identified in our report, and assess the costs and benefits of consolidating identified IT activities and investments.

In a subsequent report with limited distribution, we will also be making a number of recommendations to address weaknesses we identified in technical security controls at the Library. We provided a draft of this product to the Library of Congress for comment. In his written comments, reproduced in appendix II, the Librarian stated that he generally concurred with our recommendations. In this regard, he described ongoing and planned actions to address them, and provided milestones for completing these actions. If effectively implemented, these actions should help address the weaknesses we identified. The Library also provided technical comments that were incorporated, as appropriate. We are sending copies of this report to the appropriate congressional committees, the Librarian of Congress, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-6253 or willemssenj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. The House Appropriations Committee report accompanying the fiscal year 2015 legislative branch appropriations bill required GAO to review the Library of Congress’s management of information technology (IT). Our specific objectives for this review were to assess the extent to which the Library of Congress (1) addressed in its strategic planning the IT and related resources required to meet its goals and objectives; (2) established an IT governance structure to manage the selection, control, and evaluation of IT investments; (3) used IT acquisition and development best practices; (4) established programs for ensuring the information security and privacy protection of its information and information systems; (5) used best practices for managing IT services; and (6) has a chief information officer (CIO) with authority to exercise control and oversight of IT management functions. To address our first objective, we reviewed the agency’s overall strategic plan, and evaluated its draft IT-specific strategic plan against key practices for IT strategic planning that we have previously identified. Those best practices include developing an IT strategic plan that aligns with the agency’s overall strategic plan; provides results-oriented goals and performance measures that permit it to determine whether it is succeeding; identifies the strategies it will use to achieve desired results; and describes interdependencies within and across projects so that these can be understood and managed. 
Additionally, because an enterprise architecture can help an organization determine how it can most effectively execute its IT strategic plan, we evaluated the agency’s enterprise architecture documentation against key practices identified in our enterprise architecture framework to determine the extent to which the Library had established a well-defined enterprise architecture, as well as demonstrated institutional commitment to its architecture. Those practices include developing an architecture that thoroughly describes the current and target states of an organization’s IT systems and business operations and identifies the gaps and specific intermediary steps that the organization plans to take to achieve its target state; developing an organizational policy for enterprise architecture; and establishing an executive committee representing the enterprise that is responsible and accountable for enterprise architecture. Further, because of the importance of sustaining an IT workforce with the necessary skills to execute an agency’s strategic plan, we obtained and reviewed the Library’s human capital plan. We compared this plan to best practices we have identified in human capital management. Those practices include analyzing the gaps between current skills and future needs and developing strategies for filling the gaps. We also interviewed the enterprise architect, architecture review board chair, Director of the Information Technology Services (ITS) directorate, former acting Chief Information Officer (CIO), former Deputy Librarian, and Librarian of Congress to obtain information about the Library’s IT strategic planning activities. 
In addressing our second objective, we compared agency documentation against critical processes associated with Stages 2 and 3 of GAO’s information technology investment management framework. Stage 2 of the framework includes the following critical processes: instituting the investment board, selecting investments that meet business needs, providing investment oversight, and capturing investment information. Stage 3 includes the following critical processes: defining the portfolio criteria, creating the portfolio, evaluating the portfolio, and conducting post-implementation reviews. Specifically, we reviewed written policies, procedures, guidance, and other documentation that provided evidence of establishing commitment to critical processes, such as Library of Congress Regulation 1600: Information Resource Management Policy and Responsibilities; the Library of Congress Information Resource Management Plan; the IT Steering Committee charter; and guidance and templates for the selection process, development stage oversight, and the post-implementation review process. We also reviewed IT Steering Committee meeting minutes to determine whether the committee was successfully implementing its documented policies and procedures, as well as for evidence of its decision-making processes. In addition, we reviewed data from the system used by Integrated Support Services to track and manage the Library’s assets, including those relating to IT. Additionally, we selected three investments as case studies to determine the extent to which key activities associated with the critical processes were being carried out. To choose these investments, we identified the 16 investments that the IT Steering Committee was overseeing or considering for review as of July 2014. To narrow this list, we excluded investments that (1) were in the planning stages, (2) had been completed, or (3) would be fully deployed prior to the completion of our review. 
We then selected the one investment that was managed by more than one service unit: Library Services and the Office of Strategic Initiatives’ (OSI) Twitter Research Access investment. We then selected two investments sponsored by service units other than Library Services and OSI to ensure coverage of other service units. These additional two investments were the Office of Support Operations’ (OSO) Facility Asset Management Enterprise (FAME) investment and the Office of the Librarian’s Momentum Upgrade and Migration investment. For these three investments, we reviewed evidence of the implementation of project-level IT investment management processes, including investment concept proposals, investment charters, development stage quarterly reports, budget plans, and an IT Steering Committee scoring worksheet that evaluated risk factors along with the significance of potential benefits. Further, we conducted interviews with officials responsible for managing the selected investments, including the Library’s investment management portfolio officer, former acting IT Steering Committee chair, and former acting CIO. We did not assess progress in establishing the capabilities found in Stages 4 and 5 because the Library has not yet implemented Stage 3 processes. In addition, because the Library had not established and implemented a process for tracking IT spending, we developed an estimate of how much it spent on IT during fiscal year 2014 using data from the Library’s accounting and human resources systems. With respect to IT equipment and services captured in the Library’s accounting system, we identified budget object class codes (i.e., codes used by the Library to classify spending) associated with IT. To do so, we performed the following three steps: First, we asked the Office of the Chief Financial Officer to identify budget object classification codes that, based on the executive branch definition of IT, were associated with IT. 
The Library identified 16 codes associated with IT. Second, we identified budget object classification codes that are consistent with the Technical Reference Model in OMB's Federal Enterprise Architecture Reference Model; this step identified an additional 26 codes. Third, we shared the additional 26 budget object classification codes with the Library's Office of the Chief Financial Officer, the National Library Service for the Blind and Physically Handicapped (NLS), and ITS to review, comment, and provide additional information. Based on their comments, we removed 16 codes from our review and added 1 code. As a result, we identified 27 budget object classification codes that were associated with IT, 4 of which were associated with both IT and non-IT spending. We then asked the Library to provide us with detailed information for all obligations it made in fiscal year 2014 that were associated with these codes. For the 4 codes that were used to classify both IT and non-IT spending, we identified the obligations classified under these codes that were greater than $2,500. For these selected obligations, we asked the service units to identify, based on the executive branch definition of IT, obligations associated with IT. We then added these obligations to those associated with the other 23 codes to complete our estimate of the Library's spending on IT equipment and services. Regarding the data in the Library's human resource system, we obtained from the Library's Human Resources division (1) the number of Library staff employed under IT-related job series during fiscal year 2014 and (2) the salary information, in aggregate form, for those employees during that fiscal year. We then added this information to our estimate of the Library's spending on IT equipment and services. We used this combined figure as our estimate of the Library's IT spending for fiscal year 2014. We then shared our estimate with each service unit to review, comment, and provide additional information. 
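In outline, the code-screening arithmetic above amounts to a few set operations plus a threshold filter. The sketch below illustrates that logic; all code values and obligation amounts are invented placeholders, not the Library's actual data:

```python
# Sketch of the budget-object-code screening (illustrative data only).
cfo_codes = {f"25{i:02d}" for i in range(16)}   # 16 codes identified by the CFO
frm_codes = {f"31{i:02d}" for i in range(26)}   # 26 codes matched to the FEA Technical Reference Model

candidate = cfo_codes | frm_codes               # 42 candidate codes
removed = set(sorted(candidate)[:16])           # 16 codes dropped after service-unit review
added = {"3199"}                                # 1 code added after review
it_codes = (candidate - removed) | added        # 27 codes associated with IT
assert len(it_codes) == 27

# For dual-use codes, keep only obligations over $2,500 that the service
# units flagged as IT; other IT codes contribute all their obligations.
dual_use = {"3100", "3101"}
obligations = [("3100", 5000.0, True), ("3100", 1200.0, True), ("3105", 800.0, True)]
estimate = sum(amt for code, amt, is_it in obligations
               if code in it_codes and is_it
               and (code not in dual_use or amt > 2500))
```

The aggregate salary figure for IT-series staff would then be added to `estimate` to reach the fiscal year total described above.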
To determine the reliability of the IT spending data, we reviewed Office of the Chief Financial Officer documentation and previous Office of the Inspector General reports on the Library's financial statements, and interviewed Office of the Chief Financial Officer officials familiar with the financial system to understand the controls used to create and classify obligations. We determined that the data were sufficiently reliable for our purpose, which was to provide an estimate of the Library's IT spending; however, the estimate does not reflect all of the Library's IT spending. For example, the Library has not defined IT and has not fully established guidance on how to classify IT expenses in its financial accounting system, Momentum. Although Library guidance identifies 5 budget object classification codes as being associated with IT, as noted, we identified additional codes that are used for IT transactions. Additionally, the Library did not ensure that all IT-related transactions were properly associated with IT-related codes. For example, OSI associated about $2.5 million of its IT budget with a code that, according to Library guidance, excludes IT spending. Further, as discussed previously, our estimate does not reflect obligations of $2,500 or less that are associated with the 4 budget object classification codes for which the Library made both IT and non-IT obligations. In addition, our estimate does not include salary information for all staff who perform key IT activities. In response to our request for the salary information for all staff whose primary job responsibility is IT, the Assistant Director of Human Resources Services provided information on employees whose job title related to the information technology management series (2210). 
However, a Copyright budget analyst and the Library's Chief Financial Officer stated that the Library has employees that perform key IT activities, but whose job titles fall outside of the information technology management series. To determine the reliability of the cost estimates for the investments reviewed by the Library's IT Steering Committee, we (1) performed testing for obvious errors in accuracy and completeness, and (2) interviewed officials knowledgeable about the template used to produce the estimates. Additionally, as discussed in more detail below, we also assessed the extent to which the estimates were created using leading practices consistent with a comprehensive estimate, as identified in GAO's Cost Estimating and Assessment Guide. However, none of the investments' estimates fully met the comprehensive characteristic. Despite this limitation, we believe that the cost data are sufficiently reliable for our purpose—that is, as an indicator of the general range of the portion of the Library's IT spending that is reviewed by the IT Steering Committee. To address the third objective, we compared Library policies and procedures in key IT acquisition management areas—risk management, requirements development, cost estimating, and scheduling—to leading practices identified by industry and GAO. We also determined the extent to which the three selected investments identified above were implementing these key IT acquisition practices. Specifically, with respect to risk management and requirements development, we reviewed policies and procedures developed by ITS, as well as acquisition documentation from the three selected investments, and compared them to risk management and requirements development best practices identified by the Software Engineering Institute's (SEI) Capability Maturity Model® Integration for Acquisition (CMMI-ACQ). 
The key risk management practices were developing a risk management strategy; identifying and documenting risks; evaluating, categorizing, and prioritizing risks; developing risk mitigation plans; and monitoring the status of each risk periodically and implementing the risk mitigation plans as appropriate. The key requirements development practices were eliciting stakeholder needs, developing customer requirements, and prioritizing customer requirements. We analyzed investment risk documentation, including risks identified in investment charters, acquisition plans, and risk registers; risk mitigation plans; and quarterly performance reports submitted to the IT Steering Committee. Additionally, we assessed investment requirements development documentation, such as requirements obtained from customers and other stakeholders, and a system gap analysis. Further, we interviewed officials responsible for managing the investments to obtain additional information about their risks, requirements, and practices for managing them. We shared our analysis with Library officials to review, comment, and provide additional information, and we adjusted our analysis where appropriate. With regard to cost estimating, we reviewed policies and procedures developed by ITS, as well as cost estimating documentation from the three selected investments, and compared them to leading practices set forth in GAO's Cost Estimating and Assessment Guide. This guide identifies 12 leading practices that represent work across the federal government and are the basis for a high-quality, reliable cost estimate. An estimate created using the leading practices exhibits four broad characteristics: it is accurate, well documented, credible, and comprehensive. Each of these characteristics is associated with a specific set of leading practices, which in turn are made up of a number of specific tasks. We assessed ITS's guidance against each of the four characteristics. 
Each characteristic was assessed as either being fully met—the Library provided complete evidence that satisfies the associated tasks of the leading practices; substantially met—the Library provided evidence that satisfies a large portion of the associated tasks of the leading practices; partially met—the Library provided evidence that satisfies about half of the associated tasks of the leading practices; minimally met—the Library provided evidence that satisfies a small portion of the associated tasks of the leading practices; or not met—the Library did not provide evidence that satisfies any of the associated tasks of the leading practices. In assessing the reliability of the estimates developed by the three selected investments, we only assessed practices associated with the comprehensive characteristic. We did so because none of the investments' estimates fully met the comprehensive characteristic, and this characteristic must be completed in order for the estimate to fully address the other three characteristics. We assessed these estimates using the same scoring methodology (i.e., fully met, substantially met, partially met, minimally met, and not met) as described above for the review of ITS's cost estimating policies and procedures. We shared our analysis with Library officials to review, comment, and provide additional information. Finally, regarding our assessment of the Library's scheduling, we reviewed policies and procedures developed by ITS, as well as scheduling documentation from the selected investments, and compared them to leading practices set forth in the exposure draft of GAO's Schedule Assessment Guide. This guide defines 10 leading practices that are vital to having integrated and reliable master schedules. Similar to a well-developed cost estimate, a schedule created using the leading practices exhibits four broad characteristics: it is comprehensive, well-constructed, credible, and controlled. 
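The five-level scoring methodology described above can be sketched as a simple mapping from the portion of leading-practice tasks satisfied to a rating. Note that the numeric cutoffs below are our own illustrative assumptions; GAO's guides describe the levels qualitatively ("large portion," "about half," "small portion"), not as fixed percentages:

```python
def rate(satisfied: int, total: int) -> str:
    """Map the fraction of tasks satisfied to the five-level scale.
    Cutoffs are illustrative, not defined by GAO's guides."""
    if total == 0:
        raise ValueError("no tasks to assess")
    frac = satisfied / total
    if frac == 0:
        return "not met"          # no evidence for any task
    if frac == 1:
        return "fully met"        # complete evidence for all tasks
    if frac >= 0.75:
        return "substantially met"  # a large portion of tasks
    if frac >= 0.4:
        return "partially met"      # about half of tasks
    return "minimally met"          # a small portion of tasks
```

For example, an estimate satisfying 6 of 12 tasks would score "partially met" under these assumed cutoffs.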
Each characteristic is associated with a specific set of leading practices, which, in turn, are made up of a number of specific tasks. We assessed ITS's guidance against each of the four characteristics. In assessing the reliability of the schedules developed by the selected investments, we only assessed practices associated with the well-constructed characteristic. We did so because none of the schedules substantially addressed the practices associated with this characteristic, and because this characteristic relates to the foundational practices for a high-quality, reliable schedule. We assessed ITS's policies and procedures, as well as the investment schedules, using the same methodology (i.e., fully met, substantially met, partially met, minimally met, and not met) as previously described for our assessment of ITS's cost estimating policies and procedures. We shared our analysis with Library officials to review, comment, and provide additional information, and we adjusted our analysis where appropriate. To address our fourth objective, we reviewed relevant information security and privacy laws and guidance, including National Institute of Standards and Technology (NIST) standards and guidance, to identify federal security and privacy control guidelines. We then reviewed the Library's security and privacy policies and procedures to determine their consistency with these guidelines. Additionally, we selected nine Library systems as case studies to determine the extent to which NIST guidelines and Library policy were being implemented. We chose these systems by following these six steps: First, using lists of systems developed by the Chief Information Security Officer (CISO), the Copyright Office, and Library Services as the basis for our selected systems, we separated the systems into eight groups—each of the seven service units, as well as NLS. 
With two exceptions—Law Library and OSI (both of which are discussed later in this section)—we only selected one system from each group. Second, in order to narrow the list of systems, we excluded those with a "low" Federal Information Processing Standards (FIPS) 199 impact level. Because the Law Library only had one system, which was labeled as having a "low" FIPS 199 impact level, we did not select any systems from this service unit. Third, we selected the Library's "tier 0" systems—that is, general support systems used to support critical IT systems that need to be restored before any other systems in the event of a disaster. The three tier 0 systems are the ITS Application Hosting Environment, ITS Library of Congress Data Network, and ITS Library of Congress Office Automation System. Fourth, we identified the Library's other general support systems and selected the Congressional Research Service's (CRS) general support system that also processes personally identifiable information—the Enterprise Infrastructure General Support System—as well as the Office of Security and Emergency Preparedness (OSEP) general support system that is classified as having a "high" FIPS 199 impact level—the OSEP Physical Security Network. Fifth, for groups without an associated system, we identified the Library's "tier 1" systems (i.e., systems that are to be restored within 24 hours in the event of a disaster). We identified four systems: Copyright's eCO system, the Office of the Librarian's Momentum system, Library Services' Federal Library and Information Network (FEDLINK) Customer Account Management System, and Library Services' System Management Information network (SYMIN) II. From these, we selected Copyright's eCO system and the Office of the Librarian's Momentum system. For Library Services, we randomly selected SYMIN II from the two systems. 
Finally, because NLS did not have any general support systems or tier 1 systems, we identified NLS systems with a moderate FIPS 199 impact level and randomly selected the NLS Production Information Control System/NLS Integrated Operations Support System (PICS/NIOSS). In summary, this selection process resulted in the following nine systems: ITS Application Hosting Environment, ITS Library of Congress Data Network, ITS Library of Congress Office Automation System, CRS Enterprise Infrastructure General Support System, OSEP Physical Security Network, eCO, Momentum, SYMIN II, and PICS/NIOSS. Using NIST guidelines for an effective agency-wide information security program, we evaluated the Library's information security program in the following areas: Incident handling: We compared the Library's incident handling procedures to NIST guidance on the key steps that agencies should take when responding to incidents. To determine the effectiveness of the Library's response to incidents, we selected 22 incidents to review as case studies. To choose the incidents, we obtained a list of all incidents reported between October 1, 2013, and September 2, 2014. In order to narrow the list of incidents, we removed (1) incidents for which the Library determined that the incident did not require investigation or was a false positive and (2) incidents with a status of open or canceled. We then separated the remaining incidents into eight groups—each of the categories that the Library uses to classify incidents. With the exception of one category—recon activity—we randomly selected 3 incidents from each category. For the recon category, we selected its 1 incident for our review. For these selected incidents, we reviewed documents from the Library's incident tracking system to determine the extent to which the Library had performed analysis, containment, eradication, recovery, reporting, and post-incident procedures in accordance with NIST guidance. 
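The stratified incident draw described above—three incidents per category, plus the single recon incident—could be sketched as follows. The incident records and all category labels other than "recon activity" are invented for illustration:

```python
import random

# Illustrative closed incidents: each category has 4 records except
# "recon activity", which has only 1 (as in the Library's data).
categories = ["malware", "phishing", "policy violation", "denial of service",
              "unauthorized access", "improper usage", "scans/probes",
              "recon activity"]
incidents = []
for cat in categories:
    n = 1 if cat == "recon activity" else 4
    incidents += [(f"{cat}-{i}", cat) for i in range(n)]

rng = random.Random(0)  # fixed seed so the draw is reproducible
sample = []
for cat in categories:
    pool = [rec for rec in incidents if rec[1] == cat]
    # Draw 3 at random, or the whole pool if it holds fewer than 3.
    sample.extend(rng.sample(pool, min(3, len(pool))))
```

With eight categories, seven contributing 3 incidents and recon contributing its 1, the draw yields the 22 case-study incidents described above.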
To verify the reliability of the data in the agency's incident handling system, we examined them for obvious outliers, omissions, and errors. We determined that these data were sufficient for our purpose, which was to select incidents to use as case studies and determine the extent to which the Library handled those incidents consistent with NIST guidance. Inventory of systems: We assessed the Library's policy for its system inventory against relevant NIST guidelines. To determine the comprehensiveness and accuracy of the Library's system inventory, we compared the inventory provided to us by the Library's CISO with a separate list provided by Library Services. We also asked the CISO and officials from each service unit to verify the accuracy and completeness of these lists. Although we determined the inventory was not complete and accurate, we believe that the system lists collectively, with lists of tier 0, tier 1, and general support systems, are sufficiently reliable for our purpose—that is, to select systems as case studies for our review. System security plans: We compared Library policy on system security plans with relevant NIST guidance. We also assessed system security plans for the nine selected systems against the NIST guidelines. Security test and evaluation: We assessed Library policy on security testing against relevant NIST guidelines. We also compared testing documentation for the nine selected systems against the NIST guidance and Library policy. Remedial action plans: We compared Library policy on plans of action and milestones (POA&M) with relevant NIST guidance. Additionally, we reviewed POA&Ms for the nine selected systems and, for eight of the systems, identified the number of POA&M items that were delayed, as of December 2014. Regarding the OSEP Physical Security Network, OSEP had not reported any updates to its POA&M items since September 2013; we identified the number of items that were open as of that date, when the items were originally reported. 
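The POA&M tally described above reduces to a status filter over the remediation items. A minimal sketch, with invented item records:

```python
# Sketch of the POA&M status tally: count remediation items whose status
# is "delayed" in a system's plan of action and milestones.
# The item records below are invented for illustration.
poam_items = [
    {"id": 1, "weakness": "untested control", "status": "delayed"},
    {"id": 2, "weakness": "missing patch", "status": "completed"},
    {"id": 3, "weakness": "stale account", "status": "delayed"},
    {"id": 4, "weakness": "weak password rule", "status": "ongoing"},
]

delayed = sum(1 for item in poam_items if item["status"] == "delayed")
```

The same count is run per system; for the OSEP Physical Security Network, the filter would instead use the "open" status as of September 2013, as noted above.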
To verify the reliability of the agency's POA&M data, we examined them for obvious outliers and errors. Excluding the data for the OSEP Physical Security Network, which had not been updated since September 2013, we determined that the POA&M data were sufficient for our purpose, which was to identify the number of items with a status of "delayed." Authorization to operate: We assessed Library policy on authorization to operate against relevant NIST guidelines. We also assessed the extent to which the Library completed authorizations to operate for the nine selected systems. In instances where the authorizations had not been completed, we interviewed Library officials responsible for the systems and, where relevant, reviewed documentation in which the Library, for a defined period of time, waived the requirement to authorize the system to operate. Contingency planning: We compared Library policy on contingency planning with relevant NIST guidance. In addition, we determined the extent to which the Library developed contingency plans for the nine selected systems, as called for by NIST guidance and Library policy. Security and privacy awareness training: We assessed Library policy on security and privacy training against relevant NIST guidance. Additionally, we obtained the lists of users identified in three systems: ITS Library of Congress Office Automation System, CRS Enterprise Infrastructure General Support System, and OSEP Physical Security Network. We did so because these were the three systems in our sample for which the Library maintains instances of the Library's primary service for authenticating and authorizing users. We then compared these lists with the list of users the Library reported as having completed the security and privacy awareness training in fiscal year 2014. We shared our analysis with Library officials to review, comment, and provide additional information. 
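The training-completion comparison above is, in effect, a set difference: users present in a system's authentication directory who do not appear on the completed-training list. A minimal sketch, with invented user names:

```python
# Users drawn from a system's authentication directory (invented names).
system_users = {"asmith", "bjones", "cdoe", "dlee"}

# Users reported as having completed security and privacy awareness
# training in the fiscal year (also invented).
trained_users = {"asmith", "cdoe", "dlee", "efox"}

# System users with no record of completing the training.
untrained = system_users - trained_users
```

Running this per system surfaces the users who accessed a system without a training record, which is the gap the comparison was designed to detect.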
Contract requirements for information security: We compared Library policy on contract requirements for information security with relevant NIST guidance. In addition, we determined the extent to which the contracts supporting the nine selected systems included the contract requirements called for by Library policy and NIST guidance. To evaluate the Library's controls over its information systems, we used our Federal Information System Controls Audit Manual, which contains guidance for reviewing information system controls that affect the confidentiality, integrity, and availability of computerized information. We also used NIST standards and guidelines, as well as the Library's policies, procedures, practices, and standards. Specifically, we reviewed controls in the following areas: Authorization: For all users with elevated privileges to the nine selected systems, we reviewed the extent to which those users had been authorized to use the system with those elevated permissions, consistent with NIST guidance. Identification and authentication: With respect to the selected systems, we assessed controls used to authenticate and authorize users against NIST guidance. Cryptography: We observed configurations for providing secure data transmissions across the network to determine whether sensitive data were being encrypted consistent with NIST guidance. Background investigations: We identified all users with elevated privileges to the nine selected systems and then asked the Library's personnel security officer whether the Library had performed a background investigation for each, consistent with NIST guidance and Library policy. 
Physical security and environmental safety: We identified four Library facilities in the United States that include an IT data center: (1) the James Madison Building on Capitol Hill; (2) the NLS facility in northwest Washington, D.C.; (3) the Packard Campus of the National Audio-Visual Conservation Center in Culpeper, Virginia; and (4) the Library's alternate computing facility in Manassas, Virginia. We visited each of these facilities and assessed the physical security and environmental controls supporting their data centers against relevant NIST guidance. Additionally, because the Library did not have an accurate inventory of its non-capitalized IT assets, we also visited the Library's warehouse in Landover, Maryland, and assessed the physical security and environmental controls supporting this facility. To address our fifth objective, we evaluated ITS's service management documentation against leading industry practices for managing IT services identified in the Information Technology Infrastructure Library. We evaluated the service management practices of ITS, which functions as the Library's central IT organization and is the primary service provider to each service unit throughout the Library. The service management practices were developing a service catalog; defining how service-level agreements (SLA) should be structured so that IT services and customers are covered in a manner best suited to the organization's needs; and establishing SLAs consistent with that structure that describe the IT services, specify roles and responsibilities of both parties, and document service level targets. Specifically, we reviewed ITS's service catalog and SLAs between ITS and its customers. We also conducted interviews with officials responsible for managing ITS's services, including the Director of ITS and the ITS Assistant Director for Operations. Lou Hunnebeck and Colin Rudd, ITIL: Service Design (London: The Stationery Office, 2011). 
The guide is available at: http://www.axelos.com/Publications-Library/IT-Service-Management-ITIL/. We also assessed the Library's service improvement activities against a key practice of the Software Engineering Institute's IDEAL℠ model—namely, establishing a written plan that serves as the basis for guiding its improvement activities. Because the Library did not have comprehensive metrics on customer satisfaction with ITS's services, we conducted a web-based survey of ITS customers. We designed a draft questionnaire in close collaboration with our survey specialist. We also conducted pretests with four officials: one official representing the largest service unit (Library Services), one official representing the smallest service unit (Law Library), the Director of ITS, and the former acting CIO. From these pretests, we made revisions as necessary to reduce the likelihood of overall and item non-response, as well as reporting errors on our questions. We sent the survey via e-mail to the head of each service unit, as well as the head of NLS, on October 15, 2014. Log-in information was e-mailed to all contacts. We e-mailed those who had not completed the questionnaire at multiple points during the data collection period, and we closed the survey on November 3, 2014. We received a completed questionnaire from each service unit and NLS. Because we surveyed all of the unit heads and therefore did not conduct any sampling for our survey, our data are not subject to sampling errors. However, the practical difficulties of conducting any survey may introduce non-sampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, or the types of people who do not respond to a question can introduce errors into the survey results. We included steps in both the data collection and data analysis stages to minimize such non-sampling errors. Our analysts answered respondent questions and resolved difficulties that respondents had in completing our survey. 
Although the survey responses cannot be used to generalize the opinions and satisfaction of all customers that receive services from ITS, the responses provide data for our defined population. The final questionnaire asked the heads of the service units and NLS to identify the extent to which they are satisfied or dissatisfied with the services provided by ITS. To determine the extent to which ITS is providing satisfactory IT services to its customers, we described the results on a 5-point satisfaction scale, where 5 is "very satisfied" and 1 is "very dissatisfied." To obtain additional narrative and supporting context from stakeholders, survey respondents were given multiple opportunities to provide additional open-ended comments throughout our survey. Using these open-ended responses, we conducted a content analysis in order to identify common factors. We then totaled the number of times each factor was mentioned by a respondent, choosing to report on the factors that were identified by two or more respondents. Further, in order to determine the extent to which service units performed duplicative or overlapping IT activities, we sent a structured questionnaire to each service unit, as well as NLS. We asked each respondent to identify the extent to which they (1) purchased commodity IT in the past 3 years; (2) performed significant IT activities, as defined by the Information Technology Infrastructure Library; and (3) performed IT service desk functions. We also reviewed network diagrams and system security plans for the nine systems we selected as part of our fourth objective. In addition, we reviewed portions of the Library's hardware and software inventories to determine if it had made duplicative IT investments in selected areas: Monitors: Because the Library did not have an accurate inventory of its non-capitalized IT assets, we visited the Library's warehouse in Landover, Maryland, and reviewed the facility's physical and environmental controls. 
At that facility, we observed that ITS had approximately 400 19-inch monitors purchased in 2008 and about 100 24-inch monitors that were purchased in 2010. Although these monitors were several years old, according to ITS officials, they had never been used and were still in their original packaging. In order to determine whether service units purchased duplicative monitors, we asked each service unit other than OSI to provide an inventory of its monitors. We received inventories from NLS, the Law Library, and Library Services. We then identified 19-inch and 24-inch monitors in these inventories (1) that were of a different model number than those purchased by ITS and (2) for which their respective manuals were copyrighted later than 2008 for the 19-inch monitors and later than 2010 for the 24-inch monitors. Software licenses: We identified software applications that were purchased by more than one service unit and determined the extent to which the Library purchased too many or too few licenses. To select the applications, we used the software inventories developed by ITS and CRS with automated tools for deploying software to workstations that they manage. First, for the ITS inventory, we identified applications that were deployed to two or more service units. Second, in order to ensure that we only selected software purchased by the Library, we removed any applications that were published by the Library itself. Third, using an open source search, we removed applications that can be legally obtained for free. Fourth, we removed applications that were being used for entities outside of the Library. Fifth, we eliminated purchases made by the Library's Inspector General. Based on these steps, we identified 24 applications. We then compared these 24 applications to CRS's application inventory to determine whether CRS purchased any of these applications. 
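The five filtering steps above can be sketched as a chain of predicates over application records. The records and field names below are invented placeholders, not the Library's actual inventory schema:

```python
# Illustrative application records mirroring the five filtering steps.
apps = [
    {"name": "DiagramTool", "units": {"CRS", "OSI"}, "publisher": "VendorCo",
     "free": False, "external": False, "oig_purchase": False},
    {"name": "LibApp", "units": {"OSI"}, "publisher": "Library of Congress",
     "free": False, "external": False, "oig_purchase": False},
    {"name": "FreeReader", "units": {"CRS", "OSO"}, "publisher": "VendorCo",
     "free": True, "external": False, "oig_purchase": False},
]

selected = [a for a in apps
            if len(a["units"]) >= 2                      # step 1: deployed to 2+ service units
            and a["publisher"] != "Library of Congress"  # step 2: not published by the Library
            and not a["free"]                            # step 3: not legally free
            and not a["external"]                        # step 4: not used by outside entities
            and not a["oig_purchase"]]                   # step 5: not an Inspector General purchase
```

Applied to the real inventories, this filtering chain is what narrowed the field to the 24 applications compared against CRS's inventory.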
For these 24 applications, we obtained the relevant license agreements, and compared the number of licenses purchased to the number of licenses deployed throughout the Library. We chose to report on applications where the Library purchased at least 100 more licenses than it had deployed. To verify the reliability of the data on the number of deployed licenses, we examined them for obvious outliers, omissions, and errors, and interviewed officials familiar with the data to gain an understanding of the controls used to create and maintain the data. We determined that these data were sufficient for our purpose, which was to describe the number of Microsoft Visio 2010 Professional licenses the Library deployed. We discussed the duplicative IT activities and investments with officials responsible for managing IT in CRS, the Copyright Office, Library Services, and OSEP. To address our sixth objective, we evaluated the Library's IT policies and the position description of the Library's CIO against key practices we identified based on our research on and experience with federal agencies. These practices related to the following areas: Commodity IT: The CIO should have the responsibility and authority, including budgetary and spending control, for commodity IT. Mission-specific systems: The CIO should have the ability to adequately oversee mission-specific systems to ensure that funds being spent on component agency investments will fulfill mission needs. Relationships between CIO and components: The responsibilities and authorities governing the relationship between the CIO and component organizations should be defined. We also compared the tenure of the Library's recent CIOs to results from our research, which found that CIOs and former agency IT executives believed it was necessary for a CIO to stay in office for 3 to 5 years to be effective and 5 to 7 years to fully implement major change initiatives in large public-sector organizations. 
Further, we interviewed the Chief of Staff, former Deputy Librarian, and Librarian of Congress to obtain information about the Library’s plans for hiring a full-time CIO. We conducted this performance audit from April 2014 to March 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, individuals making contributions to this report included Lon Chin, Nick Marinos, and Christopher Warweg (assistant directors), Kaelin Kuhn (analyst-in-charge), Sher’rie Bacon, Chris Businsky, Sa’ar Dagani, Neil Doherty, Torrey Hardee, Thomas Johnson, Abishek Krupanand, Jennifer Leotta, Lee McCracken, David Plocher, Antonio Ramirez, Meredith Raymond, Karen Richey, Kelly Rubin, Kate Sharkey, Andrew Stavisky, Tina Torabi, Kevin Walsh, Shawn Ward, and Charles Youman.

The Library of Congress is the world's largest library, whose mission is to make its resources available and useful to Congress and the American public. In carrying out its mission, the Library increasingly relies on IT systems, particularly in light of the ways that digital technology has changed the way information is created, shared, and preserved. The House Appropriations Committee report accompanying the 2015 legislative branch appropriations bill required GAO to conduct a review of IT management at the Library. GAO's objectives focused on the extent to which the Library has established and implemented key IT practices and requirements in, among other areas: (1) strategic planning, (2) governance and investment management, (3) information security and privacy, (4) service management, and (5) leadership.
To carry out its work, GAO reviewed Library regulations, policies, procedures, plans, and other relevant documentation for each area and interviewed key Library officials. The Library of Congress has established policies and procedures for managing its information technology (IT) resources, but significant weaknesses across several areas have hindered their effectiveness: Strategic planning: The Library does not have an IT strategic plan that is aligned with the overall agency strategic plan and establishes goals, measures, and strategies. This leaves the Library without a clear direction for its use of IT. Investment management: Although the Library obligated at least $119 million on IT for fiscal year 2014, it is not effectively managing its investments. To its credit, the Library has established structures for managing IT investments—including a review board and a process for selecting investments. However, the board does not review all key investments, and its roles and responsibilities are not always clearly defined. Additionally, the Library does not have a complete process for tracking its IT spending or an accurate inventory of its assets. For example, while the inventory identifies over 18,000 computers currently in use, officials stated that the Library has fewer than 6,500. Until the Library addresses these weaknesses, its ability to make informed decisions will be impaired. Information security and privacy: The Library assigned roles and responsibilities and developed policies and procedures for securing its information and systems. However, its implementation of key security and privacy management controls was uneven. For example, the Library's system inventory did not include all key systems. Additionally, the Library did not always fully define and test security controls for its systems, remediate weaknesses in a timely manner, and assess the risks to the privacy of personal information in its systems. 
Such deficiencies also contributed to weaknesses in technical security controls, putting the Library's systems and information at risk of compromise. Service management: The Library's Information Technology Services (ITS) division is primarily responsible for providing IT services to the agency's operating units. While ITS has catalogued these services, it has not fully developed agreements with the other units specifying expected levels of performance. Further, the other units were often not satisfied with these services, which has contributed to them independently pursuing their own IT activities. This in turn has resulted in units purchasing unnecessary hardware and software, maintaining separate e-mail environments, and managing overlapping or duplicative IT activities. Leadership: The Library does not have the leadership needed to address these IT management weaknesses. For example, the agency's chief information officer (CIO) position does not have adequate authority over or oversight of the Library's IT. Additionally, the Library has not had a permanent CIO since 2012 and has had five temporary CIOs in the interim. In January 2015, at the conclusion of GAO's review, officials stated that the Library plans to draft an IT strategic plan within 90 days and hire a permanent CIO. If it follows through on these plans, the Library will be in a stronger position to address its IT management weaknesses and more effectively support its mission. GAO is recommending that the Library expeditiously hire a permanent CIO. GAO is also making 30 other recommendations to the Library aimed at establishing and implementing key IT management practices. The Library generally agreed with GAO's recommendations and described planned and ongoing actions to address them.
Dramatic increases in computer interconnectivity, especially in the use of the Internet, are revolutionizing the way our government, our nation, and much of the world communicate and conduct business. The benefits have been enormous. Vast amounts of information are now literally at our fingertips, facilitating research on virtually every topic imaginable; financial and other business transactions can be executed almost instantaneously, often on a 24-hour-a-day basis; and electronic mail, Internet web sites, and computer bulletin boards allow us to communicate quickly and easily with a virtually unlimited number of individuals and groups. In addition to such benefits, however, this widespread interconnectivity poses significant risks to our computer systems and, more important, to the critical operations and infrastructures they support. For example, telecommunications, power distribution, water supply, public health services, national defense—including the military’s warfighting capability—law enforcement, government services, and emergency services all depend on the security of their computer operations. The speed and accessibility that create the enormous benefits of the computer age likewise, if not properly controlled, allow individuals and organizations to inexpensively eavesdrop on or interfere with these operations from remote locations for mischievous or malicious purposes, including fraud or sabotage. Reports of attacks and disruptions abound. The March 2001 report of the “Computer Crime and Security Survey,” conducted by the Computer Security Institute and the Federal Bureau of Investigation’s San Francisco Computer Intrusion Squad, showed that 85 percent of respondents (primarily large corporations and government agencies) had detected computer security breaches within the last 12 months.
Disruptions caused by virus attacks, such as the ILOVEYOU virus in May 2000 and 1999’s Melissa virus, have illustrated the potential for damage that such attacks hold. A sampling of reports summarized in Daily Reports by the FBI’s National Infrastructure Protection Center during two recent weeks in March illustrates the problem further: a hacker group by the name of “PoizonB0x” defaced numerous government web sites. Government officials are increasingly concerned about attacks from individuals and groups with malicious intent, such as crime, terrorism, foreign intelligence gathering, and acts of war. According to the FBI, terrorists, transnational criminals, and intelligence services are quickly becoming aware of and using information exploitation tools such as computer viruses, Trojan horses, worms, logic bombs, and eavesdropping sniffers that can destroy, intercept, or degrade the integrity of and deny access to data. As greater amounts of money are transferred through computer systems, as more sensitive economic and commercial information is exchanged electronically, and as the nation’s defense and intelligence communities increasingly rely on commercially available information technology, the likelihood that information attacks will threaten vital national interests increases. In addition, the disgruntled organization insider is a significant threat, since such individuals often have knowledge that allows them to gain unrestricted access and inflict damage or steal assets without a great deal of knowledge about computer intrusions. Since 1996, our analyses of information security at major federal agencies have shown that federal systems were not being adequately protected from these threats, even though these systems process, store, and transmit enormous amounts of sensitive data and are indispensable to many federal agency operations.
In September 1996, we reported that serious weaknesses had been found at 10 of the 15 largest federal agencies, and we concluded that poor information security was a widespread federal problem with potentially devastating consequences. In 1998 and in 2000, we analyzed audit results for 24 of the largest federal agencies: both analyses found that all 24 agencies had significant information security weaknesses. As a result of these analyses, we have identified information security as a high-risk issue in reports to the Congress since 1997—most recently in January 2001. Evaluations published since July 1999 show that federal computer systems are riddled with weaknesses that continue to put critical operations and assets at risk. Significant weaknesses have been identified in each of the 24 agencies covered by our review. These weaknesses covered all six major areas of general controls—the policies, procedures, and technical controls that apply to all or a large segment of an entity’s information systems and help ensure their proper operation. These six areas are (1) security program management, which provides the framework for ensuring that risks are understood and that effective controls are selected and implemented, (2) access controls, which ensure that only authorized individuals can read, alter, or delete data, (3) software development and change controls, which ensure that only authorized software programs are implemented, (4) segregation of duties, which reduces the risk that one individual can independently perform inappropriate actions without detection, (5) operating systems controls, which protect sensitive programs that support multiple applications from tampering and misuse, and (6) service continuity, which ensures that computer-dependent operations experience no significant disruptions. Weaknesses in these areas placed a broad range of critical operations and assets at risk for fraud, misuse, and disruption. 
In addition, they placed an enormous amount of highly sensitive data—much of it pertaining to individual taxpayers and beneficiaries—at risk of inappropriate disclosure. The scope of audit work performed has continued to expand to more fully cover all six major areas of general controls at each agency. Not surprisingly, this has led to the identification of additional areas of weakness at some agencies. While these increases in reported weaknesses are disturbing, they do not necessarily mean that information security at federal agencies is getting worse. They more likely indicate that information security weaknesses are becoming more fully understood—an important step toward addressing the overall problem. Nevertheless, our analysis leaves no doubt that serious, pervasive weaknesses persist. As auditors increase their proficiency and the body of audit evidence expands, it is probable that additional significant deficiencies will be identified. Most of the audits covered in our analysis were performed as part of financial statement audits. At some agencies with primarily financial missions, such as the Department of the Treasury and the Social Security Administration, these audits covered the bulk of mission-related operations. However, at agencies whose missions are primarily nonfinancial, such as the Departments of Defense and Justice, the audits may provide a less complete picture of the agency’s overall security posture because the audit objectives focused on the financial statements and did not include evaluations of systems supporting nonfinancial operations. In response to congressional interest, during fiscal years 1999 and 2000, we expanded our audit focus to cover a wider range of nonfinancial operations. We expect this trend to continue. To fully understand the significance of the weaknesses we identified, it is necessary to link them to the risks they present to federal operations and assets. 
Virtually all federal operations are supported by automated systems and electronic data, and agencies would find it difficult, if not impossible, to carry out their missions and account for their resources without these information assets. Hence, the degree of risk caused by security weaknesses is extremely high. The weaknesses identified place a broad array of federal operations and assets at risk of fraud, misuse, and disruption. For example, weaknesses at the Department of the Treasury increase the risk of fraud associated with billions of dollars of federal payments and collections, and weaknesses at the Department of Defense increase the vulnerability of various military operations. Further, information security weaknesses place enormous amounts of confidential data, ranging from personal and tax data to proprietary business information, at risk of inappropriate disclosure. For example, in 1999, a Social Security Administration employee pled guilty to unauthorized access to the administration’s systems. The related investigation determined that the employee had made many unauthorized queries, including obtaining earnings information for members of the local business community. Such risks, if inadequately addressed, may limit government’s ability to take advantage of new technology and improve federal services through electronic means. For example, this past February, we reported on serious control weaknesses in the Internal Revenue Service’s (IRS) electronic filing system, noting that failure to maintain adequate security could erode public confidence in electronic filing, jeopardize the Service’s ability to meet its goal of 80 percent of returns being filed electronically by 2007, and deprive it of financial and other anticipated benefits. Specifically, we found that, during the 2000 tax filing season, IRS did not adequately secure access to its electronic filing systems or to the electronically transmitted tax return data those systems contained. 
We demonstrated that unauthorized individuals, both internal and external to IRS, could have gained access to these systems and viewed, copied, modified, or deleted taxpayer data. In addition, the weaknesses we identified jeopardized the security of the sensitive business, financial, and taxpayer data on other critical IRS systems that were connected to the electronic filing systems. The IRS Commissioner has stated that, in response to recommendations we made, IRS has completed corrective action for all of the critical access control vulnerabilities we identified and that, as a result, the electronic filing systems now satisfactorily meet critical federal security requirements to protect the taxpayer. As part of our audit follow-up activities, we plan to evaluate the effectiveness of IRS’s corrective actions. I would now like to describe the risks associated with specific recent audit findings at agencies of particular interest to this subcommittee. Information technology is essential to the Department of Energy’s (DOE) scientific research mission, which is supported by a large and diverse set of computing systems, including very powerful supercomputers located at DOE laboratories across the nation. In June 2000, we reported that computer systems at DOE laboratories supporting civilian research had become a popular target of the hacker community, with the result that the threat of attacks had grown dramatically in recent years. Further, because of security breaches, several laboratories had been forced to temporarily disconnect their networks from the Internet, disrupting the laboratories’ ability to do scientific research for up to a full week on at least two occasions. In February 2001, the DOE’s Inspector General reported network vulnerabilities and access control weaknesses in unclassified systems that increased the risk that malicious destruction or alteration of data or the processing of unauthorized operations could occur.
In February, the Department of Health and Human Services’ Inspector General again reported serious control weaknesses affecting the integrity, confidentiality, and availability of data maintained by the department. Most significant were weaknesses associated with the department’s Health Care Financing Administration (HCFA), which was responsible, during fiscal year 2000, for processing more than $200 billion in Medicare expenditures. HCFA relies on extensive data processing operations at its central office to maintain administrative data, such as Medicare enrollment, eligibility, and paid claims data, and to process all payments for managed care. HCFA also relies on Medicare contractors, who use multiple shared systems to collect and process personal health, financial, and medical data associated with Medicare claims. Significant weaknesses were also reported for the Food and Drug Administration and the department’s Division of Financial Operations. The Environmental Protection Agency (EPA) relies on its computer systems to collect and maintain a wealth of environmental data under various statutory and regulatory requirements. EPA makes much of its information available to the public through Internet access in order to encourage public awareness of and participation in managing human health and environmental risks and to meet statutory requirements. EPA also maintains confidential data from private businesses, data of varying sensitivity on human health and environmental risks, financial and contract data, and personal information on its employees. Consequently, EPA’s information security program must accommodate the often competing goals of making much of its environmental information widely accessible while maintaining data integrity, availability, and appropriate confidentiality. In July 2000, we reported serious and pervasive problems that essentially rendered EPA’s agencywide information security program ineffective.
Our tests of computer-based controls found that the computer operating systems and agencywide computer network that support most of EPA’s mission-related and financial operations were riddled with security weaknesses. In addition, EPA’s records showed that its vulnerabilities had been exploited by both external and internal sources, as illustrated by the following examples. In June 1998, EPA was notified that one of its computers was used by a remote intruder as a means of gaining unauthorized access to a state university’s computers. The problem report stated that vendor-supplied software updates were available to correct the vulnerability, but EPA had not installed them. In July 1999, a chat room was set up on a network server at one of EPA’s regional financial management centers for hackers to post notes and, in effect, conduct on-line electronic conversations. In February 1999, a sophisticated penetration affected three of EPA’s computers. EPA was unaware of this penetration until notified by the FBI. In June 1999, an intruder penetrated an Internet web server at EPA’s National Computer Center by exploiting a control weakness specifically identified by EPA about 3 years earlier during a previous penetration of a different system. The vulnerability continued to exist because EPA had not implemented vendor software updates (patches), some of which had been available since 1996. On two occasions during 1998, extraordinarily large volumes of network traffic—synonymous with a commonly used denial-of-service hacker technique—affected computers at one of EPA’s field offices. In one case, an Internet user significantly slowed EPA’s network activity and interrupted network service for over 450 EPA computer users. In a second case, an intruder used EPA computers to successfully launch a denial-of-service attack against an Internet service provider.
In September 1999, an individual gained access to an EPA computer and altered the computer’s access controls, thereby blocking authorized EPA employees from accessing files. This individual was no longer officially affiliated with EPA at the time of the intrusion, indicating a serious weakness in EPA’s process for applying changes in personnel status to computer accounts. Of particular concern was that many of the most serious weaknesses we identified—those related to inadequate protection from intrusions through the Internet and poor security planning—had been previously reported to EPA management in 1997 by EPA’s inspector general. The negative effects of such weaknesses are illustrated by EPA’s own records, which show several serious computer security incidents since early 1998 that have resulted in damage and disruption to agency operations. As a result of these weaknesses, EPA’s computer systems and the operations that rely on them were highly vulnerable to tampering, disruption, and misuse from both internal and external sources. EPA management has developed and begun to implement a detailed action plan to address reported weaknesses. However, the agency does not expect to complete these corrective actions until 2002 and continued to report a material weakness in this area in its fiscal year 2000 report on internal controls under the Federal Managers’ Financial Integrity Act of 1982. The Department of Commerce is responsible for systems that the department has designated as critical for national security, national economic security, and public health and safety. Its member bureaus include the National Oceanic and Atmospheric Administration, the Patent and Trademark Office, the Bureau of the Census, and the International Trade Administration. 
During December 2000 and January 2001, Commerce’s inspector general reported significant computer security weaknesses in several of the department’s bureaus and, last month, reported multiple material information security weaknesses affecting the department’s ability to produce accurate data for financial statements. These included a lack of formal, current security plans and weaknesses in controls over access to systems and over software development and changes. At the request of the full committee, we are currently evaluating information security controls at selected other Commerce bureaus. The nature of agency operations and their related risks vary. However, striking similarities remain in the specific types of general control weaknesses reported and in their serious negative impact on an agency’s ability to ensure the integrity, availability, and appropriate confidentiality of its computerized operations—and therefore on what corrective actions it must take. The sections that follow describe the six areas of general controls and the specific weaknesses that were most widespread at the agencies covered by our analysis. Each organization needs a set of management procedures and an organizational framework for identifying and assessing risks, deciding what policies and controls are needed, periodically evaluating the effectiveness of these policies and controls, and acting to address any identified weaknesses. These are the fundamental activities that allow an organization to manage its information security risks cost effectively, rather than react to individual problems in an ad hoc manner only after a violation has been detected or an audit finding reported. Despite the importance of this aspect of an information security program, poor security program management continues to be a widespread problem. Virtually all of the agencies for which this aspect of security was reviewed had deficiencies.
Specifically, many had not developed security plans for major systems based on risk, had not documented security policies, and had not implemented a program for testing and evaluating the effectiveness of the controls they relied on. As a result, agencies were not fully aware of the information security risks to their operations, had accepted an unknown level of risk by default rather than consciously deciding what level of risk was tolerable, had a false sense of security because they were relying on controls that were not effective, and could not make informed judgments as to whether they were spending too little or too much of their resources on security. With the October 2000 enactment of the government information security reform provisions of the fiscal year 2001 National Defense Authorization Act, agencies are now required by law to adopt the practices described above, including annual management evaluations of agency security. Access controls limit or detect inappropriate access to computer resources (data, equipment, and facilities), thereby protecting these resources against unauthorized modification, loss, and disclosure. Access controls include physical protections—such as gates and guards—as well as logical controls, which are controls built into software that require users to authenticate themselves through the use of secret passwords or other identifiers and limit the files and other resources that an authenticated user can access and the actions that he or she can execute. Without adequate access controls, unauthorized individuals, including outside intruders and terminated employees, can surreptitiously read and copy sensitive data and make undetected changes or deletions for malicious purposes or personal gain. Even authorized users can unintentionally modify or delete data or execute changes that are outside their span of authority. For access controls to be effective, they must be properly implemented and maintained. 
First, an organization must analyze the responsibilities of individual computer users to determine what type of access (e.g., read, modify, delete) they need to fulfill their responsibilities. Then, specific control techniques, such as specialized access control software, must be implemented to restrict access to these authorized functions. Such software can be used to limit a user’s activities associated with specific systems or files and to keep records of individual users’ actions on the computer. Finally, access authorizations and related controls must be maintained and adjusted on an ongoing basis to accommodate new and terminated employees and changes in users’ responsibilities and related access needs. Significant access control weaknesses were reported for all of the agencies covered by our analysis, as evidenced by the following examples: Accounts and passwords for individuals no longer associated with the agency were not deleted or disabled, nor were they adjusted for users whose responsibilities, and thus access needs, had changed. As a result, at one agency, former employees and contractors could, and in many cases did, still read, modify, copy, or delete data. At this same agency, even after 160 days of inactivity, 7,500 out of 30,000 users’ accounts had not been deactivated. Users were not required to periodically change their passwords. Managers did not precisely identify and document access needs for individual users or groups of users. Instead, they provided overly broad access privileges to very large groups of users. As a result, far more individuals than necessary had the ability to browse and, sometimes, modify or delete sensitive or critical information. At one agency, all 1,100 users were granted access to sensitive system directories and settings. At another agency, 20,000 users had been provided access to one system without written authorization.
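The account-maintenance failures described above (accounts left active after 160 days of inactivity, and accounts still enabled for separated users) lend themselves to a simple automated check. A minimal sketch; the data, field names, and dates are hypothetical:

```python
# Flag stale accounts and accounts for separated users (hypothetical data).
from datetime import date, timedelta

INACTIVITY_LIMIT = timedelta(days=160)   # threshold cited in the findings
today = date(2001, 4, 1)

accounts = [
    {"user": "alice", "last_login": date(2001, 3, 20), "separated": False},
    {"user": "bob",   "last_login": date(2000, 9, 1),  "separated": False},
    {"user": "carol", "last_login": date(2001, 1, 5),  "separated": True},
]

# Accounts with no activity beyond the limit: candidates for deactivation
stale = [a["user"] for a in accounts
         if today - a["last_login"] > INACTIVITY_LIMIT]

# Accounts belonging to users no longer with the agency: disable immediately
orphaned = [a["user"] for a in accounts if a["separated"]]

print(stale)     # ['bob']
print(orphaned)  # ['carol']
```

A production version would read from the actual account database and feed a periodic review, rather than hard-coded records.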
Use of default, easily guessed, and unencrypted passwords significantly increased the risk of unauthorized access. During testing at one agency, we were able to guess many passwords based on our knowledge of commonly used passwords and were able to observe computer users keying in passwords and then use those passwords to obtain “high level” system administration privileges. Software access controls were improperly implemented, resulting in unintended access or gaps in access-control coverage. At one agency data center, all users, including programmers and computer operators, had the capability to read sensitive production data, increasing the risk that such sensitive information could be disclosed to unauthorized individuals. Also at this agency, certain users had the unrestricted ability to transfer system files across the network, increasing the risk that unauthorized individuals could gain access to the sensitive data or programs. To illustrate the risks associated with poor authentication and access controls, in recent years we have begun to incorporate network vulnerability testing into our audits of information security. Such tests involve attempting—with agency cooperation—to gain unauthorized access to sensitive files and data by searching for ways to circumvent existing controls, often from remote locations. In almost every test, our auditors have succeeded in readily gaining unauthorized access that would allow intruders to read, modify, or delete data for whatever purpose they had in mind. Further, user activity was inadequately monitored. At one agency, much of the activity associated with our intrusion testing was not recognized and recorded, and the problem reports that were recorded did not recognize the magnitude of our activity or the severity of the security breaches we initiated. Application software development and change controls prevent unauthorized software programs or modifications to programs from being implemented.
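The password-guessing weakness described in the access-control findings above can be illustrated with a screen for easily guessed passwords. A hypothetical sketch only; a real system would store salted hashes rather than plaintext, and the wordlist and accounts below are invented:

```python
# Screen accounts for easily guessed passwords (hypothetical wordlist/accounts).
COMMON_PASSWORDS = {"password", "admin", "letmein", "welcome", "12345678"}

def is_weak(username: str, password: str) -> bool:
    """Flag passwords that a guessing attack would likely find."""
    p = password.lower()
    return (
        p in COMMON_PASSWORDS        # appears on a common-password list
        or p == username.lower()     # same as the account name (a default)
        or len(p) < 8                # too short to resist guessing
    )

accounts = {"jsmith": "letmein", "ops": "Zk8#qv!t94L", "root": "root"}
weak = sorted(u for u, pw in accounts.items() if is_weak(u, pw))
print(weak)  # ['jsmith', 'root']
```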
Key aspects of such controls are ensuring that (1) software changes are properly authorized by the managers responsible for the agency program or operations that the application supports, (2) new and modified software programs are tested and approved prior to their implementation, and (3) approved software programs are maintained in carefully controlled libraries to protect them from unauthorized changes and to ensure that different versions are not misidentified. Such controls can prevent both errors in software programming as well as malicious efforts to insert unauthorized computer program code. Without adequate controls, incompletely tested or unapproved software can result in erroneous data processing that, depending on the application, could lead to losses or faulty outcomes. In addition, individuals could surreptitiously modify software programs to include processing steps or features that could later be exploited for personal gain or sabotage. Weaknesses in software program change controls were identified for almost all of the agencies where such controls were evaluated. Examples of weaknesses in this area included the following: Testing procedures were undisciplined and did not ensure that implemented software operated as intended. For example, at one agency, senior officials authorized some systems for processing without testing access controls to ensure that they had been implemented and were operating effectively. At another, documentation was not retained to demonstrate user testing and acceptance. Implementation procedures did not ensure that only authorized software was used. In particular, procedures did not ensure that emergency changes were subsequently tested and formally approved for continued use and that implementation of “locally developed” (unauthorized) software programs was prevented or detected. Agencies’ policies and procedures frequently did not address the maintenance and protection of program libraries. 
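The three aspects of change control listed above (management authorization, testing and approval before implementation, and a controlled program library) can be enforced as a simple release gate. A sketch with hypothetical record structures and version names:

```python
# Release gate enforcing authorization, test, and approval before a change
# enters the controlled library (all records hypothetical).
controlled_library = {"billing-2.3", "claims-1.7"}  # currently approved versions

changes = [
    {"id": "billing-2.4", "authorized": True,  "tested": True,  "approved": True},
    {"id": "report-1.0",  "authorized": False, "tested": True,  "approved": True},
    {"id": "claims-1.8",  "authorized": True,  "tested": False, "approved": False},
]

def release(change):
    """Admit a change into the controlled library only if every
    control step (authorization, testing, approval) is satisfied."""
    if not (change["authorized"] and change["tested"] and change["approved"]):
        return False
    controlled_library.add(change["id"])
    return True

results = {c["id"]: release(c) for c in changes}
print(results)
# {'billing-2.4': True, 'report-1.0': False, 'claims-1.8': False}
```

The same gate would also cover emergency changes: they bypass nothing permanently, since a change rejected here never reaches the library and so is flagged for after-the-fact testing and approval.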
Segregation of duties refers to the policies, procedures, and organizational structure that help ensure that one individual cannot independently control all key aspects of a process or computer-related operation and thereby conduct unauthorized actions or gain unauthorized access to assets or records without detection. For example, one computer programmer should not be allowed to independently write, test, and approve program changes. Although segregation of duties alone will not ensure that only authorized activities occur, inadequate segregation of duties increases the risk that erroneous or fraudulent transactions could be processed, improper program changes implemented, and computer resources damaged or destroyed. For example, an individual who was independently responsible for authorizing, processing, and reviewing payroll transactions could inappropriately increase payments to selected individuals without detection; or a computer programmer responsible for authorizing, writing, testing, and distributing program modifications could either inadvertently or deliberately implement computer programs that did not process transactions in accordance with management’s policies or that included malicious code. Controls to ensure appropriate segregation of duties consist mainly of documenting, communicating, and enforcing policies on group and individual responsibilities. Enforcement can be accomplished by a combination of physical and logical access controls and by effective supervisory review. Segregation of duties weaknesses were identified at most of the agencies covered by our analysis. Common problems involved computer programmers and operators who were authorized to perform a variety of duties, thus providing them the ability to independently modify, circumvent, and disable system security features. For example, at one data center, a single individual could independently develop, test, review, and approve software changes for implementation. 
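Auditors can screen for such role combinations mechanically by comparing the responsibilities recorded on each transaction. A minimal sketch, assuming purchase records carry the illustrative fields `requested_by`, `approved_by`, and `received_by` (real procurement systems will differ):

```python
def find_sod_violations(purchases):
    """Flag purchases where one person requested, approved, and
    recorded receipt -- a classic segregation-of-duties violation.

    Each purchase is a dict with 'id', 'amount', 'requested_by',
    'approved_by', and 'received_by' keys (illustrative schema).
    """
    return [p for p in purchases
            if p["requested_by"] == p["approved_by"] == p["received_by"]]


purchases = [
    {"id": 1, "amount": 5000, "requested_by": "jdoe",
     "approved_by": "asmith", "received_by": "jdoe"},
    {"id": 2, "amount": 12000, "requested_by": "jdoe",
     "approved_by": "jdoe", "received_by": "jdoe"},
]
flagged = find_sod_violations(purchases)
total_at_risk = sum(p["amount"] for p in flagged)
```

A screen of this kind, run against a year of transactions, is how findings such as the procurement example in the next paragraph are typically quantified.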
Segregation of duties problems were also identified related to transaction processing. For example, at one agency, 11 staff members involved with procurement had system access privileges that allowed them to individually request, approve, and record the receipt of purchased items. In addition, 9 of the 11 had system access privileges that allowed them to edit the vendor file, which could result in fictitious vendors being added to the file for fraudulent purposes. For fiscal year 1999, we identified 60 purchases, totaling about $300,000, that were requested, approved, and receipt-recorded by the same individual. Operating system software controls limit and monitor access to the powerful programs and sensitive files associated with the computer systems operation. Generally, one set of system software is used to support and control a variety of applications that may run on the same computer hardware. System software helps control and coordinate the input, processing, output, and data storage associated with all of the applications that run on the system. Some system software can change data and program code on files without leaving an audit trail or can be used to modify or delete audit trails. Examples of system software include the operating system, system utilities, program library systems, file maintenance software, security software, data communications systems, and database management systems. Controls over access to and modification of system software are essential in providing reasonable assurance that operating system-based security controls are not compromised and that the system will not be impaired. If controls in this area are inadequate, unauthorized individuals might use system software to circumvent security controls to read, modify, or delete critical or sensitive information and programs. 
Also, authorized users of the system may gain unauthorized privileges to conduct unauthorized actions or to circumvent edits and other controls built into application programs. Such weaknesses seriously diminish the reliability of information produced by all of the applications supported by the computer system and increase the risk of fraud, sabotage, and inappropriate disclosure. Further, system software programmers are often more technically proficient than other data processing personnel and, thus, have a greater ability to perform unauthorized actions if controls in this area are weak. The control concerns for system software are similar to the access control issues and software program change control issues discussed earlier. However, because of the high level of risk associated with system software activities, most entities have a separate set of control procedures that apply to them. Weaknesses were identified at each of the agencies for which operating system controls were reviewed. A common type of problem reported was insufficiently restricted access that made it possible for knowledgeable individuals to disable or circumvent controls in a variety of ways. For example, at one agency, system support personnel had the ability to change data in the system audit log. As a result, they could have engaged in a wide array of inappropriate and unauthorized activity and could have subsequently deleted related segments of the audit log, thus diminishing the likelihood that their actions would be detected. Further, pervasive vulnerabilities in network configuration exposed agency systems to attack. These vulnerabilities stemmed from agencies’ failure to (1) install and maintain effective perimeter security, such as firewalls and screening routers, (2) implement current software patches, and (3) protect against commonly known methods of attack. 
Finally, service continuity controls ensure that when unexpected events occur, critical operations will continue without undue interruption and that crucial, sensitive data are protected. For this reason, an agency should have (1) procedures in place to protect information resources and minimize the risk of unplanned interruptions and (2) a plan to recover critical operations, should interruptions occur. These plans should consider the activities performed at general support facilities, such as data processing centers, as well as the activities performed by users of specific applications. To determine whether recovery plans will work as intended, they should be tested periodically in disaster simulation exercises. Losing the capability to process, retrieve, and protect information maintained electronically can significantly affect an agency’s ability to accomplish its mission. If controls are inadequate, even relatively minor interruptions can result in lost or incorrectly processed data, which can cause financial losses, expensive recovery efforts, and inaccurate or incomplete financial or management information. Controls to ensure service continuity should address the entire range of potential disruptions. These may include relatively minor interruptions, such as temporary power failures or accidental loss or erasure of files, as well as major disasters, such as fires or natural disasters that would require reestablishing operations at a remote location. Service continuity controls include (1) taking steps, such as routinely making backup copies of files, to prevent and minimize potential damage and interruption, (2) developing and documenting a comprehensive contingency plan, and (3) periodically testing the contingency plan and adjusting it as appropriate. Service continuity control weaknesses were reported for most of the agencies covered by our analysis. 
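The first of these controls—routinely making backup copies of files—can itself be monitored. Below is a minimal sketch that flags stale files in a backup directory; the flat directory layout and one-day default threshold are illustrative assumptions:

```python
import time
from pathlib import Path


def stale_backups(backup_dir, max_age_days=1, now=None):
    """Return names of backup files older than the allowed age,
    a sign that routine backups may have stopped occurring."""
    now = time.time() if now is None else now
    cutoff = now - max_age_days * 86400
    return sorted(p.name for p in Path(backup_dir).iterdir()
                  if p.is_file() and p.stat().st_mtime < cutoff)
```

Such a check addresses only the prevention step; the contingency plan and its periodic testing still require the organizational measures described above.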
Examples of weaknesses included the following: Plans were incomplete because operations and supporting resources had not been fully analyzed to determine which were the most critical and would need to be resumed as soon as possible should a disruption occur. Disaster recovery plans were not fully tested to identify their weaknesses. At one agency, periodic walkthroughs or unannounced tests of the disaster recovery plan had not been performed. Conducting these types of tests more closely simulates the conditions an agency would face in an actual disaster. The audit reports cited in this statement and in our prior information security reports include many recommendations to individual agencies that address specific weaknesses in the areas I have just described. It is each individual agency’s responsibility to ensure that these recommendations are implemented. Agencies have taken steps to address problems and many have good remedial efforts underway. However, these efforts will not be fully effective and lasting unless they are supported by a strong agencywide security management framework. Establishing such a management framework requires that agencies take a comprehensive approach that involves both (1) senior agency program managers who understand which aspects of their missions are the most critical and sensitive and (2) technical experts who know the agencies’ systems and can suggest appropriate technical security control techniques. We studied the practices of organizations with superior security programs and summarized our findings in a May 1998 executive guide entitled Information Security Management: Learning From Leading Organizations (GAO/AIMD-98-68). 
Our study found that these organizations managed their information security risks through a cycle of risk management activities that included assessing risks and determining protection needs, selecting and implementing cost-effective policies and controls to meet these needs, promoting awareness of policies and controls and of the risks that prompted their adoption among those responsible for complying with them, and implementing a program of routine tests and examinations for evaluating the effectiveness of policies and related controls and reporting the resulting conclusions to those who can take appropriate corrective action. In addition, a strong, centralized focal point can help ensure that the major elements of the risk management cycle are carried out and serve as a communications link among organizational units. Such coordination is especially important in today’s highly networked computing environments. This cycle of risk management activities is depicted below. This cycle of activity, as described in our May 1998 executive guide, is consistent with guidance on information security program management provided to agencies by the Office of Management and Budget (OMB) and by NIST. In addition, the guide has been endorsed by the federal Chief Information Officers (CIO) Council as a useful resource for agency managers. We believe that implementing such a cycle of activity is the key to ensuring that information security risks are adequately considered and addressed on an ongoing basis. While instituting this framework is essential, there are several steps that agencies can take immediately. Specifically, they can (1) increase awareness, (2) ensure that existing controls are operating effectively, (3) ensure that software patches are up-to-date, (4) use automated scanning and testing tools to quickly identify problems, (5) propagate their best practices, and (6) ensure that their most common vulnerabilities are addressed. 
None of these actions alone will ensure good security. However, they take advantage of readily available information and tools and, thus, do not involve significant new resources. As a result, they are steps that can be taken without delay. Due to concerns about the repeated reports of computer security weaknesses at federal agencies, in 2000, the Congress passed government information security reform provisions that require agencies to implement the activities I have just described. These provisions were enacted in late 2000 as part of the fiscal year 2001 National Defense Authorization Act. In addition to requiring these management improvements, the new provisions require annual evaluations of agency information security programs by both management and agency inspectors general. The results of these reviews, which are initially scheduled to become available in late 2001, will provide a more complete picture of the status of federal information security than currently exists, thereby providing the Congress and OMB an improved means of overseeing agency progress and identifying areas needing improvement. During the last two years, a number of improvement efforts have been initiated. Several agencies have taken significant steps to redesign and strengthen their information security programs; the Federal Chief Information Officers Council has issued a guide for measuring agency progress, which we assisted in developing; and the President issued a National Plan for Information Systems Protection and designated the related goals of computer security and critical infrastructure protection as a priority management objective in his fiscal year 2001 budget. These actions are laudable. However, recent reports and events indicate that they are not keeping pace with the growing threats and that critical operations and assets continue to be highly vulnerable to computer-based attacks. 
While OMB, the Chief Information Officers Council, and the various federal entities involved in critical infrastructure protection have expanded their efforts, it will be important to maintain the momentum. As we have noted in previous reports and testimonies, there are actions that can be taken on a governmentwide basis to enhance agencies’ abilities to implement effective information security. First, it is important that the federal strategy delineate the roles and responsibilities of the numerous entities involved in federal information security and related aspects of critical infrastructure protection. Under current law, OMB is responsible for overseeing and coordinating federal agency security; and NIST, with assistance from the National Security Agency (NSA), is responsible for establishing related standards. In addition, interagency bodies, such as the CIO Council and the entities created under Presidential Decision Directive 63 on critical infrastructure protection are attempting to coordinate agency initiatives. While these organizations have developed fundamentally sound policies and guidance and have undertaken potentially useful initiatives, effective improvements are not taking place, and it is unclear how the activities of these many organizations interrelate, who should be held accountable for their success or failure, and whether they will effectively and efficiently support national goals. Second, more specific guidance to agencies on the controls that they need to implement could help ensure adequate protection. Currently agencies have wide discretion in deciding what computer security controls to implement and the level of rigor with which they enforce these controls. In theory, this is appropriate since, as OMB and NIST guidance states, the level of protection that agencies provide should be commensurate with the risk to agency operations and assets. 
In essence, one set of specific controls will not be appropriate for all types of systems and data. However, our studies of best practices at leading organizations have shown that more specific guidance is important. In particular, specific mandatory standards for varying risk levels can clarify expectations for information protection, including audit criteria; provide a standard framework for assessing information security risk; and help ensure that shared data are appropriately protected. Implementing such standards for federal agencies would require developing a single set of information classification categories for use by all agencies to define the criticality and sensitivity of the various types of information they maintain. It would also necessitate establishing minimum mandatory requirements for protecting information in each classification category. Third, routine periodic audits, such as those required in the government information security reforms recently enacted, would allow for more meaningful performance measurement. Ensuring effective implementation of agency information security and critical infrastructure protection plans will require monitoring to determine if milestones are being met and testing to determine if policies and controls are operating as intended.
Payers are required to submit 1099-MISCs for a variety of payments made in the course of a trade or business. For 1099-MISC reporting, a trade or business generally includes businesses, non-profit organizations, and federal, state, and local government agencies. The types of payments reportable on a 1099-MISC and their reporting thresholds vary widely. These include payments to nonemployees for services of at least $600 (called nonemployee compensation), royalty payments of $10 or more, and medical and health care payments made to physicians or other suppliers (including payments by insurers) of $600 or more. Personal payments, such as a payment by a homeowner to a contractor to paint his/her personal residence, are not reportable on a 1099-MISC. Other payments that are not reportable on a 1099-MISC generally include payments to a corporation, payments for merchandise, and wages paid to employees. Wages paid to employees must be reported on a form W-2. There are many other types of payments that must be reported and numerous exceptions to these general rules. IRS provides eight pages of instructions detailing what payments to whom are reportable on the 1099-MISC. The form—shown in figure 1—consists of 14 boxes for reporting the various types of miscellaneous payments. The Bush Administration’s fiscal years 2008 and 2009 budgets proposed legislative action further expanding 1099-MISC reporting to include service payments of $600 or more to corporations by all third-party payers. According to the Department of the Treasury’s estimate, the Bush Administration’s fiscal year 2009 budget proposal would generate about $8.2 billion over the 10-year budget period from 2009 through 2018, in part because of increased voluntary compliance and IRS’s ability to detect underreported payments received by businesses. For tax year 2006, more than 5 million payers submitted more than 82 million 1099-MISCs to IRS, reporting over $6 trillion in payments. 
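The reporting thresholds described above vary by payment type, so a basic reportability check branches on both the type of payment and the payee. Below is a minimal sketch covering just three of the many payment types, using only the thresholds quoted above; the full rules run to eight pages of instructions, so this is illustrative rather than a faithful encoding:

```python
# Reporting thresholds from the text: $600 for nonemployee
# compensation and medical/health care payments, $10 for royalties.
THRESHOLDS = {
    "nonemployee_compensation": 600,
    "medical_payments": 600,
    "royalties": 10,
}


def reportable_on_1099misc(payment_type, amount, payee_is_corporation=False):
    """Very simplified check of whether a trade-or-business payment
    must be reported on a 1099-MISC; ignores the many exceptions."""
    if payment_type not in THRESHOLDS:
        return False  # e.g., merchandise, wages (reported on a W-2)
    if payee_is_corporation:
        # Generally not reportable; the actual rules include
        # exceptions not modeled here.
        return False
    return amount >= THRESHOLDS[payment_type]
```

Under this sketch, a $600 payment to an unincorporated contractor is reportable, while the same payment to a corporation is not—which is the gap the fiscal years 2008 and 2009 budget proposals sought to close.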
Nonemployee compensation payments totaled about $2.3 trillion and accounted for about 55 percent of all 1099-MISCs submitted. Medical and health payments totaled about $1.2 trillion and accounted for about 25 percent of all 1099-MISCs submitted. Each of the remaining payment types accounted for less than 10 percent of the number of 1099-MISCs submitted. Figure 2 shows the distribution of 1099-MISC payment types and amounts reported for tax year 2006. In addition to the 8 pages of instructions for 1099-MISC reporting, IRS also has 19 pages of general instructions for third-party information reporting, detailing how and when payers are to submit 1099-MISCs to payees and IRS. Payers must provide 1099-MISC statements to payees by the end of January. Payers submitting fewer than 250 1099-MISCs may submit paper forms, which are due to IRS by the end of February, along with a Form 1096, Annual Summary and Transmittal of U.S. Information Returns. Payers submitting paper 1099-MISCs are required to use IRS’s official forms or substitute forms with special red ink readable by IRS’s scanning equipment. Photocopies and copies of the 1099-MISC form downloaded from the internet or generated from software packages in black ink do not conform to IRS processing specifications. Payers submitting 250 or more 1099-MISCs are required to submit the forms magnetically or electronically. Electronic submissions, which are due at the end of March, can be made through IRS’s Filing Information Returns Electronically (FIRE) system. As shown in figure 3, most 1099-MISCs for tax year 2006 were submitted electronically. However, most payers submit only a small number of 1099-MISCs, and most submitted them on paper. IRS’s four business operating divisions are generally responsible for ensuring payers comply with their 1099-MISC reporting requirements. 
The Wage & Investment Division, Tax Exempt and Government Entities Division (TE/GE), Large and Mid-Size Business Division (LMSB), and Small Business and Self-Employed Division (SB/SE), as a part of their duties, conduct examinations of tax returns and documents to verify compliance with tax laws. Examinations can include checking payer compliance with 1099-MISC reporting requirements. IRS can penalize payers for failing to submit or for submitting an inaccurate 1099-MISC. The penalty is generally $50 per information return, increasing to $100 each for intentional noncompliance with 1099-MISC requirements. To encourage voluntary reporting compliance, the penalty is $15 (up to a maximum of $75,000 per calendar year, or $25,000 for small businesses) if the 1099-MISC is submitted within 30 days of the due date; $30 (up to a maximum of $150,000 per calendar year, or $50,000 for small businesses) if submitted after 30 days but by August 1; and $50 (up to a maximum of $250,000 per calendar year, or $100,000 for small businesses) if submitted after August 1 or not at all. IRS will waive the penalty if the payer can show “reasonable cause” or if the error or omission does not prevent or hinder IRS from processing the 1099-MISC. In IRS’s fiscal year 2009 budget proposal, the Bush Administration proposed increasing the $50 and $100 penalties to $100 and $250, respectively, for each information return. In 2007, we suggested that Congress consider requiring IRS to periodically adjust the fixed dollar amounts of civil tax penalties for inflation, rounding appropriately, to account for the decrease in their real value over time and to keep penalties for the same infraction consistent in real terms. Payees are responsible for reporting payments they received from payers on the appropriate lines of their tax returns. Payees are also responsible for paying self-employment taxes if they received nonemployee compensation. 
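Returning to the tiered penalty schedule above: it reduces to a per-return rate and an annual cap, which the following minimal sketch captures. The tier names are ours, and the amounts are the pre-2009-proposal figures quoted above:

```python
# (per-return penalty, annual cap, annual cap for small businesses)
PENALTY_TIERS = {
    "within_30_days": (15, 75_000, 25_000),
    "by_august_1":    (30, 150_000, 50_000),
    "after_august_1": (50, 250_000, 100_000),
}


def information_return_penalty(tier, num_returns, small_business=False):
    """Total penalty for late or unfiled 1099-MISCs under the tiered
    schedule, limited by the applicable calendar-year cap."""
    per_return, cap, small_cap = PENALTY_TIERS[tier]
    return min(per_return * num_returns,
               small_cap if small_business else cap)
```

So a payer submitting 100 returns within 30 days of the due date owes $1,500, while a large payer that never files 10,000 returns hits the $250,000 annual cap rather than the uncapped $500,000; the “reasonable cause” waiver noted above is not modeled.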
For example, a sole proprietor receiving a 1099-MISC for nonemployee compensation is to report the payments on Schedule C of the 1040 tax return and file Schedule SE to pay the associated self-employment taxes. Sole proprietor payees are supposed to report 1099-MISC payments as gross receipts and separately report their expenses rather than reporting only net amounts. Figure 4 shows the automated process IRS uses to detect mismatches between nonemployee compensation and other payments reported on 1099-MISCs and payees’ income tax returns. The Nonfiler program handles cases where no income tax return was filed by a 1099-MISC payee. The Automated Underreporter (AUR) program handles cases where a payee filed a tax return but underreported 1099-MISC payments. AUR’s case inventory includes payee mismatches over a certain threshold, and IRS has a methodology using historical data to select cases for review. AUR reviewers manually screen the selected cases to determine whether the discrepancy can be resolved without taxpayer contact. For the remaining cases selected, IRS sends notices asking the payee to explain discrepancies or pay any additional taxes assessed. According to IRS, third-party information reporting increases voluntary tax compliance in part because taxpayers know that IRS is aware of their income. For wages and salaries subject to tax withholding and substantial third-party information reporting, the percentage of income that taxpayers misreport has consistently been measured at around 1 percent. In contrast, for non-farm sole proprietor income subject to little or no third-party reporting, taxpayers misreported more than half of such income in 2001, according to IRS’s most recent tax gap estimates. IRS does not have an estimate of 1099-MISC reporting compliance or know the characteristics of those payers that fail to comply with the reporting requirements. 
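The document-matching process shown in figure 4 is, at its core, an aggregate-and-compare over payee taxpayer identification numbers. Below is a minimal sketch of that logic with illustrative data structures; IRS's actual Nonfiler and AUR systems are far more elaborate, including the case selection and manual screening steps described above:

```python
from collections import defaultdict


def flag_underreporters(info_returns, tax_returns, threshold=0):
    """Match aggregate 1099-MISC payments against income each payee
    reported, mirroring the Nonfiler and AUR case streams.

    info_returns: iterable of (payee_tin, amount) from 1099-MISCs
    tax_returns:  dict of payee_tin -> income reported on the return
    Returns (nonfilers, underreporters), where underreporters are
    (tin, unreported_amount) pairs exceeding the threshold.
    """
    totals = defaultdict(float)
    for tin, amount in info_returns:
        totals[tin] += amount
    nonfilers, underreporters = [], []
    for tin, paid in totals.items():
        if tin not in tax_returns:
            nonfilers.append(tin)
        elif paid - tax_returns[tin] > threshold:
            underreporters.append((tin, paid - tax_returns[tin]))
    return nonfilers, underreporters
```

This also makes the dependence on payer compliance concrete: a payment missing from `info_returns` because the payer never submitted a 1099-MISC can never be flagged, which is the gap the following paragraphs discuss.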
Without an estimate of payers’ 1099-MISC noncompliance, IRS does not know to what extent such noncompliance allows payees to underreport their income without being detected. If a large number of payers fail to submit required 1099-MISCs, then the resulting decrease in payee tax compliance and lost revenue could be large. According to IRS, it is a common misconception among payees that they are not required to report payments if they have not received a 1099-MISC from payers. IRS has invested significant resources in measuring compliance with other aspects of the tax laws. For example, the National Research Program (NRP) estimated the compliance rate for individual taxpayers for tax year 2001 based on an intensive review of a sample of 46,000 tax returns. IRS is in the process of completing a new study of the rate of tax compliance by individual taxpayers for tax years 2006 and 2007, and is conducting a similar study of S-corporations. IRS uses such research to understand where compliance problems are greatest and to understand the sources of noncompliance. Armed with such understanding, IRS can make better decisions about where and how to deploy its resources to address noncompliance. Our analysis of 1099-MISC submission patterns by small businesses, as well as past studies of federal, state, and local government agencies, suggests that payer noncompliance with 1099-MISC reporting requirements may be significant. Our analysis of IRS’s PMF data for tax year 2005 (the last complete year available) showed that, in aggregate, 8 percent of small businesses (sole proprietorships, and corporations and partnerships with assets under $10 million) submitted a 1099-MISC. As shown in figure 5, over 4 million small businesses submitted 1099-MISCs in tax year 2005; in comparison, 50 million small businesses filed income tax returns with IRS that same year. Results were similar for the three previous years. 
The fact that a relatively low percentage of small businesses submitted a 1099-MISC does not indicate on its own that there is a significant payer noncompliance problem. The many exceptions to the general rules for submitting 1099-MISCs, along with the payment thresholds, mean that many small businesses may not be required to submit a 1099-MISC. However, if even a small proportion of the almost 46 million small businesses that did not submit 1099-MISCs in 2005 improperly failed to report as required, there could be millions of missing 1099-MISC information reports. As a consequence, payees could have less incentive to voluntarily report that 1099-MISC income on their own tax returns if they did not receive a 1099-MISC from the payer, and IRS would be unable to detect payee underreporting through document matching. Yet, IRS has no idea of the magnitude of payer noncompliance, and thus the number of missing 1099-MISCs, as discussed above. As a proxy for a possible 1099-MISC reporting requirement, we examined the 1099-MISC submission rate for Schedule C small businesses that reported amounts of $600 or more in contract labor expenses. Based on IRS’s Statistics of Income (SOI) data for tax year 2006, about 29 percent of Schedule C filers reporting contract labor expenses of $600 or more submitted 1099-MISCs. Again, we could not determine whether the other 71 percent of Schedule C filers reporting contract labor expenses over the 1099-MISC reporting threshold were noncompliant. Some payers may have paid amounts under the reporting threshold to multiple payees, and other payers may have paid corporate payees currently exempt from 1099-MISC reporting. However, some payers may have failed to submit 1099-MISCs as required, and IRS does not have data to estimate how often this occurs. Our 2003 assessment of federal agency compliance with 1099-MISC reporting requirements did find significant payer noncompliance. 
While most federal agencies in the 14 departments we studied submitted information returns as required for calendar years 2000 and 2001, there were some significant exceptions. Three federal departments—Agriculture, Commerce, and Justice—collectively made $5 billion in payments to 152,000 payees in 2000 and 2001 but did not report the payments to IRS on Form 1099-MISC. In turn, about 8,800 of the payees, who collectively received payments totaling about $421 million—an average of about $48,000 each—did not file income tax returns for those 2 years. In June 2007, TIGTA reported that trends in 1099-MISC reporting by state and local governments demonstrated potential payer noncompliance for tax years 2003 through 2005. For example, while TIGTA found that over half of the 81,000 state and local government entities submitted 1099-MISC forms for each of the 3 years, 30 percent did not submit any 1099-MISC forms over the period. As of December 2008, IRS had research planned for fiscal year 2009 to determine whether state and local governments that did not submit any 1099-MISCs for tax years 2003 through 2005 were noncompliant and why they did not report. Research about the extent and causes of payer noncompliance could help IRS develop more effective strategies to increase 1099-MISC submissions. Such research would involve costs, but there are options for mitigating them. IRS might be able to build on current research efforts, such as the NRP, or use existing data from the Payer Master File. Because NRP is already collecting detailed information about small business compliance with the rules for reporting receipts and expenses, the design of the NRP could be tailored at a relatively low cost to assess the extent to which these small businesses submitted 1099-MISCs as required. In the 2006 and 2007 NRP, IRS is studying whether payers that submitted 1099-MISCs correctly classified the payees as nonemployees. 
By misclassifying employees as nonemployees, employers could avoid withholding taxes as well as paying employment taxes. The 2006 NRP used a supplemental questionnaire to collect information on 1099-MISC payer reporting for use in assessing misclassification issues. The 2007 NRP procedures will direct NRP examiners to more systematically capture data about whether small business payers in the NRP sample were required to submit information returns—including 1099-MISCs—but did not. At this time, IRS has not studied the extent to which payers failed to submit 1099-MISCs, but the future NRP results could be useful for this purpose. Another option IRS could explore at a relatively low cost is using PMF data to identify businesses that stop reporting or never report 1099-MISCs. The PMF data set was used by TIGTA and us in detecting whether federal, state, and local governments reported 1099-MISCs. In turn, IRS used PMF data to select a sample of state and local governments that did not submit any 1099-MISCs for a compliance research project planned for fiscal year 2009. To explore small business payer noncompliance, IRS could use the PMF data to identify those businesses that submitted 1099-MISCs in past years but stopped reporting. IRS also could compare the PMF population to its master file to identify small businesses that never submitted 1099-MISCs, and subsequently conduct research to audit a sample of the population to determine whether those not submitting 1099-MISCs should have. From this type of research, IRS could decide how to target particular segments of the payer population (e.g., Schedule C filers that report contract labor expenses of $600 or more but do not submit 1099-MISCs) for more education and outreach in order to reduce payer noncompliance. Benefits of 1099-MISC payer compliance research could be significant. 
For perspective, payers reported $6 trillion in 1099-MISC payments for tax year 2006, so a one percent increase in reported payments could result in an additional $60 billion reported to payees and IRS. Within IRS, TE/GE is active in detecting and pursuing 1099-MISC payer noncompliance among federal, state, and local agencies in part so that government contractors and vendors cannot evade their tax liabilities. A TE/GE official said that the division’s focus is on employer quarterly returns that governmental entities file, as opposed to annual income tax returns that the other business divisions examine. According to IRS officials, as a part of this resource-intensive focus on quarterly returns, TE/GE examiners also scrutinize information returns, including 1099-MISCs, that governmental entities file. To illustrate the impact that TE/GE’s efforts had on detecting noncompliance among government payers, IRS audit specialists identified and secured over 30,000 additional 1099-MISCs totaling over $522 million in payments that governmental entities made to payees for tax years 2005 through 2007. According to IRS officials, TE/GE implemented a “stop filer” program in response to our 2003 recommendation that IRS develop a mechanism for identifying and tracking federal agencies that fail to submit Form 1099-MISCs. This automated stop filer notice program, a minimal-cost approach compared to other enforcement options for detecting possible payer noncompliance, was developed for federal agencies that submitted 1099-MISCs one year but not the next. Using PMF data, TE/GE issues IRS form 3939 to a federal agency that stops submitting 1099-MISCs, asking the agency to provide an explanation. TE/GE officials said they sent over 1,100 stop filer notices for tax year 2006 to federal entities in August 2008. 
The officials also said the federal agencies’ responses to these notices are useful to TE/GE in selecting federal agencies for voluntary compliance checks or examinations. Servicewide, IRS also uses PMF data to identify payers that submit 1099-MISCs late or with missing tax identification numbers (TINs). However, IRS does not systematically use the database to identify payers with gaps in 1099-MISC reporting history or those that never submit the 1099-MISC forms. At the time of our review, TE/GE officials stated that numerous discrepancies exist in the PMF coding for state and local governments, and the division is working to correct them. For fiscal year 2009, TE/GE plans a 1099-MISC compliance check for a random sample of 200 state and local governments that did not submit 1099-MISCs for tax years 2003 through 2005. Once this activity is complete, TE/GE will determine whether sending notices to state and local governmental agencies to check for payer noncompliance with 1099-MISC reporting requirements is an option it should pursue. Another low-cost approach that TE/GE officials used is to compile a listing of common 1099-MISC payer compliance problems drawn from information obtained through examinations and compliance checks. For example, government payers sometimes fail to report all payments, or submit 1099-MISCs late or with missing payee information. According to TE/GE officials we interviewed, their list of common 1099-MISC reporting errors is used both to educate examiners on what to look for during examinations and to reach out to government agencies to help them better comply. For business taxpayers, IRS policy instructs SB/SE and LMSB examiners to determine whether all information returns, including 1099-MISCs, are submitted as required and consider internal controls for information return reporting. LMSB examiners are to conduct risk analyses of tax returns to identify potential noncompliance issues as part of the audit planning process. 
As a result of the risk analysis and in contrast with SB/SE practices, LMSB examiners can waive the compliance checks for LMSB taxpayers to ensure efficient and effective use of resources. Where an examination identifies that a business failed to comply with its requirement to submit information returns, including Form 1099-MISC, the examiner is to secure the missing information returns. IRS could not provide data on how many business examinations detected payers that failed to submit 1099-MISCs or how many missing 1099-MISCs have been secured by LMSB and SB/SE examiners. While examinations primarily focus on income and employment tax liabilities for business payers, the examiner also is to consider whether to pursue information return penalties depending on the facts and circumstances of the case. Improved voluntary compliance by the payer and its payees may justify the expenditure of time required to track down the missing 1099-MISCs and assess the penalties. However, the maximum penalty is $50 for unintentional payer noncompliance and $100 for intentional noncompliance for each additional 1099-MISC collected. IRS has proposed increasing information return penalties to $100 and $250, respectively. Further, examinations have limitations in that they are costly and cover relatively few businesses each year, and thus would not necessarily be cost-effective as the sole means to address 1099-MISC payer reporting compliance. For example, IRS data show that IRS examinations covered less than 1 percent of the 2.2 million tax returns that small corporations filed for fiscal year 2007. As we previously reported, IRS examined about 3 percent of Schedule C returns in fiscal year 2006. Beyond conducting examinations, which we have described as having limitations, IRS does not have an agencywide approach in place to identify payers that do not submit 1099-MISC forms. 
More specifically, IRS does not have a stop filer notice program for businesses, as it does for federal agencies, to detect 1099-MISC reporting gaps. In contrast with the relatively small and stable populations of federal, state, and local government payers, the large and shifting population of small businesses might challenge IRS in designing a cost-effective method for isolating and contacting business payers that do not submit 1099-MISCs. Given that new businesses start each year while others stop operating or merge with other businesses, one approach would be to first check to see whether a business filed a tax return before sending any notice inquiring about 1099-MISC reporting. While notices are likely to provide a more cost-effective approach for pursuing possible payer noncompliance compared to examinations, it would be important for IRS to test a stop filer program to determine how to target notices to businesses. For Schedule C filers, for example, a notice program could target payers that reported large contract labor expenses but did not submit 1099-MISCs. Without testing the viability of a broader stop filer notice program, IRS could be overlooking a useful tool that would help increase 1099-MISC payer compliance. According to IRS officials, IRS advisory groups, and members of the 1099-MISC community we interviewed, a variety of impediments inhibit 1099-MISC reporting compliance. As a result, some payers report erroneous information or fail to submit all 1099-MISCs as required. Some payers that do not submit their 1099-MISCs as required may be unaware of their 1099-MISC reporting responsibilities. Other payers may be confused by various aspects of the 1099-MISC requirements. Finally, the inconvenience of submitting 1099-MISCs—whether on paper forms or electronically—may deter compliance. 
While the extent to which these impediments contribute to payer noncompliance is unknown, interviewees and others identified options for addressing them to promote voluntary compliance. Table 1 highlights options based on our analysis and includes options we previously reported. We note those options that were proposed by IRS, IRS advisory groups, and the National Taxpayer Advocate. Our list of 1099-MISC impediments and options is not exhaustive, nor is the list of pros and cons associated with the options. Improved IRS guidance and education are relatively low-cost options, but most taxpayers use either tax preparers or tax software to prepare their tax returns, and may not read IRS instructions and guidance. While taxpayer service options may improve compliance for those that are inadvertently noncompliant, they are not likely to affect those that are intentionally noncompliant. Some options to change 1099-MISC reporting requirements require legislative action, and other options would be costly for IRS to implement. Where the option involves particular issues, such as cost or taxpayer burden, we note them in our table. According to our interviewees, multiple approaches could help IRS to increase payer compliance with 1099-MISC reporting requirements. For some options, such as eliminating the exemption on reporting corporate payments, the evidence shows that the benefits outweigh the costs. For other options, it is not clear whether the benefits outweigh the associated costs. In those cases, additional research by IRS could help to evaluate the feasibility of more costly options, such as allowing black and white paper 1099-MISCs. Action to move forward on options to target outreach to specific payer groups or clarify guidance to reduce common reporting mistakes would hinge on IRS first conducting research to understand the magnitude of and reasons for payer noncompliance. 
Adopting the strategies discussed above to promote voluntary compliance with 1099-MISC reporting requirements and to better monitor payer noncompliance would likely increase the number of 1099-MISCs IRS receives from payers. This in turn would increase the number of automated mismatches identifying potential underreporting by payees. However, the AUR program does not pursue all the mismatches from the 1099-MISCs currently received. Given limited resources for the AUR program, it is important for IRS to find ways to more efficiently expand AUR coverage and select the best 1099-MISC related cases to work. While 1099-MISCs constituted 5 percent of all information returns AUR used to detect underreporting, a significant portion of the AUR cases and assessments were based on 1099-MISC information, as shown in table 2. For tax year 2004 (the last full year available), 1099-MISC related cases accounted for 19 percent of the cases selected for review and yielded 21 percent of the additional tax dollars assessed by the AUR program. From the 1.9 million 1099-MISC related cases with identified income discrepancies, AUR selected a larger proportion (47 percent) for review than from the AUR inventory as a whole (31 percent). Over three-quarters of all 1099-MISC related cases selected involved nonemployee compensation. The remaining 1099-MISC related cases involved other types of 1099-MISC payments, such as those for rent and medical services. For tax year 2004, 1099-MISC related cases in total yielded $972 million in additional assessments, accounting for 21 percent of AUR assessments. AUR currently has a limited reach, pursuing less than half of 1099-MISC related cases in its inventory and less than a third of the overall inventory for tax year 2004. Attempting to increase AUR program efficiency, IRS has pilot tested an automated “soft notice” program since 2005. The goal of this program pilot is to increase accurate reporting compliance with minimal additional expenditures for IRS. 
For this program pilot, AUR first selected cases from the inventory that involved relatively small amounts of money and thus would not have been selected for review, and then expanded the pilot to include cases with higher-dollar potential tax assessments. The 2,505 cases in the pilot test during fiscal years 2005 and 2006 included 550 nonemployee compensation cases based on 1099-MISC information. AUR sent letters to these taxpayers asking them to fix the identified discrepancy by filing an amended return or, if their reported income was correct, to contact the third party that provided the information to IRS. The soft notice is intended to educate and promote future compliance, requiring minimal response to the notice from taxpayers. Based on the pilot, IRS concluded that the soft notice approach increased taxpayer compliance without placing a heavier burden on AUR resources to respond to taxpayers’ queries. In total for the 2 pilot years, 25 percent of taxpayers receiving AUR soft notices filed amended tax returns, and 78 percent corrected their reporting behavior in the next year’s AUR inventory. Less than 13 percent called IRS to inquire about the soft notices. With phased rollout slated to begin in fiscal year 2009, AUR would be able to achieve greater coverage of the balance of cases in the inventory beyond those selected for AUR review. Accordingly, the soft notice approach may be an innovative, cost-effective way for IRS to have a greater enforcement presence using 1099-MISC information. However, it is too early to assess whether the effectiveness of the soft notice pilot can be generalized to the AUR program overall. For tax year 2007, IRS plans to send about 30,000 AUR soft notices across the range of AUR income categories—including about 4,400 1099-MISC related cases. 
As of October 2008, IRS plans to collect data on taxpayer responses and develop an analysis plan to determine which AUR case types are suited for future soft notices. Some cases AUR selected from its inventory are not productive; that is, some cases do not yield any additional tax revenue. About 36 percent of the 1099-MISC related cases selected for tax year 2004 were manually screened out by AUR reviewers without taxpayer contact. Such cases may be screened out because the payee erroneously reported a 1099-MISC payment on the wrong tax return line but paid the correct taxes, or because the discrepancy resulted from an IRS error in transcribing a paper 1099-MISC. Of the 64 percent of the tax year 2004 1099-MISC cases selected that involved taxpayer contact, about 22 percent yielded no change in tax assessments. These unproductive cases cost IRS time and money that could be spent pursuing other taxpayers who owe additional taxes and burden honest taxpayers who must respond to IRS inquiries. A case may result in no tax change if a taxpayer responds to the AUR notice with information explaining the discrepancy. For example, an unproductive 1099-MISC discrepancy may arise because the payer reported payments made to a partnership under an individual partner’s SSN rather than under the partnership TIN. The screen-out process and handling taxpayer contacts are labor-intensive for AUR compared with computer processing, so reducing the number of unproductive cases would free up resources to work more productive cases. IRS officials told us that in fiscal year 2007, IRS implemented a new case selection tool using historical data to target cases with high assessment potential and that this new methodology has yielded progress in terms of increased AUR assessments during fiscal year 2008. An additional approach would be to gain insight into the source of discrepancies between information reported by payers submitting 1099-MISCs and by payees filing tax returns. 
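Taken together, the screen-out and no-change rates above imply that only about half of the selected 1099-MISC cases produced an assessment. A quick check of that arithmetic, using the percentages from the text:

```python
screened_out = 0.36           # selected cases closed without taxpayer contact
contacted = 1 - screened_out  # 0.64 of selected cases involved taxpayer contact
no_change = 0.22              # of contacted cases, share closed with no tax change

# Share of all selected 1099-MISC cases that ended in a tax change.
productive_share = contacted * (1 - no_change)
print(f"{productive_share:.0%} of selected cases yielded a tax change")  # 50%
```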
An understanding of the specific causes could help IRS in evaluating its matching operations and refining the AUR case selection tool for 1099-MISC related cases. Additionally, the information would help IRS clarify guidance or target outreach to educate payers or payees to avoid common reporting errors. Currently, IRS does not systematically collect and analyze information on the causes of unproductive mismatches that would allow it to determine how best to reduce or eliminate such mismatches. The AUR management information system has codes showing whether the case was closed with or without a tax change, but does not specify how AUR accounted for the discrepancy. For example, no-change closing codes do not specify whether the discrepancy resulted from payer misreporting, payee misreporting, or IRS error in transcribing paper 1099-MISCs. Additionally, IRS does not routinely collect data on the screen-out process, so IRS does not have information on the nature and cause of recurring 1099-MISC discrepancies. According to AUR officials, the AUR program periodically does special analyses to identify how to reduce screen-out rates but has not specifically studied 1099-MISC related cases. Capturing information on specific reasons why cases were unproductive is one approach to improve AUR’s efficiency. The information would be useful to the AUR program in making informed decisions on how to improve the match process and refine its case selection methodology for 1099-MISC related cases. Moreover, IRS could draw on AUR data to identify common 1099-MISC reporting errors and determine how to target service activities to improve payer and payee reporting. For example, IRS could avoid some unproductive AUR cases by reminding taxpayers doing business as a corporation or partnership to provide their business TIN rather than their SSN to payers. 
Insight about productive AUR cases also could help IRS identify opportunities to educate taxpayers on how to avoid common mistakes and correctly report 1099-MISC payments. The 1099-MISC is a powerful tool through which IRS can encourage voluntary compliance by payees and detect underreported income of payees that do not voluntarily comply. However, IRS has limited knowledge about the extent of payer noncompliance with 1099-MISC reporting requirements. If 1099-MISC reporting compliance increased by even one percent, it could result in an additional $60 billion in payments reported. Without better information about the extent and causes of payer noncompliance, IRS has no way of determining if placing a heavier emphasis or shifting more resources toward addressing 1099-MISC payer noncompliance could lead to an increase in payee voluntary compliance and ultimately help reduce the tax gap. IRS could make better use of existing data to detect some kinds of payer noncompliance. Extending the stop filer notice program used for federal payers may be one tool for IRS to reach out to other government and business payers that stop submitting the forms. Developing an estimate of payer noncompliance and the characteristics of those payers would be key for IRS in developing a cost-effective strategy to identify payers that never submit 1099-MISCs. Another approach for increasing 1099-MISC reporting compliance is for IRS to address the variety of impediments facing payers preparing and submitting 1099-MISCs. Eliminating the reporting exemption for payments to corporations would ease the burden payers face in first determining the status of their payees to identify whether payments are reportable. As early as 1991, we determined that the benefits in terms of increased tax revenue and voluntary taxpayer compliance would exceed the costs of extending 1099-MISC reporting, although we did not formally recommend the matter for congressional consideration at that time. 
IRS agrees that the benefits of eliminating the corporate exemption outweigh the costs, and the Bush Administration has proposed legislative action in its last two budgets. Although it is unclear to what extent taxpayers read guidance on reporting requirements, especially taxpayers who use paid preparers, options for additional guidance and general reminders are a low-cost way to help payers understand whether they have a 1099-MISC reporting requirement. For other options where it is unclear whether the benefits outweigh the associated costs, additional research by IRS could help to evaluate whether specific options would be feasible or effective in increasing payer compliance. In turn, IRS research and other activities aimed at increasing payer reporting compliance would likely increase the number of 1099-MISC related AUR cases. Reducing the number of unproductive cases would free up resources for IRS to handle this increased workload and make better use of the 1099-MISC information it receives. To reduce the burden that the corporate exemption places on payers to distinguish payees’ business status and also provide greater information reporting, Congress should consider requiring payers to report payments to corporations on the Form 1099-MISC, as we previously suggested and as proposed in the Bush Administration’s budget. We are making eight recommendations to the Commissioner of Internal Revenue. To gauge the extent of 1099-MISC payer noncompliance and its contribution to the tax gap, we recommend that the Commissioner of Internal Revenue, as part of future research studies, develop an estimate of 1099-MISC payer noncompliance and determine the nature and characteristics of those payers that do not comply with 1099-MISC reporting requirements so that this information can be factored into an IRS-wide strategy for increasing 1099-MISC payer compliance. 
To increase IRS’s ability to detect 1099-MISC payer noncompliance, we recommend that the Commissioner of Internal Revenue test the option of developing a stop filer notice program to target business, state, and local entities that submitted 1099-MISCs one year but did not do so the next. To help payers better understand their 1099-MISC reporting responsibilities, we recommend that the Commissioner of Internal Revenue add a general reminder to Publication 535, Business Expenses, to highlight 1099-MISC reporting responsibilities; assess whether adding a checkbox to business tax returns inquiring whether all 1099-MISCs have been submitted would serve as a reminder to payers and help increase 1099-MISC payer compliance; and include a chart in the Form 1099-MISC instructions as well as business income tax instructions for distinguishing reportable from non-reportable payments and for calculating whether reportable payments reached the 1099-MISC reporting threshold. To reduce the submission burden facing many payers each submitting small numbers of 1099-MISCs, we recommend that the Commissioner collect data on the numbers of computer-generated black and white 1099-MISCs submitted by payers and the labor spent reentering forms that cannot be scanned, and evaluate the cost-effectiveness of eliminating or relaxing the red ink requirement. To help IRS improve its use of 1099-MISC information, we recommend that the Commissioner collect and analyze data on the types of unproductive AUR cases to help identify recurring errors for use in the AUR case selection process and for identifying ways to improve guidance and outreach to help payers and payees more accurately report 1099-MISC payments. 
In written comments on a draft of this report (which are reprinted in appendix II), IRS’s Deputy Commissioner for Services and Enforcement acknowledged that the evidence in our report indicates that the number of 1099-MISCs that payers are required to submit could be much higher than what IRS currently receives. IRS agreed with six of our eight recommendations. IRS staff provided technical comments that we incorporated as appropriate. IRS agreed to gather additional data from its ongoing and planned NRP studies to determine the extent of 1099-MISC noncompliance. IRS also agreed to determine the nature and characteristics of noncompliant 1099-MISC payers once several years of reporting compliance data are available. In addition, IRS agreed to (1) analyze PMF data and develop a 1099-MISC stop filer notice test; (2) evaluate the cost-effectiveness of eliminating or relaxing the red ink requirement; and (3) analyze data for a sample of AUR cases to identify opportunities to improve case selection and outreach and education for payers and payees. IRS disagreed with our recommendation to assess whether adding a checkbox to business tax returns would increase 1099-MISC reporting compliance. Instead, IRS agreed to enhance instructions about 1099-MISC reporting requirements to improve voluntary compliance by payers. We do not believe this is fully responsive to our recommendation. IRS stated that a similar question was removed from the corporate tax return after the Paperwork Reduction Act of 1980 was enacted. IRS said that the act requires reducing unnecessary burden on taxpayers and prohibits collecting information already available. We recognize that the Paperwork Reduction Act requires agencies to certify that any collection of information avoids unnecessary duplication and is necessary for the proper performance of the functions of the agency, including whether the information has practical utility. 
In recommending that IRS explore the potential for this option to increase 1099-MISC reporting, we believe information about the experience of California and other states using a similar checkbox query could yield insight into how this option might improve payers’ reporting compliance by reminding payers of their reporting obligations. As discussed in this report, many taxpayers rely on tax preparers and tax software and may not look at IRS guidance. For this reason, the checkbox option—which we clarified would require taxpayers to respond under penalty of perjury—might be more effective because it would force tax preparers and software to query taxpayers about their expenses. Further, results from the evaluation we recommend could be useful to IRS in revisiting its 1981 assessment and weighing the benefits and burdens associated with the checkbox option. We clarified in the report that the National Taxpayer Advocate has reported that the taxpayer burden associated with the checkbox option would be small. IRS also disagreed with our recommendation to include a chart in the 1099-MISC instructions and business income tax instructions. IRS stated that the Form 1099-MISC instructions already contain two bulleted lists describing which payments are reportable as well as explanations of the rules for specific payment types. However, these two lists as well as a third bulleted list describing reportable payments to corporations include 19 bullet points spanning two pages of the eight pages of single-spaced 1099-MISC instructions. We added an example to the report citing a chart in IRS’s 19-page general instructions that highlights what payments and amounts to report on the Form 1099-MISC. We believe that the chart approach is an effective way to provide taxpayers with a quick guide for navigating the detailed instructions for the Form 1099-MISC. For this reason, we continue to recommend adding a chart to the 1099-MISC instructions and business tax instructions. 
As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies to the Chairman and Ranking Member, House Committee on Ways and Means; the Secretary of the Treasury; the Commissioner of Internal Revenue; and other interested parties. This report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-9110 or whitej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. The objectives of this report were to determine: (1) what IRS knows about 1099-MISC reporting noncompliance by payers; (2) how IRS detects and pursues 1099-MISC reporting noncompliance by payers; (3) what impediments payers encounter in preparing and submitting accurate 1099-MISC forms and what options could help IRS address these impediments; and (4) what opportunities exist to enhance IRS’s use of 1099-MISC information to both detect payee noncompliance and promote voluntary compliance. For background about 1099-MISC reporting requirements, we reviewed laws and regulations as well as IRS guidance related to the Form 1099-MISC and also spoke with IRS officials. For background about the numbers and amounts of payments reported on the 1099-MISC, we obtained information reporting program data from Martinsburg Computing Center for tax year 2006. We also obtained data from IRS’s Payer Master File (PMF) on the aggregate numbers of payers submitting 1099-MISCs on paper and electronically for tax year 2006. We determined that these data were sufficiently reliable for our descriptive purposes. 
To determine what IRS knows about the extent of 1099-MISC payer noncompliance, we reviewed IRS documents, including plans for reducing the federal tax gap and for the National Research Program (NRP), as well as budget proposals to expand 1099-MISC reporting. Other reports we reviewed included past GAO and Treasury Inspector General for Tax Administration (TIGTA) reports on 1099-MISC reporting compliance by federal, state, and local government entities. We also interviewed NRP officials and staff about research on 1099-MISC reporting compliance. To obtain perspective on the potential magnitude of payer noncompliance, we compared the number of small businesses filing tax returns with the number of small businesses submitting 1099-MISCs for tax years 2002 to 2005. We used IRS’s definition of small businesses—business entities, including sole proprietorships, S-corporations, and partnerships, with assets under $10 million—under the supervision of IRS’s Small Business and Self-Employed (SB/SE) business operating division. We obtained total numbers of small business tax returns submitted in these four years from IRS’s Business Master File and the Individual Master File. For these IRS databases, we relied on the work we perform during our annual audits of IRS's financial statements. While our financial statement audits have identified some data reliability problems associated with coding some fields in IRS's tax records, we determined that the tax form count data were sufficiently reliable to address the report's objectives. We then compared the aggregate number of payers identified as small businesses from IRS’s PMF database to calculate the percentage of small business tax filers that submitted 1099-MISCs. The last complete year of payer type information available at the time of our analysis was 2005. 
While we could not isolate which businesses were required to submit a 1099-MISC but did not, we determined the data were sufficiently reliable to show how many small businesses submitted 1099-MISCs to IRS. We could not produce a comparable 1099-MISC reporting percentage for large corporations and partnerships under the supervision of IRS’s Large and Mid-Size Business (LMSB) business operating division. Large businesses may file a consolidated corporate income tax return under the parent company’s taxpayer identification number (TIN) for all its subsidiaries but submit 1099-MISCs under the individual subsidiaries’ TINs. To obtain additional perspective on potential 1099-MISC payer reporting noncompliance among small businesses, we examined the 1099-MISC submission rates for a sample of small business Schedule C filers. Contract labor payments to a non-incorporated payee totaling $600 or more are reportable on a 1099-MISC, so we used the contract labor line on Schedule C as a proxy for a possible 1099-MISC reporting requirement. We included all Schedule C filers that reported $600 or more in contract labor expenses from IRS’s Statistics of Income (SOI) for tax year 2006 (the last year available). We identified the Schedule C filers and provided their TINs to IRS. IRS provided data on whether these filers submitted a 1099-MISC as indicated on IRS’s Information Return Master File. This resulted in 44 percent of the SOI sample (unweighted) matching 1099-MISC forms. We then used these matches to produce generalizable estimates of Schedule C filers reporting $600 or more in contract labor expenses. Using SOI sampling weights, we provide the margin of error at the 95 percent confidence level for our SOI estimate. We determined the SOI results were reliable for estimating how many Schedule C filers who reported contract labor submitted a 1099-MISC. 
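The weighted estimation step described above can be sketched as follows. The records and weights here are hypothetical stand-ins for the SOI sample, and the margin of error uses a simple normal approximation rather than the design-based variance a real SOI analysis would use:

```python
import math

# Hypothetical sample records: (SOI sampling weight, submitted a 1099-MISC?)
sample = [(120.0, True), (120.0, False), (80.0, True), (80.0, False),
          (200.0, True), (200.0, False), (150.0, False), (150.0, False)]

# Weighted share of Schedule C filers (with $600 or more in contract labor
# expenses) that matched at least one 1099-MISC.
total_weight = sum(w for w, _ in sample)
matched_weight = sum(w for w, matched in sample if matched)
p_hat = matched_weight / total_weight

# Rough 95 percent margin of error (normal approximation on the unweighted n;
# a design-based estimate would handle the sampling weights properly).
n = len(sample)
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
print(f"weighted estimate: {p_hat:.1%} +/- {margin:.1%}")
```

With a real SOI sample of thousands of returns the margin of error would of course be far narrower than this toy example suggests.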
However, we could not discern whether those that did not submit 1099-MISCs had a filing requirement because of the exceptions for any payment under $600 and payments to corporations. To determine how IRS detects and pursues payer noncompliance with 1099-MISC reporting requirements, we reviewed the compliance-checking procedures used by IRS’s business operating divisions—Tax Exempt and Government Entities Division (TE/GE), LMSB, and SB/SE. We also interviewed IRS examination and compliance staff from each of these divisions. Within TE/GE, we spoke with IRS officials responsible for working with federal, state, and local government entities as well as Indian tribal governments. We interviewed IRS officials and staff about information returns processing and related penalties. Data on IRS’s small business examination coverage came from IRS’s publicly available Data Book. We believe the data were sufficiently reliable for the purposes of our review. To identify impediments that payers encounter with 1099-MISC reporting, and the options and challenges IRS confronts in addressing these concerns, we reviewed IRS and IRS advisory committee reports, previous GAO reports, and reports of the National Taxpayer Advocate. Further, we reviewed IRS’s 1099-MISC form, instructions, and related guidance, in addition to outreach material used to educate payers about 1099-MISC reporting requirements. We interviewed members of IRS advisory groups—the Electronic Tax Administration Advisory Committee (ETAAC), Information Reporting Program Advisory Committee (IRPAC), and Internal Revenue Service Advisory Council (IRSAC). To obtain perspectives from tax preparers and other professionals knowledgeable about 1099-MISC payers, we interviewed attendees at IRS’s fall 2007 National Public Liaison meeting, which included members of national stakeholder organizations, business and professional associations, tax professionals who prepare and submit forms to IRS, and tax software vendors. 
We also observed IRS’s November 2007 National Phone Forum on Form 1099-Information Reporting and reviewed the question-and-answer summary IRS provided to participants. We also reviewed the California Franchise Tax Board’s corporation and S corporation tax return forms and interviewed California officials about their experience with the check-the-box question on the California business returns. To determine IRS’s use of 1099-MISC information, we reviewed IRS guidance for 1099-MISC submission processing and the Automated Underreporter (AUR) program. We also interviewed officials and staff from IRS’s Martinsburg Enterprise Computing Center and its AUR and nonfiler programs. We obtained program data on the AUR inventory, case selection, and additional dollars assessed for tax year 2004, the last full year available in time for this report. The number of information returns submitted to IRS came from SOI’s Data Book. We determined that the data we used were sufficiently reliable for the purpose of our review. IRS nonfiler officials stated that they are not able to distinguish nonfiler income tax return cases that were identified through 1099-MISC information from those identified through IRS’s stop filer program, which identifies a gap in a taxpayer’s filing of tax returns. Consequently, we were unable to quantify the extent to which 1099-MISC information is used to detect payee nonfiling in the program. We conducted this performance audit from June 2007 through January 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
In addition to the contact named above, MaryLynn Sergent, Assistant Director; Jeff Arkin; Bertha Dong; Ellen Grady; Leon Green; Shirley Jones; Donna Miller; Karen O’Conor; Jessica Thomsen; Cheri Truett; James Ungvarsky; Shana Wallace; and John Zombro made key contributions to this report.

Tax Gap: Actions That Could Improve Rental Real Estate Reporting Compliance. GAO-08-956. Washington, D.C.: August 28, 2008.
Highlights of the Joint Forum on Tax Compliance: Options for Improvement and Their Budgetary Potential. GAO-08-703SP. Washington, D.C.: June 2008.
Tax Administration: Costs and Uses of Third Party Information Returns. GAO-08-266. Washington, D.C.: November 20, 2007.
Tax Compliance: Inflation Has Significantly Decreased the Real Value of Some Penalties. GAO-07-1062. Washington, D.C.: August 23, 2007.
Tax Gap: A Strategy for Reducing the Gap Should Include Options for Addressing Sole Proprietor Noncompliance. GAO-07-1014. Washington, D.C.: July 13, 2007.
Tax Compliance: Multiple Approaches Are Needed to Reduce the Tax Gap. GAO-07-488T. Washington, D.C.: February 16, 2007.
Opportunities for Congressional Oversight and Improved Use of Taxpayer Funds: Budgetary Implications of Selected GAO Work. GAO-04-649. Washington, D.C.: May 4, 2004.
Tax Administration: More Can Be Done to Ensure Federal Agencies File Accurate Information Returns. GAO-04-74. Washington, D.C.: December 5, 2003.
Tax Administration: Federal Agencies Should Report Service Payments to Corporations. GAO/GGD-92-130. Washington, D.C.: September 22, 1992.
Tax Administration: Benefits of a Corporate Document Matching Program Exceed the Costs. GAO/GGD-91-118. Washington, D.C.: September 27, 1991.

Third party payers, often businesses, reported $6 trillion in miscellaneous income payments to IRS in tax year 2006 on Form 1099-MISC information returns. Payees are to report this income on their tax returns. 
Even a small share of payers failing to submit 1099-MISCs could result in billions of dollars of unreported payments. IRS data suggest that payees are more likely to report income on their tax returns if IRS receives payers' information returns. GAO was asked to examine 1099-MISC reporting, including the extent to which payers fail to submit 1099-MISCs, impediments payers face in submitting 1099-MISCs, and whether IRS could better use the 1099-MISCs it currently receives. GAO reviewed IRS documents and compliance data and interviewed officials from IRS, its advisory groups, and others who advise 1099-MISC payers. The Internal Revenue Service (IRS) does not know to what extent payers fail to submit required 1099-MISCs, but various sources point to the possibility of a significant problem. For tax year 2005, 8 percent of the approximately 50 million small businesses with assets under $10 million submitted 1099-MISCs, but IRS does not know how many of the other 92 percent were required to report payments but did not. Many business payments, such as payments to corporations, are not subject to 1099-MISC reporting. If even a small share of the businesses that did not submit a 1099-MISC should have, millions of 1099-MISCs could be missing, along with significant amounts of unpaid taxes by payees. GAO's prior work in 2003 found significant 1099-MISC payer noncompliance by some federal agencies. IRS could mitigate costs for research on payer noncompliance by building on its existing research programs. Payers face a variety of impediments that may contribute to 1099-MISC noncompliance, including complex reporting requirements and an inconvenient submission process. For example, certain payments to unincorporated persons or businesses are subject to 1099-MISC reporting, but payments to corporations generally are not, requiring payers to determine the status of their payees. 
GAO in the past determined that the benefits in terms of increased tax revenue and improved taxpayer compliance justify eliminating this distinction. IRS agrees, and the Bush Administration's proposal to do so would have required legislative action. Other options to remind payers about their reporting obligations include adding a tax return checkbox asking if payers submitted required 1099-MISCs and adding a chart to help payers navigate the detailed instructions for the Form 1099-MISC. IRS matches what the payees report on their tax returns to what payers report on 1099-MISCs to detect payees underreporting income and taxes. But IRS does not pursue all mismatches its computers detect. If IRS were to increase payer compliance with 1099-MISC requirements, the number of mismatches would likely increase. However, IRS does not systematically collect information on the causes of mismatches or whether they could be prevented.
DOD stressed the importance of strategic human capital management in its 2006 QDR. For example, it noted the importance of involving senior leadership in this area and stated that DOD must (1) compete effectively with the civilian sector for highly qualified personnel, (2) possess an up-to-date human capital strategy, and (3) have the authorities to recruit, shape, and sustain the force it needs. Within the department, the Under Secretary of Defense for Personnel and Readiness, who serves as the Chief Human Capital Officer for DOD, has overall responsibility for the development of DOD’s civilian human capital strategic plan and competency-based workforce planning. The Deputy Under Secretary of Defense for Civilian Personnel Policy has the lead role in developing and overseeing the implementation of the civilian human capital strategic plan. In January 2006, section 1122 of the FY 2006 NDAA was enacted. It directed DOD to develop and submit to the Senate and House Armed Services Committees a strategic plan to shape and improve the DOD civilian employee workforce. The plan was to include eight requirements. 
These requirements included an assessment of
- the critical skills that will be needed in the future DOD civilian employee workforce to support national security requirements and effectively manage the department over the next decade;
- the critical competencies that will be needed in the future DOD civilian employee workforce to support national security requirements and effectively manage the department over the next decade;
- the skills of the existing DOD civilian employee workforce;
- the competencies of the existing DOD civilian employee workforce;
- the projected trends in that workforce based on expected losses due to retirement and other attrition; and
- gaps in the existing or projected DOD civilian employee workforce that should be addressed to ensure that the department has continued access to the critical skills and competencies needed to support national security requirements and effectively manage the department over the next decade.

Also, as part of its civilian human capital strategic plan, the act directed DOD to include a plan of action for developing and reshaping the DOD civilian employee workforce to address identified gaps in critical skills and competencies, including
- specific recruiting and retention goals, including the program objectives of the department to be achieved through such goals; and
- strategies for developing, training, deploying, compensating, and motivating the DOD civilian employee workforce and the program objectives to be achieved through such strategies.

In October 2006, the FY 2007 NDAA was enacted. Section 1102 of this act required DOD to include in its March 1, 2007, update a strategic plan to shape and improve its senior leader workforce. The plan was to include nine requirements. 
These nine requirements included an assessment of
- the needs of DOD for senior leaders in light of recent trends and projected changes in the mission and organization of the department and in light of staff support needed to accomplish that mission;
- the capability of the existing civilian employee workforce to meet requirements relating to the mission of the department; and
- gaps in the existing or projected civilian employee workforce of the department that should be addressed to ensure continued access to the senior leader workforce DOD needs.

Also, as part of its civilian human capital strategic plan, the act directed DOD to include a plan of action for developing and reshaping the senior leader workforce to ensure the department has continued access to the senior executives it needs. The plan of action is to include
- any legislative or administrative action that may be needed to adjust the requirements applicable to any category of civilian personnel identified or to establish a new category of senior management or technical personnel;
- any changes in the number of personnel authorized in any category of personnel identified that may be needed to address such gaps and effectively meet the needs of the department;
- any changes in the rates or methods of pay for any category of personnel identified that may be needed to address inequities and ensure that the department has full access to appropriately qualified personnel to address such gaps;
- specific recruiting and retention goals, including the program objectives of the department to be achieved through such goals;
- specific strategies for developing, training, deploying, compensating, motivating, and designing career paths and career opportunities for the senior leader workforce of the department, including the program objectives to be achieved through such strategies; and
- specific steps that the department has taken or plans to take to ensure that the senior leader workforce is managed in compliance with the requirements of section 129 of title 10, United States Code.

To conduct the assessments of end strength and projected trends in the civilian workforce based on expected losses due to retirement and other attrition, as required in the legislation, the department used OPM’s workforce forecasting software: the Workforce Analysis Support System (WASS) and the Civilian Forecasting System (CIVFORS). WASS is used to evaluate workforce trends and can perform simple to complex analyses, from counts and averages to trend analyses, using such characteristics as employee age, retirement plan participation, and historical retirement data. CIVFORS was adapted from an Army military forecasting model for civilian use in 1987 and uses data from DOD’s Defense Civilian Personnel Data System (DCPDS). CIVFORS is a life-cycle modeling and projection tool, which models most significant events, including personnel actions such as promotions, reassignments, and retirements. Officials can use a default projection model or create their own, which can be tailored to examine issues such as projected vacancies in hard-to-fill occupations or turnover in specific regions by occupation. The workforce forecasts are generated over a 7-year projection period, using the most recent 5 years of historical data. While CIVFORS is used at the DOD enterprisewide level, the department has not directed the components to use the system. As a result, components use various systems and approaches for their forecasts and trend analyses. For example, when we met with officials from the Defense Logistics Agency (DLA), they noted that while they had received training on WASS/CIVFORS, the agency was not currently using the program, though they were having discussions to determine whether they wanted to use the system in the future. Currently, DLA conducts workforce analysis by reviewing past information to determine future needs and uses a commercial off-the-shelf business software package to assist in the analysis. 
DOD has made progress in implementing the eight requirements in the FY 2006 NDAA as compared to its first plan; however, as seen in table 1, the 2008 update only partially addresses each of the eight requirements. For example, DOD—through the department’s functional and human resource leadership—identified 25 enterprisewide mission-critical occupations but did not provide an assessment that covered a 10-year period as required by the FY 2006 NDAA. Additionally, DOD provided projected trend data related, for example, to expected losses due to retirement on 11 of the 25 enterprisewide mission-critical occupations. Furthermore, DOD’s 2008 update only included gap analyses for about half of the 25 identified enterprisewide mission-critical occupations. DOD’s update also partially addresses the legislative requirements for a plan of action to develop and reshape the civilian employee workforce. More importantly, the recently established program management office does not have a performance plan to articulate how it will address the legislative requirements. DOD’s 2008 update partially addresses the legislative requirements of the FY 2006 NDAA to assess existing and future critical skills and competencies over the next decade and projected trends of the DOD civilian employee workforce. For example, DOD identified—through the department’s functional and human resource leadership—25 enterprisewide mission-critical occupations but did not provide an assessment of future enterprisewide mission-critical occupations that covered a 10-year period. As shown in table 2, DOD provided assessments of current and future enterprisewide mission-critical occupations and projected trend data in separate sections of its update. Specifically, the update had separate explanatory appendixes that addressed assessments for 12 of the 25 enterprisewide mission-critical occupations. These appendixes included assessment information compiled by OSD and the components. 
The update also had two separate appendixes with OSD-identified workforce assessments for the department and the components, along with projected trends data related to expected losses from retirements and attrition for 11 of the 25 DOD enterprisewide mission-critical occupations. These data were obtained from the WASS/CIVFORS analyses and are also presented in table 2. DOD’s update provided assessments for 12 of the 25 existing and future critical skills and competencies, which OSD refers to as enterprisewide mission-critical occupations. As shown in table 2, DOD included explanatory appendixes on some of the enterprisewide mission-critical occupations, including information technology management, computer science, and logistics management. Specifically, these appendixes discussed various issues, including end strength of the existing workforce for these 12 enterprisewide mission-critical occupations during fiscal year 2007. The update also contained two separate appendixes with OSD-identified workforce assessments for the department and the components, along with projected trends data related to expected losses from retirements and attrition for 11 of the 25 DOD enterprisewide mission-critical occupations. The update noted that the department had just begun its assessments of the enterprisewide mission-critical occupations. It further noted that, to establish a baseline for its civilian workforce, the department decided to hold future workforce levels for the majority of the enterprisewide mission-critical occupations at the then 2007 level of employment—“steady state”—through 2014. 
This “steady state” would be maintained by controlling gains like “new hires.” For example, in the appendix that contained the contracting enterprisewide mission-critical occupation, DOD noted that the fiscal year 2007 end strength level was 19,090 and that this steady state could be maintained through 2014 by controlling gains like new hires. However, DOD’s update does not include an assessment of its future enterprisewide mission-critical occupations that covers a 10-year period, as required by the FY 2006 NDAA. DOD officials told us that the modeling tool used to assess the workforce, WASS/CIVFORS, only generates forecasts for a 7-year period, in line with the department’s budget—the Future Years Defense Program. DOD officials have noted that it is difficult to conduct workforce planning out to 10 years, especially in light of factors that cannot be predicted, like the Global War on Terror and economic conditions. On the other hand, some factors that could affect human capital planning are known well in advance, such as eligibility for retirement and the development of weapons systems that could take more than 10 years. As seen in table 2, DOD’s update contained projected trend data on expected losses from retirement and other attrition for 11 of DOD’s 25 enterprisewide mission-critical occupations and thus partially addressed the legislative requirement. OSD again used OPM’s WASS/CIVFORS projection tool to fulfill the legislative requirement for DOD to assess the projected trends in the civilian workforce. WASS/CIVFORS was used to develop charts on workforce demographics for the 11 enterprisewide mission-critical occupations identified. 
For the medical occupations of physician, nurse, and pharmacist, as an example, the projected trends data show that the majority of the department’s projected losses in the medical community were due to transfers to other federal agencies, movement to the private sector, or internal transfers within DOD components. As seen in table 1, DOD has made progress in assessing the gaps in its civilian workforce since the publication of its 2007 civilian human capital strategic plan. Specifically, DOD’s 2008 update notes that its approach to gap analysis has been both centralized at the OSD level—with focus on the enterprisewide mission-critical occupations—and decentralized within the components. While not clearly identified as a gap assessment in the update, DOD provided data, at the OSD centralized level, from WASS/CIVFORS that showed end strength being maintained at the 2007 level—steady state—through 2014 for 10 of the previously mentioned 11 enterprisewide mission-critical occupations. As stated before, the update noted that this steady state would be maintained by controlling gains in its workforce. Specifically, of the 11 enterprisewide mission-critical occupations for which OSD projected trends, OSD forecasted that 10 could be maintained at a steady state to serve as a baseline, as previously noted. For the civil engineering enterprisewide mission-critical occupation, however, DOD did not project a steady state but rather identified a gap, stating that projected gains would not meet projected losses. With this steady state assumption, OSD assumes that its goals for projected total gains will be achieved. We note, however, that if these gains—that is, new hires or transfers from other government agencies—are not attained, then a potential gap exists. 
Furthermore, DOD officials told us these steady state projections do not incorporate changes in workforce requirements resulting from initiatives like the “Gansler report”—which identified a need for additional contracting officials. We were told that the department would incorporate such changes in future updates. These changes could affect the size of the workforce. At the time of our review, the department had asked the components and functional community managers to validate the projected trends, which was originally expected to be completed by July 1, 2008. However, DOD officials stated that the functional community managers had not yet validated OSD’s projections because they had to be trained on WASS/CIVFORS first. DOD officials said this training occurred in September 2008. At the time of our review, the revised completion date to validate the projected trends was January 2009. As previously stated, the update contained explanatory appendixes that specifically identified gap assessments for 6 enterprisewide mission-critical occupations: civil engineering, human resource management, information technology management, computer engineering, computer scientist, and logistics management. Various methods and tools were used for these assessments—from discussions about the gaps to use of tools other than WASS/CIVFORS. For example, OPM’s Federal Competency Assessment Tool for Human Resources was used to conduct a competency gap assessment for the human resource management enterprisewide mission-critical occupation. That assessment indicated that there were gaps in the employee relations and compensation competencies, among others. In contrast, in the information technology appendix—which covers the enterprisewide mission-critical occupations of information technology management, computer engineering, and computer scientist—a federal survey was administered to the information technology community. 
The assessment identified gaps in, among other areas, information systems security certification and network security. DOD’s update also included some competency gap analyses at the component level, in addition to the enterprisewide mission-critical occupation gap analyses. For example, DOD’s update noted that the Defense Information Systems Agency (DISA) conducts gap analyses by having its employees do self-assessments to determine their proficiency level in the skills needed for their competency or career field. The update noted that all DISA employees are required to complete the competency gap assessment process and have a completed individual development plan. The update also stated that the Defense Threat Reduction Agency has completed the first competency gap assessment for 250 of its research and development workforce personnel. DOD officials acknowledge that work on the gap analyses for the 25 enterprisewide mission-critical occupations is not complete and that efforts at the component level are ongoing. In addition, the update notes that the department has developed a proposed plan to identify and address future gaps. As mentioned previously, the functional community managers were tasked to validate and provide information for the projected trends; however, at the time of our review, some of the functional community manager positions had only recently been established, and OSD officials said that, as a result, the department published what it had. As we previously reported, the absence of fact-based gap analyses can undermine an agency’s efforts to identify and respond to current and emerging challenges. Without gap analyses for each of the areas DOD has identified as mission-critical, DOD and the components may not be able to design and fund the best strategies to fill their talent needs or to make the appropriate investments to develop and retain the best possible workforce. 
As seen in rows 7 and 8 of table 1, we found that DOD partially addressed the legislative requirements for a plan of action for developing and reshaping the civilian employee workforce to address the gaps in critical skills and competencies identified. DOD’s update contains recruiting and retention goals for 11 of the 25 enterprisewide mission-critical occupations, which were developed with the WASS/CIVFORS projection tool; however, as stated previously, these forecasts cover 7 years, not 10 years as required by the FY 2006 NDAA. At the time of this review, DOD was just starting the process for developing projected trends for its enterprisewide mission-critical occupations to determine, among other things, overall workforce needs and retention goals for 11 of its 25 enterprisewide mission-critical occupations over a 7-year period. DOD’s update states that the department is in the process of identifying 10-year recruiting and retention goals for all of the enterprisewide mission-critical occupations and expects to complete this effort by the end of calendar year 2008; however, DOD did not provide additional information on these assessments before completion of this review. Furthermore, DOD’s update does not link specific recruiting and retention goals to program objectives. Additionally, DOD’s update contains strategies for recruiting and retaining civilian employees in the appendixes that discuss the enterprisewide mission-critical occupations. For example, for the medical occupations, DOD formed the Tri-Service Medical Recruitment Workgroup in 2007 to, among other things, analyze current recruitment, hiring, and retention strategies for civilian health care positions. 
Some accomplishments of the workgroup include creation of a DOD medical recruitment sub-Web on the DOD Civilian Personnel Management Service Web site, guidance on the use of referral bonuses as a recruitment tool, and development of a handbook for recruiters and managers on compensation and hiring flexibilities. DOD’s update does contain some strategies for developing, training, deploying, compensating, and motivating the civilian workforce. Specifically, DOD’s update discusses strategies to address workforce requirements in the explanatory appendixes, which cover 12 of the 25 enterprisewide mission-critical occupations identified. For example, the FY 2008 NDAA granted DOD the authority to implement a modified version of the Information Technology Exchange Program, which would allow DOD civilians in the IT community to conduct job details in the private sector. However, because DOD has not completed its assessment of all 25 enterprisewide mission-critical occupations, any plan of action the department develops will not address gaps that have yet to be identified. While DOD’s update contains an extensive list of strategies, it does not address the requirements of the law—that the strategies be specifically related to gaps in the enterprisewide mission-critical occupations. Furthermore, DOD’s update does not link specific strategies for developing, training, deploying, compensating, and motivating the civilian workforce to program objectives. OSD officials stated that they are working to more fully address all of the legislative requirements, and in November 2008 the Office of the Under Secretary of Defense for Personnel and Readiness (OUSD (P&R)) officially established a program management office whose responsibility is to, among other things, monitor and review DOD’s enterprisewide mission-critical occupation assessments, workforce trends, and gap analyses. According to DOD, the budget for this office included salary and benefits for 20 people and training for human resource consultants on strategic human capital management. 
While it is notable that the office has been established, at the time of our review DOD officials stated that they did not have, and did not plan to have, a performance plan or “road map” to articulate how the department will fully address the requirements of the FY 2006 NDAA. Additionally, we note that, prior to the establishment of this new office, the Program Executive Office for Strategic Human Capital Planning, per DOD officials, had responsibility for developing DOD’s civilian human capital plan. It appears that DOD has never had a performance plan to help manage this area. Our prior work has shown that a sound management approach includes performance plans that establish implementation goals and time frames, measure performance, and align activities with resources. Without such a plan, DOD and its components may not be able to design and fund the best strategies to address the legislative requirements and meet their workforce needs. Of the nine requirements stipulated in the FY 2007 NDAA, DOD’s update and related documentation address four and partially address the remaining five. Table 3 summarizes the legislative requirements and identifies the extent to which the civilian human capital strategic plan update addresses the requirements. Although DOD recently established, in October 2008, an executive management office responsible for talent management, succession planning, and other issues, this office is operating without a performance plan that establishes implementation goals and time frames, measures performance, and aligns activities with resources. DOD’s 2008 update addresses four requirements of the FY 2007 NDAA—specifically, a plan of action that identifies (1) legislative or administrative actions needed, (2) changes in the number of personnel authorized, (3) changes in the rates or methods of pay, and (4) specific steps DOD has identified to ensure compliance with section 129 of title 10, United States Code. 
At the time of our report, DOD officials said that the department has not determined whether additional legislative actions are needed. DOD’s update, however, identifies the issuance of DOD Directive 1403.03, which established the policy for competency requirements and other requirements for the management of the career life cycle of senior executives. In addition, DOD Instruction 1400.25, issued in November 2008, established a competency-based approach to manage the life cycle of senior executive personnel from accession through separation. DOD’s update notes that the department has requested an increase in the number of Defense Intelligence Senior Executive Service personnel allowed under section 1606a of title 10. Specifically, the update noted that the department required an additional 100 allocations for Defense Intelligence Senior Executive Service personnel in the following agencies: Defense Intelligence Agency, National Geospatial-Intelligence Agency, Army, Air Force, Navy, Marine Corps, Defense Security Service, and Office of the Under Secretary of Defense for Intelligence. Although the update states that this change will allow critical mission requirements to be met, we did not conduct a review of the department’s analysis to determine its validity. DOD’s update describes the finalization of a common Senior Executive Service position tier structure, which creates a framework for determining comparability in the management and compensation of executive positions. The update also includes new sourcing methods to fill positions within each tier using component talent management processes, which includes identifying candidates across the department who, based on annual talent reviews, have been identified as ready for an enterprise senior executive position. 
DOD’s update describes a DOD Instruction under development that is intended to address how DOD manages and allocates resources based on mission requirements, workload, and performance objectives, as prescribed by 10 U.S.C. §129. Specifically, section 129 of title 10 states that the civilian personnel of DOD will be managed each fiscal year solely on the basis of and consistent with (1) the workload required to carry out the functions and activities of the department and (2) the funds made available to the department for such fiscal year. According to DOD’s update, the new instruction will explain how manpower and resources are allocated and managed to support the strategic objectives, daily operation, and effective and economical administration of the department. Further, where possible, measures of performance will be established as indicators of mission accomplishment and will be regularly monitored by management officials to ensure that budgeted manpower reflects the minimum necessary to achieve program objectives consistent with defense priorities. In addition, the instruction will cover the flexibilities to manage to a requirement and the budget. Neither the update nor DOD officials, however, gave any indication of when this instruction will be completed. DOD’s 2008 update partially addresses the remaining five requirements of the FY 2007 NDAA. As seen in table 3, these include an assessment of the needs for, the capabilities of, and the gaps in the existing senior leader workforce; and a plan of action that includes specific recruiting and retention goals, along with specific strategies for developing, training, deploying, compensating, motivating, and designing career paths and career opportunities for the senior leader workforce. DOD’s 2008 update notes that the department has not completely addressed this requirement and has ongoing work to do so through a baseline review of senior leadership positions. 
DOD officials said that the latter will include an assessment of the needs for senior leaders. DOD’s update does, however, identify leadership capabilities needed as part of an overall assessment of the senior leader workforce and includes some competencies developed to address the changing environment in which DOD operates. Specifically, the update identifies the need for senior leaders to assimilate quickly, possess language skills and cultural awareness, understand interagency roles and responsibilities, and have an enterprise-spanning perspective, including knowledge of joint matters and network-centric concepts as new leadership capabilities. The update acknowledges that work on this requirement is ongoing, and a DOD Instruction has been drafted that will clarify, when completed, DOD’s official policy on the development and sustainment of its senior leader workforce. In addition, DOD officials have told us that the department began conducting a baseline review of its senior leader workforce in April 2008, and this review is expected to provide an assessment of the capability of existing executive talent. While work is ongoing, however, DOD’s update provides projected trend data for the senior leader workforce, including retirements and other attrition, projecting that approximately 60 percent of DOD’s senior leader workforce will be eligible to retire within the next 3 years. DOD’s update partially addresses this requirement, and the update acknowledges that work on this requirement will be ongoing until the summer of 2009. Specifically, the update states that DOD conducted initial leadership competency assessments at the Senior Executive Service, manager, and supervisor levels, in 2007, using OPM’s Web-based Federal Competency Assessment Tool for Managers. 
The update noted that the department identified competency gaps against DOD’s Executive Core Qualifications in areas including: creativity, flexibility, strategic thinking, vision, conflict management, and oral and written communications. In addition, DOD’s subject-matter experts and senior leaders, through qualitative assessments, identified the following gaps: (1) lack of critical transformational leadership skills, (2) lack of enterprisewide approach to managing the talent pipeline for DOD leaders, and (3) the shortfall of excepted-service senior intelligence executives. DOD’s update partially addresses this requirement. For example, the update identifies a 5-year goal for the number of employees in leadership positions and contains projected trends in senior leader workforce gains, accessions, total losses, and retirement over a 7-year period. In addition, DOD is reviewing federal travel entitlements, benefits, and allowances to promulgate policies that attract and retain senior leaders. For example, DOD is reviewing Overseas Benefits Allowances to ensure the allowances are attractive incentives for senior leaders. The update, however, does not identify specific program goals to be achieved through such efforts. In addition, although the update suggests that DOD has tracking measures that relate to recruitment and retention, it does not link specific recruiting and retention goals to program objectives. DOD’s update and the implementation of DOD Directive 1403.03 partially address specific strategies for developing, training, deploying, compensating, motivating, and designing career paths and opportunities, which DOD officials stated are components of talent management and succession planning. The update does not, however, address specific strategies for deploying senior leaders. Developing and Training. 
DOD has developed the Defense Senior Leader Development Program (DSLDP), which is intended to provide senior leaders with, among other things, targeted individual development, professional military education, and defense-focused leadership seminars. This program is available to a small percentage of the DOD workforce and will replace the Defense Leadership and Management Program (DLAMP). According to DOD officials, DLAMP faced a number of problems, such as lack of involvement by senior leadership in the career paths or progression of potential Senior Executive Service candidates, lack of interaction and camaraderie among participants, and no plan for the placement or progression of employees after graduation. These shortcomings were identified through participant feedback and studies conducted by DOD. Although DSLDP seeks to address some of the challenges that faced DLAMP, some components and defense agencies have indicated they will not use DSLDP because they prefer their own component or agency programs, which they said are more focused on the unique needs of their specific senior leaders. In addition to DSLDP and other OSD-level leadership programs, the components and defense agencies have other leadership development programs, including but not limited to the Air Force’s Civilian Strategic Leader Program, the Defense Information Systems Agency’s Enterprise Leadership Development Program, and the Army Civilian University. For more information on selected leadership development programs throughout DOD, see appendix II. Designing Career Path and Career Opportunities (Talent Management and Succession Planning). With regard to talent management and succession planning efforts, DOD officials stated that OSD does not conduct departmentwide succession planning for DOD’s senior leader workforce. 
Nevertheless, DOD issued Directive 1403.03 in October 2007 establishing DOD policy to more effectively manage the career life cycle of DOD’s Senior Executive Service leaders, which specifically covers succession planning for senior executives in the service components and defense agencies. DOD’s 2008 update to its civilian human capital strategic plan states that succession planning efforts are currently being developed. Additionally, according to DOD officials, the executive management office for talent management and succession planning, which was not established until October 2008, will address these issues. This office will provide guidance and tools for the departmentwide talent management programs. For example, OSD is exploring the use of a talent management system that will allow OSD and the components to centralize their talent management efforts in accordance with DOD guidance. Specifically, this guidance requires DOD and the components to coordinate such efforts for the Senior Executive Service workforce. DOD officials noted that until this office is fully operational they will be unable to completely address the legislative requirements and this guidance. Compensating and Motivating Senior Executives. In an effort to address compensation for and motivation of its Senior Executive Service workforce, DOD’s update notes that the department issued a directive-type memorandum on April 28, 2008, establishing a common Tier Policy to help ensure transparency and comparability in the management and compensation of executive positions. Specifically, the DOD tier structure is built upon the principle that DOD senior executive positions vary in terms of effect on mission, level of complexity, span of control, inherent authority, scope and breadth of responsibility, and influence in joint, national security matters. 
Under the three-tier structure, DOD senior executive positions will be sorted based upon position characteristics, with Tier 1 positions generally having less complexity and effect on mission outcomes and Tier 3 positions having significant complexity, effect on mission outcomes, or influence on joint, national security matters. Responsible DOD officials told us that compensation levels within the tier system are common throughout the department. According to DOD officials, however, while the common Tier Policy addresses compensation, the pay overlap between some General Schedule (GS) 15 employees and Senior Executive Service personnel could pose a challenge to recruiting for the Senior Executive Service workforce. Figure 1 shows this overlap among DOD Senior Executive Service pay, base GS-15 federal compensation, and GS-15 compensation with Washington, D.C., locality pay. Additionally, we have previously reported on other challenges related to Senior Executive Service compensation, such as pay compression, which occurs when executives’ pay reaches the statutory cap. OSD officials acknowledged that, while some work on the legislative requirements and succession planning for its senior management workforce had started, it was not complete, primarily because the newly formed executive management office responsible for talent management, succession planning, and other issues at the OSD level was not established until October 2008. At the time of our review, these officials stated that this new office, like the program management office, did not have and did not plan to have a performance plan that included implementation goals and time frames, performance measures, and activities aligned with resources. As noted before, without such a plan, DOD and its components may not be able to design and fund the best strategies to address the legislative requirements and meet their workforce needs. 
While DOD’s update identified a number of factors that could affect civilian workforce plans, such as the effect of decisions made during BRAC and the conversion of military positions to civilian positions, it did not specifically incorporate strategies to address these factors. Importantly, the department did not consider a factor we previously identified—specifically, the department’s reliance on contractors and the human capital challenges associated with this reliance. For example, we previously identified the need to develop a civilian workforce strategy that addresses the extent of contractor use and the appropriate mix of contractor and federal civilian personnel. The greater reliance on contractors requires a critical mass of civilian personnel with the expertise necessary to protect the government’s interest and ensure effective oversight of contractor work. Without considering contractors as a factor in strategic human capital planning, DOD may not have the right number and appropriate mix of federal civilian employees and contractors it needs to accomplish its mission. The update identified a number of factors that could affect the department’s civilian human capital strategic plan. These included the execution of 2005 BRAC-round activities; military-to-civilian position conversions; and the “in-sourcing” requirement in the FY 2008 NDAA—that is, the requirement that certain positions be filled by federal civilian employees rather than contractors. The update, however, did not provide strategies for addressing these factors but stated that strategies were being developed. Specifically regarding BRAC, the update noted that the process has the potential to affect how, when, and where positions are ultimately realigned relative to their original location. It further noted that if employees do not move, open positions and attrition could result, thus increasing the recruiting needs of the department. 
We have previously reported that implementing hundreds of BRAC actions by the statutory deadline of September 15, 2011, will present a challenge for DOD to realign about 123,000 military and civilian personnel to various installations across the country. The update further noted that conversions of military positions to civilian positions could affect the department’s workforce projections. For example, it stated that a key aspect of maintaining the nation’s “All-Volunteer” force was the use of DOD’s military members in only those positions that are military-essential. It stated that since 2004, more than 55,000 military positions have been selected for conversion to civilian status in areas such as healthcare administration. It further noted that the department needs to consider this potential increase in civilians as it plans and implements its human capital strategies for future years. Additionally, the update noted that the management of the civilian workforce would also be affected by a new in-sourcing law—section 324 of the FY 2008 NDAA. The update mentioned that the department may consider using government employees to perform, among other things, a new mission requirement or an activity performed by a contractor when an economic analysis shows DOD civilian employees are the low-cost providers. The department further noted that the increased use of civilians to accomplish such critical work will put greater demands on its civilian human resources, policies, and practices. We have previously reported on a similar factor that has been a primary challenge for DOD—the department’s increasing reliance on contractors. GAO’s body of work has shown that DOD faces long-standing challenges with increased reliance on contractors to perform core missions. These challenges are accentuated in operations such as those in Iraq, where DOD has lacked adequate numbers of personnel to provide oversight and management of contractors. 
Key to meeting this primary challenge is developing workforce strategies that consider the extent to which contractors should be used and the appropriate mix of contractor and federal personnel. In 2003, we recommended that DOD develop a human capital strategic plan that considers contractor roles and the mix of federal civilian and contractor employees. DOD did not concur with this recommendation at the time, noting that the use of contractors was just another tool to accomplish the department’s mission and was not a separate workforce with separate needs to manage. However, we noted that strategic planning for the civilian workforce should be undertaken in the context of the “total force,” including contractors. The 2006 QDR and DOD’s 2008 update recognize contractors as part of DOD’s total workforce. We continue to believe that, without strategies that address DOD’s reliance on contractors—a key part of DOD’s workforce—the department may not have the right people, in the right place, at the right time, and at a reasonable cost to achieve its mission in the future. According to DOD’s projections, the department could be faced, within the next few years, with replacing over 300,000 civilian employees. With the change in administration, these civilians take on particular importance because of the institutional knowledge they retain as military personnel rotate and political appointees change. It is therefore imperative that DOD strategically manage this workforce to ensure resources are used effectively. While DOD has made good progress in developing its civilian human capital strategic plan, the recent update remains incomplete. For example, the update does not assess gaps in all of the enterprisewide mission-critical occupations identified by DOD. Also not included are strategies for addressing factors such as BRAC and DOD’s reliance on contractors. 
DOD’s human capital strategic plan may not be as useful as it could be to ensure that DOD has the right number of people with the right skills to accomplish the department’s mission. DOD is moving forward in making operational the two management offices it established in the fall of 2008—one to shape and monitor DOD’s updated plans and the other to address, among other things, talent management and succession planning for the senior leader workforce. However, this progress comes without performance plans to help guide and gauge how the department is achieving its objective, which we have previously reported is a key element of a sound management approach. To continue the progress DOD has made with its human capital strategic planning efforts, we recommend that the Secretary of Defense direct the Office of the Under Secretary of Defense for Personnel and Readiness to take the following three actions: task the newly established program management office, which is responsible for addressing the requirements of the FY 2006 NDAA, to develop a performance plan that includes establishing implementation goals and time frames, measuring performance, and aligning activities with resources; task the newly established executive management office, which is responsible for addressing the requirements of the FY 2007 NDAA, to develop a performance plan that includes establishing implementation goals and time frames, measuring performance, and aligning activities with resources; and incorporate, in future updates to its strategic human capital plan, strategies for addressing factors that could significantly affect DOD’s civilian workforce plans—including contractor roles and the effect contractors have on requirements for DOD’s civilian workforce. In commenting on a draft of our report, the Acting Under Secretary of Defense for Personnel and Readiness partially concurred with our three recommendations. DOD’s comments are reprinted in appendix III. 
DOD also provided technical comments on our draft report, which we incorporated into the report as appropriate. In written comments, DOD stated that the department had made great progress in implementing its strategic human capital plan—from its institutionalization in the department’s philosophy to the actual conduct of workforce forecasting and competency assessments. The comments further stated that the department was disappointed that the complexity of its undertaking and accomplishments were not fully acknowledged in our report and trusted that this could be corrected in the final report. Our review was structured to assess the extent to which DOD’s update addressed the FY 2006 and FY 2007 NDAA requirements and key factors that may affect civilian workforce planning. We note that our report acknowledges that DOD has made progress in addressing the FY 2006 NDAA requirements when compared with its first strategic human capital plan. Specifically, our report shows that the initial plan did not meet most of the statutory requirements, while the update partially addressed each requirement. We also noted some of DOD’s accomplishments—including issuance of a DOD Instruction on strategic human capital management and training of component representatives on the OPM forecasting tool. However, our report also identified those areas where DOD has further work to do to enhance its civilian human capital strategic plan. DOD partially concurred with our recommendation that the newly established program management office, which is responsible for addressing the requirements of the FY 2006 NDAA, develop a performance plan that includes establishing implementation goals and time frames, measuring performance, and aligning activities with resources. The department noted that our report said DOD does not have and does not plan to have a performance plan or road map for its newly formed civilian workforce readiness program office and that this statement was not correct. 
It further noted that, at the time of our review, the newly formed program office was only a couple of months old and that the Under Secretary of Defense for Personnel and Readiness and the Deputy Under Secretary of Defense for Civilian Personnel Policy had required the new office to develop both a performance plan and a road map—and that these efforts were in progress. We disagree. When asked about this plan, DOD officials did not provide any specific documentation from OUSD(P&R) or Civilian Personnel Policy requiring the new office to develop such plans. In fact, we were told that the department did not have a performance plan and that the Civilian Personnel Policy office, which had responsibility for the new program management office, normally does not produce such documents. We were further told that, essentially, any overall plan for the new office was scattered through several documents—including position descriptions, budget requests, and briefings to senior leadership. DOD also stated that, at the time of our review, the civilian readiness office had been established only a couple of months earlier and its staffing was ongoing. We note, however, that another office, per DOD officials, had been addressing the FY 2006 NDAA requirements and that DOD did not provide us with a performance plan for that office either. After reviewing DOD’s comments, we asked for additional documentation to support its statement that the Under Secretary of Defense for Personnel and Readiness and the Deputy Under Secretary of Defense for Civilian Personnel Policy had required the development of a performance plan. DOD officials told us that while they have drafted a performance plan, they were unable to provide a copy because it is currently under review. 
In light of these circumstances, we believe it is imperative that DOD have a performance plan that provides additional guidance and measures to assess the extent to which the program management office is addressing the requirements of the FY 2006 NDAA. DOD also partially concurred with the recommendation to task the newly established executive management office, which is responsible for addressing the requirements of the FY 2007 NDAA, to develop a performance plan that includes establishing implementation goals and time frames, measuring performance, and aligning activities with resources. The department stated that, at the time of our review, the executive management office was only a couple of months old. It further noted that OUSD(P&R) and the Deputy Under Secretary of Defense (Civilian Personnel Policy) had required the new office to develop both a performance plan (which measures performance and aligns activities to resources) and a road map (with implementation goals and time frames). The department also noted that development of these documents was in progress. Again, DOD officials did not mention or provide GAO with any specific documentation from OUSD(P&R) or Civilian Personnel Policy requiring the new office to develop such plans. These actions, if performed, appear consistent with the intent of our recommendation to develop a performance plan that provides additional guidance and measures to assess the extent to which the executive management office is addressing the requirements of the FY 2007 NDAA. DOD partially concurred with our recommendation to incorporate, in future updates to its strategic human capital plan, strategies for addressing factors that could significantly affect DOD’s civilian workforce plans—including contractor roles and responsibilities and the effect the use of contractors has on requirements for DOD’s civilian workforce. 
The department stated that it has strategies in place to address recruitment and retention needs arising from factors affecting the DOD workforce, and that it will more closely align these strategies to the causal factors so the linkage is clearly evident. We believe these actions, once implemented, may meet the intent of our recommendations. We are sending copies of this report to the appropriate congressional committees and the Secretary of Defense. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. To determine the extent to which the Department of Defense’s (DOD) 2008 update to its civilian human capital strategic plan addresses the statutory requirements established in section 1122 of the National Defense Authorization Act for Fiscal Year 2006 (FY 2006 NDAA) and section 1102 of the National Defense Authorization Act for Fiscal Year 2007 (FY 2007 NDAA), we obtained and reviewed DOD’s May 2008 update. This document was approximately 400 pages and was titled “Implementation Report for the DOD Civilian Human Capital Strategic Plan.” We developed a checklist based on the FY 2006 NDAA and FY 2007 NDAA legislative requirements, which enabled us to compare the requirements to DOD’s updated plan. Two analysts independently assessed the DOD update using the checklist and assigned a rating to each of the elements from one of three potential ratings: “addresses,” “partially addresses,” or “does not address.” According to our methodology, a rating of “addresses” was assigned if all elements of a legislative requirement were cited, even if specificity and details could be improved upon. 
Within our designation of “partially addresses,” there was wide variation between an assessment or plan of action that includes most of the elements of a legislative requirement and one that includes few of them. A rating of “does not address” was assigned when elements of a characteristic were not explicitly cited or discussed or any implicit references were either too vague or too general to be useful. The two analysts’ independent assessments agreed in the majority of cases. When the analysts gave different initial ratings, they met to discuss and resolve the differences in their checklist analyses, and a senior analyst validated the results. On the basis of those discussions, a consolidated final checklist was developed for both NDAAs. We did not assess the reliability of the data in DOD’s workforce assessments and gap analyses; however, we have previously reported information on the workforce forecasting system used by DOD. In addition, we interviewed officials at the Office of Personnel Management (OPM) to obtain updated information on the workforce forecasting systems DOD used to assess its civilian workforce and ascertained that the data were sufficiently reliable for the purposes of our review. We also interviewed officials in DOD offices for Civilian Personnel Policy (CPP), the Civilian Personnel Management Service, the Army, the Air Force, the Navy, the Defense Information Systems Agency, and the Defense Logistics Agency about the update and ongoing human capital efforts within DOD. We also discussed DOD’s ongoing efforts to establish a program management office that is responsible for, among other things, monitoring and reviewing overall civilian workforce trends, competency assessments, and gap analyses. 
Additionally, we talked with officials responsible for standing up the separate talent management offices within the components and defense agencies. These offices will coordinate talent management efforts with the Office of the Secretary of Defense. To determine DOD’s succession planning efforts for its Senior Executive Service workforce, we analyzed applicable documents related to DOD’s current efforts, along with our prior work on DOD’s human capital planning efforts for senior executives. We also interviewed officials in DOD’s offices for CPP, the individual services, and the components about these matters. Among other things, we discussed the department’s efforts to establish an executive management office for talent management and succession planning at the OSD level. Finally, we identified and reviewed factors that may affect DOD’s civilian workforce planning such as those that DOD identified in its update. We also analyzed prior GAO reports examining other human capital challenges within DOD related to the department’s reliance on contractors and discussed these matters with DOD and service officials. We conducted this performance audit from February 2008 to February 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In 1997, in response to recommendations from the Commission on Roles and Missions of the Armed Forces, the Department of Defense (DOD) created its Defense Leadership and Management Program (DLAMP). This is a program aimed at preparing civilian employees for key leadership positions throughout the department. 
DOD refers to DLAMP as a systematic program of “joint” civilian leader training, education, and development that provides the framework for developing civilians with a DOD-wide capability, substantive knowledge of the national security mission, and strong leadership and management skills. Between 1997 and 2006, 1,894 participants were admitted to DLAMP; of these, 1,132 completed senior-level professional military education, 480 graduated, 470 remained in the program, and 187 were selected for Senior Executive Service positions. Based on feedback from the components and program participants, DOD made modifications to the program and decided to end DLAMP in its current form in 2010. These modifications resulted in DLAMP being transitioned into a new program, the Defense Senior Leader Development Program (DSLDP). The modified approach in DSLDP focuses on developing senior civilian leaders to excel in the 21st century’s joint, interagency, and multinational environment. DSLDP supports the governmentwide effort to foster interagency cooperation and information sharing by providing opportunities to understand and experience, first-hand, the issues and challenges facing leaders across DOD and the broader national security arena. Table 4 shows the differences between DOD’s DLAMP and DSLDP programs. For example, DLAMP was a self-paced program, while DSLDP uses a cohort-based approach. In addition to DLAMP, DOD has revamped its Executive Leadership Development Program (ELDP), another program at the Office of the Secretary of Defense level designed to develop a pipeline of high-potential future leaders. The ELDP is a 10-month program for GS-12- to GS-14-equivalent civilian personnel. According to DOD, ELDP provides participants with exposure to the roles and missions of the entire department and fosters a greater understanding of today’s warfighter. 
Some of the individual components and fourth estate agencies have their own senior leader development programs, which are comparable to DLAMP and DSLDP. Below is a sample of the additional programs available to DOD civilians: The Air Force’s Civilian Strategic Leader Program (CSLP) is designed to provide senior civilian leaders the career management and development necessary to put them on par with similar general officers in the Air Force. The intent of the CSLP process is to build a corps of civilian personnel within the Air Force who have the potential to progress into the Senior Executive Service. The Defense Information Systems Agency’s (DISA) Enterprise Leadership Development Program provides leadership development and training to its senior executives. The program focuses on its GS-13 to GS-15 civilian employees with leadership potential. DISA also has a program called the Emerging Leaders Program, which focuses on GS-9 through GS-12 civilian employees. The Army Civilian University has been established to oversee and fully integrate an enterprise approach to education for civilians in support of the U.S. Army Training and Doctrine Command (TRADOC). The university uses an integrated and TRADOC-complementary curriculum with a more standardized, competency-based approach to civilian education, training, and leader development initiatives. The Army has also established the Army Senior Fellows Program to build a bench of future Army senior executives who are innovative, adaptive, interchangeable civilian leaders. This program is designed to (1) identify high-potential GS-14 and GS-15 employees through an Army Secretariat Board selection and (2) provide the employees with executive experience assignments and educational opportunities. The Defense Logistics Agency launched a new Enterprise Leader Development Program in fiscal year 2007 for supervisors and managers who hold critical leadership positions. 
The objective of this program is to increase participants’ proficiency in six critical leadership competencies: integrity/honesty, leading people, external awareness, strategic thinking, executive-level communication, and human capital management. The following are GAO’s comments on specific sections in the Department of Defense’s (DOD) letter sent on January 23, 2009. The specific sections are entitled “The following additional comments are provided regarding the GAO report” and “DEPARTMENT OF DEFENSE GENERAL COMMENTS.” 1. DOD states that contractor issues were not a part of our interviews and fact finding for this review. However, as identified in the DOD notification letter, other challenges and emerging issues facing DOD’s Senior Executive Service (SES) workforce in the human capital area were a key question included in this review. Our approach included reviewing our prior work, analyzing DOD’s update, interviewing OSD and component officials about these issues, and discussing our potential findings with them. These officials noted that contractor reliance was a major challenge for the department, and we noted that the DOD 2008 update did not mention this as one of the challenges and, thus, did not provide a strategy. As noted in our report, we assessed the extent to which the update addressed key factors like the reliance on contractors. 2. DOD states that our report mentions the need for increased contract oversight and that this issue, again, was not part of our interviews or fact-finding. See comment 1. 3. DOD stated that our review was bifurcated between an independent GAO review of the defense acquisition workforce section and a review of the remaining part of the update to DOD’s plan. This is not correct. This review focused on how DOD’s 2008 update submitted to Congress addressed the Fiscal Year (FY) 2006 and 2007 National Defense Authorization Act (NDAA) requirements and key factors that could affect civilian human capital planning. 
According to section 851 of the FY 2008 NDAA, DOD was required to include a section on the defense acquisition workforce planning efforts in its 2008 update, but it did not. This review focused on DOD’s 2008 update. A separate GAO review is looking at, among other things, the defense acquisition workforce requirements in the FY 2008 NDAA. 4. DOD stated that we did not address the department’s efforts to institutionalize strategic human capital management planning. We disagree. As we state in the report, the objectives for our review were to assess the extent to which DOD’s update addressed the FY 2006 and FY 2007 NDAA requirements and key factors that may affect civilian workforce planning and our report was, therefore, structured accordingly. Our report did, however, note some of DOD’s efforts— including issuance of a DOD Instruction on Strategic Human Capital Management and training for component representatives on the Office of Personnel Management’s forecasting tool. 5. DOD’s comments provided a list of institutionalized efforts that included issuance of a DOD Strategic Human Capital Management instruction. This information is referenced in our report. 6. DOD’s comments provided a list of institutionalized efforts that mentioned training of component representatives on the Office of Personnel Management forecasting tool. This information is referenced in our report. 7. DOD’s comments provided a list of institutionalized efforts that mentioned formulation, submission, and authorization of a budget for the strategic human capital management program office. We added some of this information to our report. 8. DOD’s comments noted that our report said that more than 50 percent of the DOD civilian workforce is eligible to retire in the next few years and noted that this statement was correct but misleading because the figure included optional and early retirement. We have revised our report accordingly. 9. 
DOD’s comments said that our assertion that the department’s forecasting was for a 7-year period and not a 10-year period is correct; however, the department believes that a 7-year forecast is valid and should be acceptable because it mirrors DOD’s budget planning cycle. We provided DOD’s perspective in our report but note that the FY 2006 NDAA requires a 10-year forecast. 10. The department noted that our report said DOD does not have and does not plan to have a performance plan or road map for its newly formed civilian workforce readiness program office and that this statement was not correct. It further noted that, at the time of our review, the newly formed program office was only a couple of months old, its staffing was still in progress, and there would definitely be both a performance plan and a road map for the office. The department stated that these were just not complete at the time of the GAO engagement. We disagree. To the contrary, DOD did not provide us with any specific documentation that a performance plan was in progress during our review. In fact, we were told that the department did not have a performance plan and that the Civilian Personnel Policy office, which has responsibility for the new program management office, normally does not produce such documents. We were further told that, essentially, any overall plan for the new office was scattered through several documents—including position descriptions, budget requests, and briefings to senior leadership. 11. DOD stated that our report discussed DOD’s mix of contractors and civilians and that this was not discussed during our interviews. We disagree. See comment 1. 12. DOD’s comments state that our report discussed the department’s reliance on contractors and that this was not raised in discussions with DOD. We disagree. See comment 1. 13. DOD’s comments assert that our report states that the department did forecasting only for 11 mission-critical occupations and that this was not correct. 
We have revised the report accordingly. 14. DOD states that our report indicates that the department did a gap analysis for about half of its 25 enterprisewide mission-critical occupations, but it was not clear to what gap analyses this was referring. It further noted that, if the report was referring to a competency gap assessment, it was misleading, and noted that the update had discussed competency assessments on pages 2-15 through 2-25 of its update. We note that our report states that 11 gap assessments were done with the forecasting tool, 10 of which DOD identified as “steady state” and one of which showed an actual gap. We further noted that the update also discussed other gap assessments for six enterprisewide mission-critical occupations. We note that only one of these—the computer science mission-critical occupation—was not previously identified as one of the 11 mission-critical occupations with gap assessments. We also clarified that competency gap assessments were done at the component levels and provided examples in the body of our report. 15. DOD states that our report indicates DOD’s update partially addressed a plan of action to develop and reshape the civilian workforce and notes that, while its recruitment, retention, and development activities did not focus solely on its mission-critical occupations, the strategies are widespread and cover most of the department’s occupations. We note, however, that the law required the plan to address identified gaps in its “critical” skills and competencies, or what DOD has identified as enterprisewide mission-critical occupations. 16. While DOD acknowledged that our report correctly stated that its update contained appendixes on 12 mission-critical occupations, it believed that we did not reflect the totality of the department’s efforts because the acquisition workforce section was not considered in the GAO assessment. 
As stated in comment 3, information from the section on defense acquisition workforce planning was not included in our report because it was not completed during the course of our review. 17. DOD’s comments stated that the acquisition community had conducted a human capital analysis and undertaken initiatives to strengthen this workforce and that these should be included in the GAO report. We disagree. See comment 3. 18. DOD stated that chapter 3 of its update addressed recruitment and development strategies to meet DOD civilian workforce needs—noting that strategies were key to ensuring the successful conversion of military positions to civilian positions and readying a supply of candidates to meet in-sourcing requirements. Accordingly, the department noted that it believed it indeed had strategies in place to address emergent recruitment needs. We note, however, that the introduction and executive summary of the update noted several factors we discussed as challenges and stated that strategies, at the time of our review, were being developed—and, as stated previously, contractor reliance was not identified as a challenge in the update. In addition to the contact named above, Marion Gatling, Assistant Director; Andrew Curry; Michael Hanson; Mae Jones; Amber Lopez; Lonnie McAllister; Brian Pegram; Charlie Perdue; Terry Richardson; and Nicole Volchko made major contributions to this report. Human Capital: Diversity in the Federal SES and Processes for Selecting New Executives. GAO-09-110. Washington, D.C.: November 26, 2008. Ensuring a Continuing Focus on Implementing Effective Human Capital Strategies. GAO-09-234CG. Washington, D.C.: November 21, 2008. Results-Oriented Management: Opportunities Exist for Refining the Oversight and Implementation of the Senior Executive Performance-Based Pay System. GAO-09-82. Washington, D.C.: November 21, 2008. Department of Homeland Security: A Strategic Approach Is Needed to Better Ensure the Acquisition Workforce Can Meet Mission Needs. 
GAO-09-30. Washington, D.C.: November 19, 2008. Human Capital: DOD Needs to Improve Implementation of and Address Employee Concerns about Its National Security Personnel System. GAO-08-773. Washington, D.C.: September 10, 2008. Human Capital: Selected Agencies Have Implemented Key Features of Their Senior Executive Performance-Based Pay Systems, but Refinements Are Needed. GAO-08-1019T. Washington, D.C.: July 22, 2008. Centers for Disease Control and Prevention: Human Capital Planning Has Improved, but Strategic View of Contractor Workforce Is Needed. GAO-08-582. Washington, D.C.: May 28, 2008. Human Capital: Workforce Diversity Governmentwide and at the Department of Homeland Security. GAO-08-815T. Washington, D.C.: May 21, 2008. Human Capital: Transforming Federal Recruiting and Hiring Efforts. GAO-08-762T. Washington, D.C.: May 8, 2008. Human Capital: Corps of Engineers Needs to Update Its Workforce Planning Process to More Effectively Address Its Current and Future Workforce Needs. GAO-08-596. Washington, D.C.: May 7, 2008. Older Workers: Federal Agencies Face Challenges, but Have Opportunities to Hire and Retain Experienced Employees. GAO-08-630T. Washington, D.C.: April 30, 2008. Human Capital: Diversity in the Federal SES and Senior Levels of the U.S. Postal Service and Processes for Selecting New Executives. GAO-08-609T. Washington, D.C.: April 3, 2008. Defense Contracting: Army Case Study Delineates Concerns with Use of Contractors as Contract Specialists. GAO-08-360. Washington, D.C.: March 26, 2008. The Department of Defense’s Civilian Human Capital Strategic Plan Does Not Meet Most Statutory Requirements. GAO-08-439R. Washington, D.C.: February 6, 2008. Defense Acquisitions: DOD’s Increased Reliance on Service Contractors Exacerbates Long-standing Challenges. GAO-08-621T. Washington, D.C.: January 23, 2008. Federal Acquisition: Oversight Plan Needed to Help Implement Acquisition Advisory Panel Recommendations. GAO-08-160. 
Washington, D.C.: December 20, 2007. DOD Civilian Personnel: Medical Policies for Deployed DOD Federal Civilians and Associated Compensation for Those Deployed. GAO-07-1235T. Washington, D.C.: September 18, 2007. Human Capital: DOD Needs Better Internal Controls and Visibility over Costs for Implementing Its National Security Personnel System. GAO-07-851. Washington, D.C.: July 16, 2007. Human Capital: Retirements and Anticipated New Reactor Applications Will Challenge NRC’s Workforce. GAO-07-105. Washington, D.C.: January 17, 2007. Defense Acquisitions: Tailored Approach Needed to Improve Service Acquisition Outcomes. GAO-07-20. Washington, D.C.: November 9, 2006. Human Capital: DOD’s National Security Personnel System Faces Implementation Challenges. GAO-05-730. Washington, D.C.: July 14, 2005. Human Capital: Preliminary Observations on Proposed DOD National Security Personnel System Regulations. GAO-05-432T. Washington, D.C.: March 15, 2005. DOD Civilian Personnel: Comprehensive Strategic Workforce Plans Needed. GAO-04-753. Washington, D.C.: June 30, 2004. Human Capital: A Guide for Assessing Strategic Training and Development Efforts in the Federal Government. GAO-04-546G. Washington, D.C.: March 2004. Information Technology Management: Governmentwide Strategic Planning, Performance Measurement, and Investment Management Can Be Further Improved. GAO-04-49. Washington, D.C.: January 12, 2004. Human Capital: Key Principles for Effective Strategic Workforce Planning. GAO-04-39. Washington, D.C.: December 11, 2003. Human Capital: Insights for U.S. Agencies from Other Countries’ Succession Planning and Management Initiatives. GAO-03-914. Washington, D.C.: September 15, 2003. DOD Personnel: Documentation of the Army’s Civilian Workforce-Planning Model Needed to Enhance Credibility. GAO-03-1046. Washington, D.C.: August 22, 2003. DOD Personnel: DOD Comments on GAO’s Report on DOD’s Civilian Human Capital Strategic Planning. GAO-03-690R. 
Washington, D.C.: April 18, 2003. Having the right number of civilian personnel with the right skills is critical to achieving the Department of Defense's (DOD) mission. With more than 50 percent of its civilian workforce (about 700,000 civilians) eligible to retire in the next few years, DOD may be faced with deciding how to fill numerous mission-critical positions—some involving senior leadership. The National Defense Authorization Act (NDAA) for Fiscal Year (FY) 2006 requires DOD to develop a strategic human capital plan, update it annually through 2010, and address eight requirements. GAO previously found that DOD's 2007 plan did not meet most requirements. The 2007 NDAA added nine requirements to the annual update to shape DOD's senior leader workforce. GAO was asked to assess the extent to which DOD's 2008 update addressed (1) the 2006 human capital planning requirements, (2) the 2007 senior leader requirements, and (3) key factors that may affect civilian workforce planning. GAO analyzed the update, compared it with the requirements, and reviewed factors identified in the update and prior GAO work. While DOD's 2008 update to its strategic human capital plan, when compared with the first plan, shows progress in addressing the FY 2006 NDAA requirements, the update only partially addresses each of the act's requirements. For example, DOD identified 25 critical skills and competencies—referred to as enterprisewide mission-critical occupations, which included logistics management and medical occupations. The update, however, does not contain assessments for over half of the 25 occupations, and the completed assessments of future enterprisewide mission-critical occupations do not cover the required 10-year period. Also, DOD's update included analyses of "gaps," or differences between the existing and future workforce, for about half of the 25 occupations. 
Finally, DOD's update partially addressed the act's requirements for a plan of action for closing the gaps in DOD's civilian workforce. Although DOD recently established a program management office whose responsibility is to monitor DOD's updates to the strategic human capital plan, the office does not have and does not plan to have a performance plan—a road map—that articulates how the NDAA requirements will be met. Until such a plan is developed, DOD may not be able to design the best strategies to address the legislative requirements and meet its civilian workforce needs. DOD's 2008 update and related documentation address four of the nine requirements in the FY 2007 NDAA for DOD's senior leader workforce and partially address the remaining five. For example, the update identifies a plan of action to address, among other things, changes in the number of authorized senior leaders. However, the update noted that DOD had conducted only initial leadership assessments as a first step in identifying some of its needs, capabilities, and gaps in the existing or projected senior leader workforce and stated that the final assessments would not be completed until the summer of 2009. Although DOD recently established an executive management office to manage the career life cycle of DOD senior leaders, as well as the FY 2007 NDAA requirements, this office has not developed and does not plan to develop a performance plan to address the NDAA-related requirements. While DOD's 2008 update identified some key factors that could affect civilian workforce plans, such as base closures and legislation requiring the use of government employees for certain functions, it does not include strategies for addressing these factors. 
For example, the update noted that DOD may consider using government employees to perform, among other things, an activity performed by a contractor when an economic analysis shows DOD civilian employees are the low-cost providers, but DOD does not provide a strategy for doing so. Further, GAO's body of work has noted a similar factor not discussed in DOD's update—DOD's extensive reliance on contractors and its long-standing challenges in developing a civilian workforce strategy to address the use of contractors and the appropriate mix of contractors and civilians. Without strategies that address key factors like the use of contractors, DOD may not have the right number of people, in the right place, at the right time, and at a reasonable cost to achieve its mission in the future.
The general well-being of children and families is a critical national policy goal. Current priorities are aimed at protecting children and preserving families, including meeting the needs of millions of parents who annually seek child support for their eligible children. However, when noncustodial parents fail to provide financial support, millions of children must rely on welfare programs. In 1995, over 9 million of the 13.6 million people receiving benefits from the Aid to Families With Dependent Children (AFDC) program were children. The Congress created the national Child Support Enforcement Program in 1975 as title IV-D of the Social Security Act. This intergovernmental program involves federal, state, and local governments. The Department of Health and Human Services’ (HHS) regional office staff and the Office of Child Support Enforcement (OCSE) oversee the state-administered programs. The purpose of the program is to increase collections from noncustodial parents and reduce federal, state, and local welfare expenditures. As shown in figure 1.1, which plots total collections in billions of dollars, reported collections in fiscal year 1995 were 80 percent higher than they were in 1990. The number of reported child support cases has also increased 60 percent—from 13 million to 20 million cases—over that same time period. As a result, according to HHS, the percentage of cases in which collections are being made has remained at about 18 to 20 percent. Families entering the Child Support Enforcement Program require different combinations of services at different times, and child support enforcement agencies are directly responsible for providing these services. For instance, in some cases the child’s paternity has not been established and the location of the alleged father is unknown. In these cases, the custodial parent needs help with every step: locating the alleged father, establishing paternity, obtaining and enforcing a child support order, and collecting the support payment. 
In other cases, the custodial parent may already have a child support order; in such a case, the child support enforcement agency must review and possibly modify the order as a result of changes in the employment status or other circumstances of the noncustodial parent before tackling enforcement. State child support enforcement programs are organized in significantly different ways. They report to different state agencies and follow different policies and procedures. In addition, relationships between the state child support enforcement programs and other state agencies differ. These characteristics usually vary by the type of service delivery structure, levels of court involvement required by state family law, population distribution, and other variables. For example, some state child support agencies manage their programs centrally (operating a number of state offices), while others allow the counties or other governmental entities or even private companies to manage the programs locally. Growing caseloads, increased costs, and social demands have given rise to the need to implement expedited processes for establishing and enforcing payment of child support. As such, automation was (and still is) seen by many, including the federal government, as an effective tool for addressing this need. In 1980, the Congress promoted the development of automated systems that could improve the performance of the child support program. Public Law 96-265 authorized the federal government to pay up to 90 percent of the states’ total costs incurred in planning, designing, developing, installing, or enhancing these systems. The systems are required by OCSE to be implemented statewide and be capable of carrying out mandatory functional requirements, including case initiation, case management, financial management, enforcement, security, privacy, and reporting. Incorporating these requirements can help locate noncustodial parents and monitor child support cases. 
Since 1981, the federal government has spent over $2 billion for automated systems to assist states in collecting child support. The Family Support Act of 1988 mandated that by October 1, 1995, each state have a fully operational automated child support system that meets federal requirements. At that time, the 90-percent development funding was to be discontinued. In addition, if a state did not have its system certified as fully operational by this date, the act declared that the state’s child support program may have its program funding reduced. However, by October 1, 1995, only five states had met the deadline. Therefore, the Congress passed Public Law 104-35, extending the deadline to October 1, 1997. Developing child support enforcement systems is a joint federal and state responsibility. In providing most of the funding for systems, the federal government, through OCSE, is responsible for providing leadership, technical assistance, and standards for effective systems development. OCSE is also responsible for assessing states’ automated systems and ensuring that states are effectively using the 90-percent funding. To receive 90-percent federal funding for the development of an automated child support enforcement system, a state is required to develop and submit an advance planning document (APD) to OCSE, describing its proposed system. The APD is reviewed by OCSE’s Division of Child Support Information Systems and by HHS’ regional, program, and financial management staff to ensure that the proposed system incorporates the minimum functional requirements and will meet federal, state, and user needs in a cost-effective manner. After the APD is approved, OCSE provides 90-percent funding for the project and monitors its progress. 
Federal regulations and OCSE guidance (1) require states to update their APDs when projects have significant changes in budget or scope and (2) give OCSE the authority to suspend funding if a state’s development does not substantially adhere to its approved plan. When the state considers its system complete, a state official requests that the federal government certify that the system meets requirements. After certification, a state is authorized to receive additional funding to maintain its operational system. While states are still trying to meet the challenges of the 1988 act, they are also faced with newer challenges. The Personal Responsibility and Work Opportunity Reconciliation Act of 1996 requires that states implement specific expedited and administrative procedures intended to expand the authority of the state child support agency and improve the efficiency of state child support programs. In order to comply with the expedited processes requirement, states have to meet specific time frames for establishing paternity and establishing and enforcing support orders. Under current law, states’ statewide systems must automatically perform specific locate, establishment, enforcement, and case management functions and maintain financial management, reporting, security, and privacy functions. In addition, under the new law, states must enhance their current statewide systems to electronically interface with other federal and state agencies. This is needed to establish, for example, central case registries and new-hire directories. Therefore, to successfully comply with the welfare reform legislation, it is critical that the states and OCSE have fully operational child support systems in place. In 1992, in response to a request from the Senate Committee on Finance, we reviewed HHS’ oversight of states’ efforts to develop automated child support enforcement systems. 
In August 1992, we issued a report citing major problems with oversight and monitoring of these development efforts. We reported that while taking timely corrective action on known problems is critical to developing well designed automated systems, OCSE had not required needed changes in some states facing serious systems problems. We, therefore, made recommendations to HHS for improvement. On June 20, 1996, Representative Henry J. Hyde requested that we conduct a follow-up review. Later, Representative Lynn C. Woolsey joined in this request. Our specific objectives were to determine (1) the status of automated state systems, including costs, (2) whether HHS had implemented our 1992 recommendations, and (3) whether HHS was providing effective federal oversight of state systems development. To accomplish these objectives, we reviewed federal laws and regulations on OCSE’s oversight of state development of automated systems. We assessed OCSE systems guidelines, policies and procedures, and correspondence with the states. We also interviewed officials in OCSE’s Office of Child Support Information Systems and Division of Audit to discuss their continued roles and responsibilities in overseeing the planning, development, and implementation of state child support enforcement systems. To update our knowledge of automated systems issues, we analyzed state planning documents, OCSE certification reports, and state audit reports of automated systems. In addition, we reviewed financial reports produced by OCSE’s statistical and reporting systems; however, we did not independently verify data contained in these reports. We coordinated with the HHS Office of Inspector General and reviewed, analyzed, and summarized the results of its nationwide child support systems state survey. We also interviewed selected contractors developing and implementing child support enforcement systems. 
Further, we conducted a focus group of 18 state officials, representing 14 states (California, Connecticut, Delaware, Georgia, Iowa, Louisiana, Massachusetts, Michigan, Minnesota, Missouri, Nebraska, New York, Ohio, and Texas) and Los Angeles County to determine the benefits, barriers, and solutions to developing automated child support systems. We performed our work at OCSE headquarters in Washington, D.C. We also surveyed all 10 HHS regional offices and visited 5 (Atlanta, Dallas, Denver, New York, and Philadelphia) to gain an understanding of the history of each state’s development effort and of OCSE’s role in providing regional oversight and technical assistance. Further, we visited six states (Alabama, California, Massachusetts, Ohio, Texas, and Washington) and Los Angeles County. We selected these locations based on the following criteria: levels of funding requested, methods used to develop systems (e.g., in-house, contractor, and combination of in-house and contractor), caseload, geographic location, phase of development (e.g., pilot, implementation, enhancement, in operation), and level of certification. During our site visits we assessed the systems’ status, best practices, and barriers to implementing systems using relevant components of our system assessment methodology. We also reviewed various state and contractor systems-related documents and correspondence and interviewed state agency officials. We conducted this work between August 1996 and March 1997, in accordance with generally accepted government auditing standards. We requested written comments from the Secretary of Health and Human Services or her designee. The Inspector General provided us with written comments, which are discussed in chapter 6 and reprinted in appendix IV. Many states have made progress in their automation projects, and state officials report that the systems have already demonstrated benefits. 
However, some states’ costs are twice as high as originally estimated, and the extent of final costs is not yet known. Progress in developing systems varies—some states have automated many features, while others are in the earlier phases of development and may not be certified or operational by the October 1, 1997, deadline. According to state program and systems managers, child support enforcement systems have improved program effectiveness and worker productivity by automating inefficient, labor-intensive processes and monitoring program activities. Systems have improved efficiency by automating the manual tasks of preparing legal documents related to support orders and calculating collections and distributions, including interest payments. Further, automated systems can help locate absent parents through interfaces with a number of state and federal databases more efficiently than could the old, manual process. These systems have also improved tracking of paternity establishment and enforcement actions. The following examples show specific reported improvements in program performance since states began developing their automated systems. According to one systems official, while it is difficult to attribute benefits entirely to the system, “the system has changed the way business is done in the child support office.” The automated system assisted the state’s staff in increasing the number of parents located from almost 239,000 in fiscal year 1995 to over 581,000 in fiscal year 1996—a reported increase of 143 percent. Additionally, while using the system from July 1994 through December 1996, the staff increased the number of support orders established by over 78 percent, the number of paternities established by almost 89 percent, and child support collections by almost 13 percent. Officials from another state noted that staff performance increased with the new system because it gives staff a new tool to use to improve their productivity. 
Collections per employee have more than doubled—from about $162,000 to $343,000 annually. The system has also helped the state sharply reduce the time required to process payments: turnaround for checks to nonwelfare custodial parents dropped from 29 days to less than 1, with payments now processed in seconds and checks issued within 24 hours. Officials from the same state also reported that the automated system allowed cases to be viewed on-line by several individuals simultaneously, eliminating the bottlenecks created by manually searching, retrieving, and delivering hard copy case files. Before the state implemented its new system, 11 percent of the child support staff was dedicated to the manual process of retrieving files. States have spent billions of dollars on automated child support enforcement systems. Costs for developing and operating these systems continue to mount, while progress in developing systems varies. Despite the escalating costs, only 12 systems have been certified, and as many as 14 states may not meet the October 1997 deadline. According to OCSE records, states have spent over $2.6 billion since the early 1980s to develop, operate, maintain, and modify county and statewide automated child support systems. The federal government has paid 66 to 90 percent of these costs, amounting to more than $2 billion. Since 1980, federal expenditures for child support enforcement systems have risen dramatically. Figure 2.1 shows the history of federal funding for these systems from fiscal year 1981 through fiscal year 1996. As the chart reveals, federal spending escalated as states began working to comply with the 1988 act. Appendix I provides the total reported costs for each state’s child support system and the federal and state shares of those expenditures, and appendix II provides the enhanced and regular federal expenditures. 
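The performance gains reported by state officials above can be checked with simple arithmetic. The following is a minimal sketch using the rounded figures as reported (the helper name `pct_increase` is ours, for illustration only):

```python
def pct_increase(before, after):
    """Percentage increase from a baseline value."""
    return (after - before) / before * 100

# Parents located rose from almost 239,000 (FY 1995) to over 581,000 (FY 1996).
located_gain = pct_increase(239_000, 581_000)
print(round(located_gain))  # roughly 143, consistent with the reported 143 percent

# Annual collections per employee rose from about $162,000 to $343,000.
collections_ratio = 343_000 / 162_000
print(round(collections_ratio, 1))  # about 2.1, i.e., "more than doubled"
```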
Although the 90-percent enhanced funding ended on October 1, 1995, the Congress later retroactively extended it to October 1, 1997, for those states having approved enhanced funding in their APDs as of September 30, 1995. States generally underestimated the costs of developing and operating child support enforcement systems. During our seven site visits, we compared the projected costs in the original APDs with the most recent estimates. While two sites’ original estimates were fairly accurate, the remaining five were significantly understated; in total, current cost estimates for the states we visited are about twice as high as originally planned. In addition, at least 10 states are now discovering that their systems will cost more to operate once they are completed. While these states expected cost increases as a result of added system functionality, increased information storage, and the use of sophisticated databases, estimated operating costs for some new systems may be even higher than anticipated. For example, one state’s initial estimate showed the new system’s data processing costs would be three to five times higher than those of the old system. However, according to a state official, those costs will likely be six to seven times higher than the current system’s operating costs. Operating that state’s new system may cost nearly $7 million more annually than the old system. Further, costs for developing and operating child support systems have varied greatly among the states—from a low of $1.5 million to a high of $344 million. The difference can be attributed to a variety of factors, including caseload size; whether the states or the counties administer the child support program; the number of attempts states made to develop child support enforcement systems; the way the systems were developed, that is, by modifying an existing system or developing a new one; and the kind of system being developed. 
Only five systems were certified and seven conditionally certified as of March 31, 1997. OCSE grants full certification when a system meets all functional requirements and conditional certification when the system needs only minor corrections that do not affect statewide operation. Figure 2.2 indicates which states have certified and conditionally certified systems as of March 31, 1997. The certified and conditionally certified states represent only 14 percent of the nation’s reported child support caseload. Further, according to OCSE’s director of state child support information systems, as many as 14 states—6 with caseloads over 500,000—may not have statewide systems that fully meet certification requirements in place by this October 1. These states represent 44 percent of the nation’s child support caseload. OCSE does not yet know how long it will take or how much it will cost to bring these states into compliance with federal requirements. In addition, 2 of these states that chose to update their existing systems rather than develop new child support enforcement systems may need to redesign their systems. Responding to an HHS Office of Inspector General survey, 36 of the 42 states that are not certified reported that they will meet the 1997 deadline. However, this task may well present a challenge for many of them. While almost two-thirds of the states reported that they were either enhancing operational systems to meet certification requirements or in the conversion or implementation phases, the remaining one-third of the states responding to the 1996 survey stated that parts of their systems are only in the design, programming, or testing phases of systems development—with major phases to be completed, including conversion and statewide implementation. State systems officials in our focus group considered conversion to be one of the most difficult problems and a barrier to successful implementation. 
All too often an organization’s inability to take basic but necessary steps to decrease systems development risks leads to failure. Problems consistently identified in reviews by GAO and others include information systems that do not meet users’ needs, exceed cost estimates, or take significantly longer than expected to complete. In its efforts to assist in the development of automated, statewide child support enforcement systems, OCSE is no exception. The agency did not define requirements promptly, adequately assess systems issues prior to mandating a transfer policy, or seek to identify and aggressively correct problems early in the development process. The lack of sound, timely federal guidance, coupled with some states’ own inadequate systems approaches, caused systems development activities to proceed with increased risks. The federal and state governments and private industry recognize that an investment of time and resources in requirements definition has the biggest program payoff in the development of systems that are on time, cost-effective, and meet the needs of their users. Major systems decisions hinge on baseline requirements; these requirements, therefore, must be defined early. Without them, reasonable estimates of the scope, complexity, cost, and length of a project cannot be adequately developed. In addition, failure to clearly and accurately define requirements may preclude alternatives, restrict competition, and further increase the risk of cost and schedule overruns. According to OCSE’s director of state child support information systems, the agency was expected to develop federal requirements for the statewide systems by October 1990. However, OCSE did not publish federal regulations—which described in general the program and automated systems—until October 1992. The agency did provide draft systems development guidance—functional requirements—to the states, but it was not disseminated until July 1992. 
OCSE did not provide the states with final systems functional requirements until June 1993. OCSE acknowledges that the federal requirements were late; it attributes this primarily to its not using an incremental approach in releasing the requirements to the states. Rather than issuing certain requirements as they were defined, it waited until all requirements could be issued together. This was done because OCSE believed that certain underlying policy issues needed to be resolved before requirements could be made final and released. In addition, because OCSE’s minimum requirements were extensive and difficult policy issues needed to be resolved, the review and approval process also contributed to the delayed issuance of requirements. According to OCSE, it took time to assess policy issues, determine the most effective way to automate related changes, and accurately define related requirements. For example, according to OCSE, before it could even begin to define requirements related to the replacement of monthly mail-in notices (e.g., of a client’s child support status) with telephone recordings, complex policy decisions had to be considered. Another example of a policy issue needing resolution, according to OCSE, was related to guidance on financial distributions. For instance, OCSE had to assess how systems would handle the required “$50 pass-through” policy of the program. This policy states that the first $50 of current child support payments collected for a child also covered under Aid to Families With Dependent Children (AFDC) must be delivered to the mother of that child rather than to the state AFDC office. While this policy sounds simple, it presented a certain degree of administrative complexity, especially for cases in which support payments were not made on time. In such a case, regardless of how many months a payment has been in arrears, only $50 (for the current month) goes directly to the mother. 
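The administrative complexity of the "$50 pass-through" described above can be illustrated with a small sketch. This is a simplification we constructed from the report's description, not the actual distribution algorithm; real AFDC distribution rules involved many more categories (the function name `apply_pass_through` is hypothetical):

```python
def apply_pass_through(collection, pass_through=50):
    """Split a monthly child support collection for an AFDC case under the
    "$50 pass-through" rule as described above: the first $50 of the current
    month's support goes directly to the custodial parent, and the remainder
    goes to the state AFDC office. Only one $50 pass-through applies per
    collection, no matter how many months of arrears the payment covers.
    """
    to_parent = min(collection, pass_through)
    to_state = collection - to_parent
    return to_parent, to_state

# A $400 payment covering the current month plus several months of arrears
# still yields only a single $50 pass-through to the parent.
print(apply_pass_through(400))  # (50, 350)
```

The subtlety the report notes—that arrears do not multiply the pass-through—is exactly the kind of rule that had to be resolved before requirements could be finalized.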
We have indicated in the past that an agency in the process of defining and analyzing requirements should assess the impact of changes on other organizational elements; therefore, we agree that policy issues such as these must be addressed prior to developing detailed requirements. We also agree with OCSE that where possible, it should have finalized and issued certain requirements sooner, using an incremental approach. Delays in issuing final systems functional requirements meant more than just a late start; they compounded other problems. Uncertainties about final requirements slowed some states’ development activities and contributed to contractor problems. Seven of the 10 regions we surveyed indicated that the delay in requirements had an adverse effect on their states’ development. The following excerpts illustrate the impact of the delay on certain state projects. An official of one region (representing four states) indicated that the requirements were issued much later than needed and ranked the “lack of timely final functional requirements” the number one impediment to states’ systems development. This official added that all four of its states “wanted better clarification in black and white as to . . . what the system should look like, how it should operate . . . etc.” The regional official added that its states were “left to figure this out for themselves, then they [had] to pass a certification review that is totally subjective in the areas of functionality and level of automation.” Another regional official said that all five of its states began their projects prior to receiving final requirements; however, they were reluctant to finalize anything until final requirements were issued. 
Another HHS regional official said, “the delay in getting official regulations published impacted contracts with the vendors and was an embarrassment to ACF [yet, because of the deadline imposed], development efforts went forward.” Still another region surveyed indicated that requirements were somewhat late and, for four of its six states, this was an impediment. However, a regional official alluded to strong contractor relations as one of the primary reasons that the delay was not a problem (but otherwise could have been) for two of the region’s states. The official stated, “the effect of timing on . . . unique. Each was building one system that supported a few offices throughout each jurisdiction. Each also had a fairly good working relationship with its vendor. Because of the nature of the projects, the type of environment and working relationship with the vendor . . . .” Likewise, during our visits to individual states, four of them attributed their systems development problems to late functional requirements. One state official noted that the late functional requirements and short time frames contributed to development delays and increased costs. Another state noted that the delay in functional requirements contributed to many changes in the contract and, eventually, to contract termination. Finally, in one state, the child support systems’ development contract had to be amended to address modifications to the functional requirements. In fact, three work segments in the contract had to be added as a result of these modifications, increasing project costs by at least $210,000 due to reprogramming. The HHS Office of Inspector General has similarly reported that several states experienced systems problems as a result of late functional requirements. 
In response to the Office of Inspector General survey, one state official noted that the state’s system was designed to meet the requirements set out in the draft guidelines; once the final requirements were issued, the state had to shift to a new initiative and virtually start over. Another state official said “. . . the late issuance of the certification guide was a major factor in the decision to delay statewide implementation of the system. The delay in receiving the guide caused [the state] to compress the development cycle of subsystem[s], putting a higher risk on the success of the overall system.” In addition to being hindered by the delay in functional requirements, states encountered delays in developing systems and incurred more costs as a result of OCSE’s policy requiring states to transfer systems. Two years after the passage of the 1988 act, OCSE required states to “transfer” existing child support systems from other states or counties rather than building entirely new systems. While transferring systems was certainly a reasonable approach for saving money, at the time of this policy mandate only a few available systems had been certified as meeting OCSE’s old requirements and no systems were certified based on the more extensive 1988 act. As a result, states had difficulty transferring these systems and adapting them for their own programs. OCSE had intended for the transfer policy to be an efficient method of building systems; however, in many cases, the transfer policy actually slowed systems development and led to increased systems costs when states attempted to transfer incompatible, incomplete, or inadequately tested systems. Before issuing its transfer policy, OCSE did not perform sufficient analyses to support the requirement that states transfer systems. By not thoroughly evaluating the available alternatives, OCSE had no assurance that states would be able to transfer systems in an effective and efficient manner. 
As a result, states were faced with choosing from a limited number of systems, some of which were incomplete or unsuitable for their systems environments. OCSE established the transfer policy on October 9, 1990; it said that all states must transfer a child support system from another state or county. OCSE noted that states needed to review other states’ systems and determine how these systems varied from their own systems requirements. According to OCSE, this sharing of technology among states would decrease the installation time for automated systems and reduce the risk of systems failures due to poor system design or inadequate planning. The transfer policy required states to reuse software. Software reuse can be an appropriate part of systems development projects. According to the National Bureau of Standards, one of the most effective means of improving the productivity of software development is to increase the proportion of software that is reused. Reusable software not only increases productivity, but also improves reliability and reduces development time and cost. However, many technical, organizational, and cultural issues usually need to be resolved before widespread reuse of software should be mandated. In this case, this was not done. If properly implemented, OCSE’s transfer policy could have saved states time and money in developing child support systems. As early as 1987, we reported that sharing state systems could save OCSE time and money.However, we also noted at that time that OCSE had not adopted standards or provided adequate oversight of states’ efforts to develop compatible and transferable automated systems. Careful and detailed alternatives analyses are required prior to selecting software to be transferred. 
Analyses should consider functional requirements; standardization of data elements; compatibility of software and hardware platforms; and other factors, such as caseload processing, organizational structure, state and contractor expertise and skills, and any unique state requirements. For alternatives, state agencies should consider only completed systems that have been tested, validated, and successfully used in operation to ensure that benefits will be achieved. Two factors intensified the need for adequate planning for software reuse: states’ different organizational structures and methods of administering the program and the magnitude of changes required by OCSE’s implementation of the 1988 act. Since states administer the federal child support program, each state determines how its program will be organized and operated. These differences in state programs affect systems development. For example, county-administered states faced an additional challenge in complying with the 1988 requirement to have one statewide child support system. Those states had to build systems that considered the needs of users in all of their county offices, complicating system design. Careful planning for software reuse was especially important. Under the 1988 act and implementing regulations, states were required to obtain and track more detailed information on each absent parent, child, and custodial parent. OCSE mandated that states transfer systems before making the requirements final, so states were unable to first evaluate the ability of the transfer systems to meet those requirements. Moreover, OCSE issued the transfer mandate without performing sufficient analyses or feasibility studies of existing certified systems as potential transfer candidates. Only eight certified systems were available when the mandate was issued, and these were certified based on the 1984 requirements. 
At the same time, no automated systems were certified based on the more extensive 1988 act, making it highly unlikely that the available systems would be suitable for transfer to other states. Although OCSE made the transfer policy optional in July 1994, by then most states had already attempted to transfer systems and their development efforts were well under way. Only one state we visited noted that it had successfully transferred another system. It was among the last to transfer a system, initiating the transfer in 1994. Moreover, the state began its project after the final federal requirements were issued, conducted thorough analyses of three potential systems, and transferred a system that had already been certified as meeting the 1988 requirements. In addition, the system selected was the result of a successful, earlier transfer from another location. However, some states we visited did not take as thorough an approach and faced difficulties in attempting to transfer existing child support systems. One project team we visited spent almost $400,000 attempting to transfer a system from another state, only to discover that the transfer was not possible because the system was not compatible with its existing operation. Another state’s official explained that, to meet the mandate and the 1995 deadline, his state attempted to transfer a system that was immature, incomplete, and inadequately tested. While the state started to implement parts of the transfer system, the entire system was not delivered until a year later, increasing project costs. Yet another state attempted to have a contractor modify a transfer system that was also incomplete. Later, the entire effort was abandoned, wasting over a million dollars and contributing to a delay of several years. The HHS Office of Inspector General recently reported that 71 percent of states said that their attempts to transfer a system delayed, rather than enhanced, development of an automated system. 
In response to that survey, one state official noted, “. . . when we began our project there were no systems certified to the 1988 level. We chose one state’s system as our transfer model and wasted about a year documenting all of its deficiencies in order to justify not transferring it.” In addition, we were told by officials in several other states that they transferred only concepts from other systems—that the amount of computer code actually transferred was negligible. In our 1992 report, we stated that efforts to develop child support enforcement systems were plagued by problems, particularly in the area of federal oversight provided to states. According to laws and regulations, OCSE is responsible for continually reviewing and assessing the planning, design, development, and installation of automated systems to determine whether such systems will meet federal requirements. OCSE is required to monitor 90-percent federally funded child support systems to ensure that they are successfully developed and are cost-effective. If this is not the case and if a state is not substantially adhering to its approved plan, OCSE is authorized to suspend federal funding. Past compliance reviews conducted by OCSE’s systems division identified many deficiencies with states’ development of automated child support systems and escalating costs. For example, development of three severely flawed systems continued at a total cost of over $32 million before they were stopped and redirected by OCSE. Rather than directing needed remedial actions when these problems were identified, OCSE informed the states of the deficiencies yet continued to fund the systems based on states’ assurances that the problems would be addressed. Further, OCSE’s systems division did not routinely use audit division reports to help monitor development because it was not required to do so. 
In 1992, we recommended that OCSE (1) work with the audit division to identify and resolve systems-related problems, (2) use its authority to suspend federal funding when major problems existed, and (3) require states to implement needed corrective actions when first identified. Despite the seriousness of the problems we identified in 1992 and recurring problems since then, the only recommendation fully implemented by OCSE was the first one, regarding working with the audit division to identify and resolve systems-related problems. OCSE reviews audit reports prior to certification visits and the auditors are now members of the certification review teams. In addition, officials in both the systems and audit areas stated that communication and coordination between the two have improved substantially since our 1992 report. Even though systems costs were escalating, OCSE did not fully implement the other two recommendations. It continues to assert that the federal government should work with the states to correct deficiencies rather than take enforcement actions. However, law and regulations require that OCSE monitor the 90-percent federally funded child support systems to ensure that they are successfully developed. OCSE is authorized to suspend federal funding if a state is not substantially adhering to its approved plan. While HHS regional staff noted that OCSE either held up, reduced, or stopped funding to 18 states since the 1988 act, almost 60 percent of these reported disruptions were due to insufficient information on the required APD or the state’s exceeding its authorized funding level. OCSE believes that suspending funds is counterproductive to helping states meet the deadline. Even when funding was held up for major systems-related problems, efforts to correct these problems did not appear to be timely. 
For example, one HHS regional official suggested to OCSE that it hold up funding for projects in his region, yet, according to this official, the agency did not stop any funding until one of those projects’ “initial efforts crashed.” In another instance, OCSE had serious concerns about the status and future direction of one state’s project, staffing levels, and methods of incorporating changes in code. As a result, it held up funding for several months. However, some problems were not identified until 2 years into the project’s life cycle; as of the end of our review, OCSE was still working with the state to resolve them. In another case, OCSE identified problems with poorly documented code, inadequate planning and guidance, and contract management but did not stop federal funding. According to OCSE’s systems division director and analysts, states have primary responsibility for developing their systems and therefore the federal government should not assume a primary role in directing how states should develop systems and remedy problems. We disagree with OCSE’s approach of continuing to fund systems with serious problems that endanger the projects’ success. Such an approach involves the risk of needing to fix serious problems later in the development process, when it is much more costly and time-consuming to do so. Because of the complexity, costs, and large caseloads associated with the child support program, effective development of automated systems requires continuing oversight and strong leadership. Yet despite the pressure that federal and state agencies are under to improve their child support enforcement services, the development of statewide automated systems is hampered by ineffective federal leadership and some inadequate state development approaches. 
Major mechanisms OCSE uses to oversee states’ systems development projects include reviews of the states’ advance planning documents (APDs); advance planning document updates (APDUs); and certification reviews, which are assessments to determine if projects meet federal requirements. While these reviews provide OCSE information on states’ plans for designing automated systems, the agency does not effectively use the APDs to oversee, monitor, or control the systems development projects, and the certifications are performed too late in the process to detect and correct problems. In short, critical systems development decision points are not monitored by OCSE and reviews of states’ systems are primarily focused on determining whether all federal requirements have been met. Further, OCSE has not completed nationwide analyses or post-implementation reviews to effectively assess lessons learned, hindering its ability to provide more thorough, helpful leadership. OCSE acknowledges these facts, yet cites a management approach that holds states responsible for developing their systems; OCSE believes it lacks the technical expertise and resources needed to be involved at critical points in the development process. While the review of APDs is one of OCSE’s principal vehicles for monitoring states’ systems activities, the agency’s review is inadequate for systems monitoring because OCSE does not require a disciplined, structured approach for developing or reviewing systems. As such, the APDs are not used to measure systems development progress at key decision points; rather, according to state officials, APDs are used primarily as a funding approval mechanism only, requiring the states to report information and plans on the basis of a deadline that may or may not be realistic. Consequently, systems problems may go undetected until much later in the process when they are considerably more difficult and expensive to correct. 
APDs are written plans of action submitted by states to request federal funds for designing, developing, and implementing their systems. According to an OCSE guide, the three primary purposes of the APD process are to (1) describe in broad terms a state’s plan for managing the design, development, implementation, and operation of a system that meets federal, state, and user needs in an efficient, comprehensive, and cost-effective manner, (2) establish system and program performance goals in terms of projected costs and benefits, and (3) secure federal financial participation for the state. The document contains the state’s statement of needs and objectives, requirements analysis, and alternatives analysis. The APD also sets forth the project management plan with a cost-benefit analysis, proposed budget, and prospective cost allocations. To obtain continued federal funding throughout the system’s life, a state submits an APDU to report the system’s status and to request additional funding—annually or, if needed, more frequently. Public Law 100-485 requires that states submit APDs to OCSE and that based on the APDs, OCSE review, approve, and fund information systems. This review focuses on ensuring that each state incorporates the minimum functional requirements by the legislatively mandated date of October 1, 1997. OCSE is also required to review the security requirements, intrastate and interstate interfaces, staff resources, hardware requirements, and the feasibility of the proposed plan. HHS regional staff support OCSE by monitoring states’ development efforts and, at times, assist the states in preparing their APDs and APDUs. In addition, regional staff provide states technical assistance and suggestions to help the states comply with systems requirements. 
A well-defined, disciplined structure for systems development, covering the status of systems at critical design points, is key to preventing software development problems and encouraging strong, effective management oversight. Such a structure typically comprises phases for systems planning and analysis, design, development, and implementation. These are major milestones in any system development project and are to be used to identify risks, assess progress, and identify corrective actions needed before proceeding to the next phase. Appendix III provides an overview of typical systems development phases. The key is to identify system risks and problems early in the design, in order to avoid major failures and abandoned projects, and to help ensure that major segments do not have to be extensively recoded or redesigned. While states recognize the benefits of using a structured systems development approach and may provide OCSE with information on each of these phases in their APDUs, OCSE does not effectively use this information for determining the adequacy or progress of the systems development projects. APDUs are revised annually, rather than corresponding to major system phases. According to OCSE’s director of state child support information systems, the agency has not monitored the projects using a structured approach because it lacks the technical expertise and resources needed to be involved at critical points in the development process. Recent funding from the welfare reform legislation has allowed OCSE to conduct more frequent reviews; even so, it is critical to assess systems development activities at each phase and to identify problems and any potential risks early, before moving forward to the next phase. Further, OCSE has not provided specific guidance describing the information needed in the APDUs to assess different phases of development.
For example, while OCSE requires that the states provide schedules of systems development activities, these schedules—formats, descriptions, and structure—vary from state to state and, even within a state, may vary from year to year, making it difficult to effectively assess state progress and monitor systems development. Specifically, some APDUs do not provide information on how much data have been converted, how much code has been written, how many modules have been produced, or which portions of the modules have been tested and with what results, again hindering OCSE’s ability to effectively measure progress. While state officials indicated that the APDUs were useful for budgetary purposes, officials at six out of seven locations we visited said that APDUs are not useful in helping them manage their systems development efforts. “They are an administrative exercise to justify obtaining funding,” said one. Officials also noted that the deadline seemed to be OCSE’s primary concern; even when the deadline seemed impossible to meet, the states were forced to present inaccurate schedules. Because of the significant financial investment in information systems and their crucial role in helping locate noncustodial parents, a structured systems development approach is essential to reducing major systems risks. In several instances, even when states’ APDUs contained adequate information to identify significant problems in approach, OCSE has not required the states to correct the deficiencies. As a result, more money was spent and systems underwent further development without disciplined, structured systems development approaches, increasing the likelihood of system failures. In the case of one state we visited, as shown in figure 4.1, despite three revisions to time estimates for systems testing, OCSE approved each updated estimate. Only after the state officials told OCSE that delays and problems were occurring did the agency, in November 1996 and in February 1997, review systems progress.
State officials told us that they wanted the OCSE review to identify that the time for testing was insufficient to support the state in vendor discussions, yet the agency had not, as of March 31, 1997, reported on this matter. According to OCSE’s director of child support information systems, the review focused only on one specific functional requirement and was not a comprehensive systems review, and would not, therefore, address the software testing issue. In systems development, building and testing sections of computer code and then using test results to refine the software are critical. Testing results need to be considered in correcting errors and improving identified inefficiencies. Despite this essential need for systems testing, in the example in figure 4.1, OCSE continued to approve the APDUs and fund the project, without assessing the increasing risks. The APDUs also showed that the state was not following a structured systems development approach and was developing software while at the same time attempting to integrate software in preparation for systems testing. According to this state’s systems project director, the project’s costs increased from $30 million to over $50 million, in part because of the poor quality of software developed. If work on a later phase proceeds before an earlier phase is completed, one risks problems from condensing work and truncating the process. Completion of the automated system is now uncertain. In another state we visited, OCSE had ample indication that the state’s system was experiencing difficulty, yet failed to act until just recently. In August 1993, the state initially scheduled implementation to last until April 1996—a period of 39 months. The estimated completion date was subsequently extended twice—first in September 1995 by an additional 10 months (to February 1997), and then again in October 1996 by an additional 5 months (to July 1997). 
These extensions, taken together, prolonged the implementation phase by almost 40 percent, from an initially envisioned 39 months to 54 months. The 1995 updated estimate should have signaled to OCSE that the state’s system needed help. However, not until the state requested a substantial increase—$133 million in project costs—did OCSE, in January 1997, question the state’s systems progress. According to OCSE’s director of child support information systems, the agency should have visited this state sooner to provide technical assistance. Given that the October 1996 update judged implementation to be less than half finished, it is questionable whether the July 1997 target is even realistic. OCSE has not effectively provided program oversight by detecting and redirecting states’ approaches that are inadequate and threaten successful systems development. When planning and developing a system, a state must ensure that it meets users’ needs, provides the intended benefits to users and their constituencies, and is developed on time; otherwise, it will not be effective. Gaining commitment and support from key decisionmakers is particularly important for states developing child support systems because they serve a variety of users—district attorneys, county and state officials, and court officers. Key users’ needs should be considered, and general agreement is critical in developing systems’ requirements and design. However, not all states clearly defined their functional requirements and user needs. In one state, the system design was insufficient because it did not clearly define functional requirements and did not gain the support and involvement of key county officials. While OCSE reviewed requirements and reported minor deficiencies, it did not mention any serious problems in the state’s systems development approach. 
Yet just 1 year later, after the state had an independent contractor conduct a risk assessment of the project, state officials acknowledged that the approach needed to be completely revamped. According to OCSE’s director of child support information systems, the agency’s review for this state focused only on whether functional requirements were being met rather than the systems approach being followed. She further noted that the state is responsible for developing the system and obtaining system buy-in from the users. Because of OCSE’s large financial investment in systems—the approval of $154 million for this state alone—and the importance of gaining user acceptance for successful systems development, we believe the state and federal agencies are both responsible. Two states we visited also encountered similar problems by not involving all key players—program, systems, and management officials—in the decision-making process. In one state, the information resources management official was not involved in the systems planning and implementation. Later a disagreement occurred in policy decisions related to operating the system, resulting in an abrupt change in management of the system. Consequently, the project’s direction is uncertain and the system conversion progress has been delayed. Another state allowed its child support program office to develop its system without ensuring that key officials with technical program and managerial expertise were adequately involved. Systems problems were identified late in development, and project management was changed to put more emphasis on technical project experience. If this technical expertise had been involved initially, the system might have been more successfully planned and implemented. 
While OCSE does review and certify states’ systems to determine if functional requirements are being met, as described in the 1988 act and the agency’s implementing regulations, these reviews are conducted toward the end of a systems development project, at the states’ request, and are not performed at critical decision points (such as analysis, design, coding, and testing). Thus, these reviews are too late to identify problems and redirect approaches. Despite being the predominant stakeholder for over $2.6 billion worth of child support systems, OCSE maintains that its philosophy is to work with the states as opposed to suspending funding until problems are corrected, allowing federally financed projects to proceed when such projects do, in fact, require redirection. Two types of reviews are conducted by OCSE: functional and certification reviews. During a functional review, OCSE helps the state work on specific system requirements. For example, it may telephone or visit the state to discuss how to meet automation requirements for the noncustodial parent locating function. The certification review comprises a two-level process, level 1 and level 2. A level 1 review is performed when an automated system is installed and operational in one or more pilot locations. (OCSE created this level of review in 1990 due to state requests for agency guidance prior to statewide implementation.) A level 2 review is conducted when OCSE visits the state to determine if the system meets certification requirements. The certification review normally takes a week to perform; the OCSE review team is made up of headquarters systems officials and HHS regional personnel representing the systems, fiscal, program, and audit functions. 
Prior to a certification review, OCSE provides a questionnaire including questions on how the system meets specific federally required functions (such as case initiation and the detail supporting this function) and a test deck of financial transactions (mainly test cases of different types of child support payment distributions) to be run on the state’s system. OCSE also supports, through federal funds, child support user group meetings so state officials can meet and share related systems experiences. OCSE’s reviews—functional and certification—of state systems focus on whether functional requirements are met but lack the needed comprehensive assessment of the systems’ development approaches and schedules. Since 1991, OCSE has visited, assessed, and reported on 31 states’ system development projects. We reviewed all of these and found that 28 of the 31 functional and certification reviews focused primarily on systems’ functional requirements. Specifically, OCSE determines whether the system can initiate a new case, locate an absent parent, establish a support order, manage cases, enforce cases, perform financial distributions, perform management reporting, and maintain security/privacy. Suggestions for improving the efficiency and effectiveness of the systems’ operations are discussed in reports following these reviews. For example, recommendations include having the states consider automating more functions, such as developing an automatic tickler file to remind caseworkers of required actions or an approaching time frame. While suggestions for improved efficiency in automation are valuable, OCSE’s reviews lack a comprehensive assessment of the states’ systems development approach, including the overall design, project management, user involvement, and delays in major milestones and critical tasks.
While 3 of the 31 reviews did address systems development approaches and overall project management, OCSE officials noted that they only do this type of review for states that are experiencing delays and significant problems. Even though OCSE required states to submit APDs by October 1, 1991, all but 1 of the 31 functional and certification reviews conducted by OCSE were performed in October 1995 or later. Since OCSE only does certification reviews upon state request and toward the end of systems development, much of the federal funding had already been spent. Even when OCSE identifies state systems problems and notifies the states, corrective actions do not always follow. In one state, OCSE performed two reviews and noted that the state had serious managerial problems. There was no project manager for extended periods of time, and users did not support the project, despite the fact that a key early step in designing a proposed system is identifying and satisfying users’ needs. Even after the second review was performed and the problems noted in the first review had not been corrected, OCSE took no action to stop or delay the project or to suspend funding. Again, according to OCSE systems analysts, the agency’s main focus is to help states fix problems as they arise; agency officials believe that withholding funds is counterproductive to meeting deadlines. As a result, the federal government has approved over $50 million for this one system; it has been in partial operation—primarily in the smaller counties—since mid-1993, and its expected completion is still not known. In another example, OCSE approved a state’s proposal to meet functional requirements by developing a distributed system estimated to cost nearly $178 million to complete and maintain through 2000. This system involves developmental costs for at least 23 county databases, plus additional costs to maintain separate systems. 
While OCSE questioned the state about the additional costs of personnel, the agency’s certification process does not require that a state receive an approval before moving from one phase of development to the next. By not visiting the project to review critical design documents, the agency was not able to effectively assess the distributed processing strategy and associated costs or to suggest an alternative approach. This state now estimates that the system will cost about $311 million to complete and maintain through 2000. Further, according to state officials, the system will not meet the October 1997 deadline. Systems managers from three states we visited indicated their desire that OCSE play a more active role. Systems that are being developed are very sophisticated, and the officials said that it was important for OCSE to assess the management and direction of the project early to avoid or minimize problems later. One state official said that OCSE should see itself as a stakeholder in the development of these systems and not just a reviewer to see if certain requirements were followed. Another official noted that OCSE is scrupulously hands-off with everyone, especially the private sector. Another state official told us that because the contractor’s work was so poor and late in delivery, the state had to support the project with more of its own staff, contributing to an increased cost of $20 million. When the state asked for assistance from OCSE on how to handle the contractor, state officials were told that “it was a state contract and it had to be resolved at that level.” Further, while a recent HHS Office of Inspector General survey noted that 70 percent of the states felt that OCSE’s guidance was good to excellent—including the certification guide, clarification of requirements, and questions and answers on functional requirements—43 percent said they needed additional technical assistance. One state official said “there has been very little monitoring to date.
The delays will come when we request certification and OCSE doesn’t like what it sees.” Another state official noted, “It would have been helpful to have had more compliance reviews and technical reviews of designs, but it was probably not practical given OCSE’s staffing resources.” Finally, another state official noted “I’ve only known OCSE to be the gatekeeper with regard to funding and ultimate certification. I don’t have any experience to see them in another role.” The narrowness of OCSE’s reviews limits its ability to gain a nationwide perspective on the status of states’ systems development; this, in turn, hinders effective leadership in earlier stages. OCSE has completed neither a comprehensive, nationwide analysis nor post-implementation reviews to determine whether state systems are sound financial investments. Such analyses are essential to assessing the ongoing progress of child support systems and evaluating the impact of automated systems on program goals and objectives—including any lessons learned. While the Administration for Children and Families (ACF) developed an automated system to help the agency perform individual state and nationwide analyses, we found the system was not being fully used, was not user friendly, and contained errors in about half of the states’ data. Because OCSE has not conducted a nationwide assessment, it has not analyzed the hardware, software, database structures, and networks supporting state child support systems; as a result, state officials have had to discuss these issues through informal means. If OCSE had performed a nationwide analysis, it would have a sound basis for encouraging states to share innovative database designs, software, and other technologies for greater efficiencies and cost savings. Further, analyses of systems costs, benefits, and schedules from a nationwide perspective would help identify where improvements are needed in a timely manner. 
Aggregate data on projects help identify recurring problems, successes, and other trends for decision-making purposes. To collect states’ data more effectively, assess systems costs, and monitor projects nationwide, ACF created the State System Approval Information System (SSAIS). This was designed to establish a more accurate way of tracking state systems projects. SSAIS tracks the historical data on automated systems projects—including the child support program—on a state-by-state basis. Users may access the following data on any state project: (1) funding requests, reviews, and approvals, (2) names of contractors, (3) completed systems reviews, and (4) notes from systems reviews. While we commend ACF’s actions to develop this system, we found that SSAIS recorded incorrect funding levels for nearly half of the projects. During our comparison of OCSE’s approval letters and the data in SSAIS, we found errors, such as duplicate entries and entries for systems projects no longer underway, and inconsistencies, such as entries for some states that included planning costs while entries for other states did not. Specifically, SSAIS showed that the states were authorized to spend nearly $1.9 billion on automated child support systems projects, while hard copy approval letters maintained by OCSE indicated that the states were authorized to spend about $100 million less. OCSE’s director of state child support information systems acknowledged that SSAIS is not yet comprehensive, complete, or user friendly. She said that for these reasons, OCSE was not fully using or relying on the SSAIS entries at the time of our review. However, she acknowledged that our review led the division to assign a higher priority to correcting discrepancies in SSAIS and to establishing a consistent policy regarding entries for planning costs. 
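The discrepancies described above, duplicate entries and totals that disagree with the authoritative approval letters, are the kind a routine automated reconciliation can surface. The sketch below is purely illustrative and assumes invented record layouts and dollar figures; it is not SSAIS's actual design:

```python
# Illustrative reconciliation of a tracking system against authoritative
# approval letters: flag duplicate entries and per-state total mismatches.
# All record layouts and amounts here are hypothetical.
from collections import Counter

def reconcile(tracking_entries, approval_letters):
    """Return duplicate (state, approval_id) pairs and per-state total mismatches."""
    dupes = [key for key, count in Counter(
        (e["state"], e["approval_id"]) for e in tracking_entries).items() if count > 1]
    tracked, approved = {}, {}
    for e in tracking_entries:
        tracked[e["state"]] = tracked.get(e["state"], 0) + e["amount"]
    for a in approval_letters:
        approved[a["state"]] = approved.get(a["state"], 0) + a["amount"]
    mismatches = {s: (tracked.get(s, 0), approved.get(s, 0))
                  for s in set(tracked) | set(approved)
                  if tracked.get(s, 0) != approved.get(s, 0)}
    return dupes, mismatches

entries = [
    {"state": "A", "approval_id": 1, "amount": 40_000_000},
    {"state": "A", "approval_id": 1, "amount": 40_000_000},  # duplicate entry
    {"state": "B", "approval_id": 2, "amount": 25_000_000},
]
letters = [
    {"state": "A", "approval_id": 1, "amount": 40_000_000},
    {"state": "B", "approval_id": 2, "amount": 25_000_000},
]
dupes, mismatches = reconcile(entries, letters)
print(dupes)       # [('A', 1)]
print(mismatches)  # {'A': (80000000, 40000000)}
```

A check of this sort, run whenever entries are added, would have caught both the duplicate entries and the roughly $100 million gap between SSAIS and the hard-copy approval letters before the data were relied on.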
In addition to tracking systems development nationwide, post-implementation reviews are the basic means of ensuring that systems meet program objectives and identify techniques for improving work processes, data integrity, and project management—thereby avoiding costly systems mistakes. This information is also helpful in identifying the benefits of information technology projects and prioritizing technology investments that best meet mission needs. To date, OCSE has not completed post-implementation reviews on any of the 12 certified or conditionally certified child support systems. OCSE’s information systems director called post-implementation reviews important. However, she said that until recently, resource constraints limited OCSE’s ability to complete such reviews. Instead, OCSE focused its resources on assessing the states’ progress in meeting system certification requirements. Under welfare reform, 1 percent of the federal share of child support payments collected annually will be provided for state child support programs and systems oversight. According to OCSE’s information systems director, this will allow OCSE to conduct post-implementation reviews. Recently enacted welfare reform legislation substantially increases the importance of collecting child support payments from noncustodial parents, since for welfare recipients who lose eligibility, child support may be their only remaining source of income. As such, it will be even more important that state automated systems operate correctly and efficiently in helping eligible child support recipients collect funds due them. The law also places specific new requirements on states relating to the functioning of their systems. Recognizing the importance of these developments, OCSE plans to change its approach and issue functional requirements incrementally, and has taken steps to work with the states; however, the impact of welfare reform and associated costs is not yet known. 
Another demand on systems development will be monitoring and ensuring that new statewide child support enforcement systems, as well as existing systems that interface with the new systems, will process date-sensitive information correctly in the year 2000 and beyond. In August 1996, the Congress enacted the Personal Responsibility and Work Opportunity Reconciliation Act, fundamentally changing the nation’s welfare system into one that requires work in exchange for a 5-year program of assistance; implementing many of its most critical features involves automated systems. The law contains work requirements, a performance bonus that rewards states for moving welfare recipients into jobs, and comprehensive child support enforcement measures. It also provides support for families moving from welfare to work—including increased funding for child care and guaranteed medical coverage. Provisions are also included to improve automation in order to increase paternity establishment, obtain more information on work and residence locations of noncustodial parents, and process child support orders and collections. Both the states and ACF will be required to address these provisions by developing new databases or enhancing existing automated child support systems before October 1, 2000. A $400 million cap has been placed on enhanced federal matching funds through 2001 for development costs of automated systems, and funds are to be allocated to the states on the basis of existing workloads and level of needed automation. Welfare reform further underscores the need for streamlined business processes and automated systems for the child support program. Since welfare reform establishes time limits and eligibility restrictions on individuals in the Temporary Assistance for Needy Families block grant, states are being faced with the need to increase child support collections. 
According to experts, this likely will force some states to manage child support cases differently and require modifications to existing laws governing child support operations. States will be required to automate many child support operations to more efficiently disclose, exchange, and compare information on noncustodial parents owing delinquent support payments. Some of the more significant state welfare reform systems requirements include:

- establishing, by October 1, 1997, or following the close of the next regular legislative session, a statewide system for tracking paternity orders and acknowledgements of paternity;
- developing, by October 1, 1997, a new-hire registry on which employers will report information on employees recently hired, with the capability of reporting the information to a national database and issuing wage withholding notices to employers within 2 business days, and making data comparisons with the case registry database by May 1, 1998;
- developing, by October 1, 1998, a central case registry for all child support cases and support orders established or modified in the state after that date and, as of that date, capable of making data comparisons; and
- establishing, by October 1, 1998 (or by October 1, 1999, if court administered), a centralized unit to collect and disburse child support payments, and by October 1, 2000, a statewide child support system that meets all requirements.

To comply with these mandates, many states will have to reassess the way child support cases are managed not only administratively but electronically as well. This may include developing new databases and electronic links to other public and private organizations, including financial institutions; credit bureaus; the Internal Revenue Service; and state agencies—including judicial, corrections, licensing, business ownership, motor vehicle, labor, vital statistics, and Medicaid.
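At their core, the mandated data comparisons amount to record matching between databases. As a purely hypothetical sketch, matching employer-reported new hires against a state case registry of noncustodial parents, keyed on Social Security number, might look like the following (the record layouts are assumptions for illustration, not the federal specification):

```python
# Hypothetical sketch of a mandated data comparison: matching new-hire
# reports against a case registry of noncustodial parents by SSN.
# A match is what would trigger a wage-withholding notice to the employer.
def match_new_hires(new_hires, case_registry):
    """Return (case_id, hire record) pairs where a new hire's SSN
    matches a noncustodial parent in the case registry."""
    by_ssn = {c["ncp_ssn"]: c["case_id"] for c in case_registry}
    return [(by_ssn[h["ssn"]], h) for h in new_hires if h["ssn"] in by_ssn]

registry = [{"case_id": "C-100", "ncp_ssn": "123-45-6789"}]
hires = [
    {"ssn": "123-45-6789", "employer": "Acme Corp"},    # matches case C-100
    {"ssn": "987-65-4321", "employer": "Widgets Inc"},  # no open case
]
hits = match_new_hires(hires, registry)
print(hits)  # [('C-100', {'ssn': '123-45-6789', 'employer': 'Acme Corp'})]
```

The same comparison pattern underlies the federal-state interfaces: state registries feed the national databases, which in turn run cross-state matches of this kind at much larger scale.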
For example, the requirement to develop a registry of paternities will likely result in the development of a new database that interfaces with departments or bureaus responsible for tracking statewide births. However, tracking these data electronically may be a challenge since some states have not automated the departments or bureaus responsible for birth certificates. The states will also have to develop automated interfaces with the national Federal Parent Locator Service and the yet-to-be-developed federal case registry and new-hire registry databases. The Federal Parent Locator Service is an electronic system that cross-matches data to help locate noncustodial parents across state lines through links with other state systems and existing national databases, such as those of the Internal Revenue Service, Social Security Administration, and Department of Labor. Both the new-hire registry and the federal case registry will also need to be designed to receive and compare state data on child support cases and noncustodial parents. The states, in turn, will be required to develop similar databases that electronically interface with these national systems. The federal new-hire and case registry systems must be completed by October 1, 1997, and October 1, 1998, respectively. OCSE has contracted with several vendors to develop the software for these databases and to provide states with technical assistance. Many states will have to adapt their existing systems and change laws governing child support operations to implement many of these systems requirements. For example, three states we visited indicated that existing laws governing the child support program and new-hire reporting requirements for employers would have to be amended or rescinded before mandated systems requirements could be implemented. Other states will have to make welfare reform system changes while finishing work on child support systems mandated by the 1988 act. 
While OCSE has initiated some steps to identify welfare reform systems issues and plans to change its approach in issuing functional requirements, the agency does not yet know the impact the legislation will have on state systems. As of March 31, 1997, OCSE had not developed functional requirements for implementing welfare reform or fully analyzed the impact of these provisions on existing state child support systems. Guidance for developing the new-hire registries had not been completed, even though these registries are due to be in operation by October 1, 1997. However, OCSE does plan to apply lessons learned from technology projects mandated as part of the 1988 Family Support Act and release technical guidance for systems changes required by welfare reform incrementally. The early release of technical guidance should help states decide on systems requirements as soon as possible, minimizing project delays. With the increased funding being made available through welfare reform, OCSE plans to conduct more on-site reviews of child support systems projects to help identify and prevent costly systems development problems during earlier stages of the projects. The agency has also supported ACF user groups and established electronic information bulletin boards to identify and share information on systems issues. Further, OCSE is participating in welfare reform work groups with the states to discuss policy and systems-related issues. In January 1997, the agency also created a federal, state, and local government initiative to work with the eight largest states—representing almost 50 percent of the child support cases. This initiative focuses on improving program performance, which may include automated systems issues. In addition, the agency recently queried the states to identify technical support needs and planned to issue, in May 1997, a national plan to better address OCSE technical assistance to the states’ systems activities. 
Despite these early attempts to work with the states, according to OCSE’s information systems director, the agency does not know the impact welfare reform will have on the states’ child support systems. She noted that until the requirements are defined, the extent of systems changes and their costs are not known. The change in century could also have a significant impact on state systems that process date-dependent information related to child support. Ensuring that all state child support enforcement systems adequately process date-dependent information is critical; among these are systems that must interface directly with and provide information to the newly developed child support enforcement systems. Correcting noncompliant year-2000 software may be expensive. Many older state systems that will still be in operation in 2000 were programmed using 2 digits to represent the year—such as “97” for 1997. However, in such a format 2000 is indistinguishable from 1900. OCSE has stated that it has informed the states that both the new child support systems and their applications software under development, as well as the existing systems that must still interface with the new statewide systems and their applications software, must be year-2000 compliant. Thanks to technology, many states are better able to locate noncustodial parents who owe child support payments, seize government tax refunds or benefits, and issue child support payments to families more efficiently. But this progress has been expensive. The cost of developing state child support enforcement systems has risen over the past 15 years to the point that it now exceeds $2.6 billion, of which $2 billion is federal funds. The amount remaining to be spent to bring all states into full legal compliance is unknown. At the time of our review, most states did not yet have fully functional child support enforcement systems.
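The two-digit-year ambiguity described above, and the "windowing" repair commonly used in year-2000 remediation, can be shown in a small sketch. The pivot year here is an assumption for illustration; actual state remediation strategies varied:

```python
# Two-digit years: in 2000 ("00"), a 1997 record ("97") looks like it
# lies 97 years in the future rather than 3 years in the past.
def naive_years_since(yy_then, yy_now):
    # Naive two-digit arithmetic, as in the older state systems described.
    return yy_now - yy_then

def windowed_year(yy, pivot=50):
    # A common repair: interpret 00-49 as 2000-2049 and 50-99 as 1950-1999.
    # The pivot of 50 is an illustrative assumption.
    return 2000 + yy if yy < pivot else 1900 + yy

print(naive_years_since(97, 0))              # -97 (wrong: record seems to be in the future)
print(windowed_year(0) - windowed_year(97))  # 3 (correct elapsed years)
```

Windowing only defers the problem past the chosen pivot; the durable fix, which new statewide systems could adopt outright, is storing four-digit years.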
Aside from new requirements resulting from welfare legislation, only 12 states had federally certified child support enforcement systems; as many as 14 states—responsible for about 44 percent of the national caseload—may well miss the October 1, 1997, deadline for completing their automated systems. The causes are widespread. States have underestimated the magnitude, complexity, and costs of their projects and operations, and they could have received better guidance and assistance from the federal government, specifically OCSE. The lack of progress in the development of state child support systems also can be attributed partly to the agency's limited leadership and oversight and partly to some states' inadequate systems approaches. OCSE's release of final functional requirements for the state systems was late, which encouraged some states to automate many tasks without adequate requirements management or control. Even when ready, some states hesitated to make their systems' requirements final; the deadlines, however, loomed with or without final requirements. Another factor was OCSE's mandated transfer policy, which was premature and poorly implemented; this alone caused long-term problems, increased costs, and delays. Against this backdrop, which included the failure to fully implement recommendations we made some 5 years ago, OCSE allowed state systems with serious problems to proceed, thus escalating spending with no assurance that effective, efficient systems would result—and with many indicators to the contrary. Specifically, OCSE did not establish levels of oversight and technical review commensurate with the size and complexity of this nationwide undertaking. It did not require states to follow a structured systems development approach, nor did it assess progress at critical decision points, thereby missing opportunities to intervene and successfully redirect systems development.
OCSE relied on required annual planning documents, which were optimistic projections and for many states did not relate to the critical phases of system development. Certifications were narrowly focused and conducted only at a state's request, when the state was ready. While OCSE has supported state-to-state interaction with users' groups, the agency itself cannot develop a truly nationwide perspective without an understanding of the trends that typify development of individual state systems. Lacking this knowledge, OCSE cannot disseminate valuable information to states in earlier stages of development. During the last 5 years, when much of the money has been spent and when it was most critical for OCSE to take a leadership role and evaluate states' efforts, agency and HHS regional officials noted that their oversight was hindered by limited technical expertise and resources. Critical areas of systems expertise—systems development, systems engineering, and program management—are essential to assess how effectively systems are being implemented. Because of the magnitude of the caseload, the funds being provided, and the importance of the program's mission, it is essential that both federal and state officials take responsibility for developing effective and efficient automated child support systems. While evaluating states' efforts is one major component of OCSE's role, it is important that the agency consider itself a stakeholder in these efforts. The problem appears to stem from OCSE's view of its role—one of merely monitoring requirements and approving funds rather than being held accountable for effective systems development approaches. With the enactment of welfare reform, OCSE's role becomes much more important: for those who may no longer be eligible for welfare benefits and rely solely on child support, the effectiveness of their state's system will be critical.
Effective, strong federal leadership will be necessary if we are determined to support those who rely on these automated systems. We are making several recommendations to increase the likelihood of developing state automated child support systems that will perform as required. To maximize the federal government's return on costly technology investments, we recommend that the Secretary of Health and Human Services direct and ensure that the Assistant Secretary of the Administration for Children and Families take the following actions.

- Develop and implement a structured approach to reviewing automation projects to ensure that significant systems development milestones are identified and that the costs of project decisions are justified during the entire effort. We recommend that each major systems phase be reviewed and that, at critical points—analysis, design, coding, testing, conversion, and acceptance—OCSE, according to preestablished criteria, formally report to the state whether it considers the state ready to proceed to the next milestone or phase.
- Develop a mechanism for verifying that states follow generally accepted systems development practices to minimize project risks and costly errors. OCSE should revise the guidance for the APDs and APDUs to ensure that these documents provide the information needed to assess different phases of development and are consistent from year to year. This information should include clearly defined requirements; schedules reflecting the amount of data converted, code written, modules produced, and the results of testing; and other measures to quantify progress.
- Use an evaluative approach for planned and ongoing state information technology projects that focuses on expected and actual cost, benefits, and risks. OCSE should require states to implement needed corrective actions for federally funded systems when problems and major discrepancies in cost and benefits are first identified. If a state experiences delays and problems and is not following generally accepted systems development practices, OCSE should suspend funding until the state redirects its approach.
- Evaluate current staff systems knowledge, skills, and abilities and identify what additional technical expertise is needed. Develop the technical skills needed to allow OCSE to become more actively involved with the states at critical points in their development processes, and enhance the skills of existing systems reviewers through additional training. This expertise should include program management, software development, and systems engineering.
- Conduct timely post-implementation reviews on certified child support systems to determine whether they are providing expected benefits, identify any lessons learned, and assess innovative technical solutions.
- At least annually, assess the progress of child support systems projects nationwide to gain and share with the states a broader perspective on costs, systemic problems, potential solutions, and innovative approaches. Information should be shared with other states to help reduce costs and improve the effectiveness of the child support program nationally—especially any practices or systems that could benefit states attempting to develop or implement welfare reform systems requirements.
- Assess the impact of welfare reform on existing child support programs—including automated systems and business operations—and determine whether states will be able to implement systems requirements within established time frames and without exceeding the $400 million cap. This assessment should also include an estimate of additional regular rate funding for automated systems that states may need to comply with the requirements of welfare reform.
- Provide the states with technical requirements for implementing welfare reform systems, including the new-hire, central case, centralized collection, and disbursement registries, in enough time to allow the states to meet the legislatively mandated deadlines of October 1997, 1998, and ultimately 2000.

HHS disagreed with our recommendations on agency monitoring and oversight and on suspending federal funding for flawed state systems, while generally agreeing with our other recommendations. HHS officials' primary concern with the report was the degree of federal stewardship appropriate in the effective development of automated state child support enforcement systems. The Department notes that we have a different perception of the appropriate federal role in state automated systems development than is authorized. The Department indicated that reviewing state systems at critical phases would increase the administrative burden on the states and result in OCSE's "micromanagement" of state projects. Further, officials reiterated their belief that withholding funds was counterproductive to developing automated systems. Finally, HHS expressed concern about our presentation of the level of state automation and systems costs. The Department did, however, generally agree with our recommendations regarding assessing OCSE's technical resources, conducting post-implementation and nationwide systems reviews, and defining—in a timely manner—welfare reform requirements. We have reviewed HHS' comments and found no reason to change our conclusions and recommendations. HHS views OCSE's role narrowly, as one of monitoring requirements and approving funds. The agency believes its role is to assist states in meeting mandated deadlines, rather than more actively monitoring and overseeing state systems development activities with an eye toward helpful intervention. We disagree with HHS' approach.
The responsibility does not rest solely with the states; OCSE is also accountable for effective state systems development. According to statutory requirements, the agency should review, assess, and inspect systems throughout development. Given the significance of state child support enforcement systems to the operation of the program and the magnitude of expenditures, it is critical that these systems be developed correctly and efficiently from the beginning. A philosophy of providing funds, despite serious systems problems and costly mistakes, misses the opportunity to reduce the risk of systems failure and save taxpayer dollars. OCSE does not evaluate or assess states' systems development projects using a disciplined, structured approach. The agency's reviews are narrowly focused and, as a result, not effective or timely in assessing the states' systems approaches and progress. A summary of the Department's comments and our evaluation is provided below. HHS' comments are reprinted in appendix IV of this report. We do not agree with HHS' position that OCSE's role should be primarily focused on providing technical assistance and guidance. HHS' statutory responsibilities, as set forth in the Social Security Act, delineate a leadership role in developing child support enforcement systems. Section 452(a) of this act provides that a "designee of the Secretary" (Office of Child Support Enforcement) shall . . . "review and approve state plans" for child support enforcement programs, "establish standards" for state programs, and "evaluate the implementation of state programs." With regard to child support management information systems, Section 452(d) provides that OCSE shall, "on a continuing basis, review, assess, and inspect the planning, design, and operation of management information systems . . . 
with a view to determining whether, and to what extent, such systems meet and continue to meet requirements imposed." The agency's advance planning document (APD) guide also refers to its leadership role in approving, monitoring, and certifying state systems programs to ensure that federal expenditures are made wisely. In HHS' response to our report, it noted elsewhere that the agency "has the authority, which it frequently exercises, to require states to send an as-needed APD at critical milestones in its life cycle methodology." And in response to comments from states concerning the extent of OCSE's reviews, HHS said that it intends to continue monitoring state systems projects, noting that it has "responsibilities for assuring that the expenditure of federal funds on state systems is necessary for the effective and efficient operation of the programs." This is consistent with recent legislation that requires more effective oversight of systems development activities. The Clinger-Cohen Act of 1996 and recent Office of Management and Budget guidance for managing information technology investments specify the need for greater accountability for systems during the critical development and implementation phases. The federal government is a major stakeholder in these systems, having paid about $2 billion over the last 15 years. As such, it is critical that OCSE's current approach to monitoring and overseeing state systems be improved to ensure that the federal government's investment in systems is spent wisely. Our recommendations reflect systems development practices that are widely used in both the private and public sectors. Monitoring at critical points in the development process allows earlier intervention and greater opportunity to correct problems before they become more costly. We disagree that this approach constitutes micromanagement; we believe that to do less constitutes lax management. These activities also need not create an administrative burden.
OCSE does not have to impose additional reporting requirements on the states; it must simply streamline its existing reporting process to ensure the inclusion of key pieces of information at critical phases. It is imperative that HHS take advantage of its legislatively authorized oversight and monitoring role. In the absence of such action, HHS is likely to continue to provide little added value to states; instead, it will remain merely a bureaucratic hurdle for states to clear in funding critically important systems. HHS asserts that OCSE should provide technical assistance to the states, rather than suspend federal funding—especially given the statutory deadline. We disagree. Federal regulations provide for the suspension of federal funding when states' systems under development cease to substantially comply with requirements and other provisions of the APD. Further, irrespective of the deadline, allowing systems to be developed ineffectively and inefficiently at the expense of the taxpayers does not support the goals and underlying intent of the legislation. Allowing a state to go forward before correcting inadequacies in its approach contributes to rising systems costs. In this report and in our 1992 report, we pointed out that OCSE continues to fund systems with serious problems—problems that threaten their very success. As discussed in this report, such an approach invites the need to correct serious problems later in the development process, when it is more costly and time-consuming to do so. OCSE has periodically suspended state funding for automation projects. As we reported, however, almost 60 percent of these disruptions were due to insufficient information on the required APD, or to states' exceeding their authorized funding levels—not to more substantive issues concerning the soundness of the development approach itself.
Even when funding was held up for major systems-related problems, efforts to correct these problems did not appear to be made in a timely fashion. Further, in cases in which an HHS regional official suggested that OCSE hold up funding for a project, the agency did not stop funding until the project "crashed." HHS agreed that the states are encountering automation problems and that states need a greater degree of oversight. The Department stated that OCSE follows a structured approach in reviewing states' APD submissions. We believe this approach does not provide sufficient oversight. While we described the agency's review process—APD and certification reviews—we believe that the APD process should ensure that systems development activities at critical decision points are evaluated. OCSE does not consistently monitor state systems development at critical milestone points, such as the completion of design or requirements development. We noted that OCSE's certification reviews are usually conducted toward the end of the development process and, as such, are often too late to help identify problems and redirect the approach. Another issue raised by HHS was that, with a variety of concurrent systems development activities, including the staggered welfare reform deadlines, it may be difficult to use our suggested "one structured methodology fits all." We are certainly not advocating that OCSE require or impose a "one structured methodology fits all" approach. A structured approach to reviewing systems development—irrespective of the particular methodology used—would also allow for systems variability, including differences in project size, scope, and complexity. The key to a structured approach is the identification of critical milestones that are the basis for systems reviews. States value this process; one state official noted that its project would be unmanageable without it.
Other state officials told us that OCSE should play a more active role, and that they considered it important that the agency assess the management and direction of a project early to avoid or minimize later problems. HHS concurred with our recommendation to conduct post-implementation reviews on certified child support systems. The agency has plans in fiscal year 1998 to conduct both technical assistance visits and post-implementation reviews to further identify lessons learned and assess innovative technical solutions. HHS agreed on the importance of a nationwide assessment of child support systems projects and has taken steps in that direction. The Department noted that it has developed the State System Approval Information System (SSAIS) and uses electronic means for information sharing. However, it is critical that OCSE continue to maintain and use accurate information from the SSAIS and develop a sound nationwide basis for encouraging states to share innovative database designs, software, and other technologies for greater efficiencies and cost savings, and for identifying recurring problems. With a systematic and comparison-based assessment, OCSE could recognize trends and identify best practices that could be shared. Identifying and taking action on such issues would be a significant benefit of increased oversight. HHS agreed on the importance of providing states with technical requirements associated with recent welfare reform legislation. As HHS noted, states' systems funding was not limited to the $400 million enhanced funding. We recognize that the welfare reform requirements are substantial, and that the legislation will allow states to be reimbursed at the regular and enhanced rates. The monetary magnitude of accommodating welfare reform systems requirements further underscores the importance of effective federal oversight, including comprehensive assessments of systems implications and timely issuance of systems requirements.
HHS believes our report incorrectly noted that noncertified states have “no automation” to enforce child support collections. We acknowledge that state systems that are not yet certified may have some automation. In fact, we noted that state officials indicated that partially automated systems have improved their capability to locate noncustodial parents, increased paternity establishment and collections, and provided greater staff efficiency. We also noted, however, that OCSE officials said that as many as 14 states may not meet the deadline for certification, leaving about 44 percent of the national caseload without the full benefits of automation. As envisioned by the Congress in implementing this legislation, some states have attained benefits; however, child support enforcement systems costs continue to increase, and the extent of final costs is not yet known. We acknowledge that systems’ costs include developing new systems and maintaining and enhancing systems that were certified prior to 1988. OCSE, however, does not track these costs separately. Some states did not design new systems; rather, they built upon existing ones. In these cases, the costs attributable to the 1988 act would be less than those of states that developed entirely new systems. However, states that updated their existing systems may now, as the October 1, 1997, deadline draws near, need to significantly redesign their systems to fully meet child support certification requirements and support welfare reform legislation. HHS indicated that the costs of automation should be carefully placed in perspective and also compared automation costs to the agency’s administrative costs. We recognize that systems costs may be a small percentage of the total administrative costs; however, we do not believe that $2.6 billion is inconsequential. 
We recognize and cite examples where OCSE's weak oversight contributed to rising costs that could have been avoided had the agency demonstrated a more proactive leadership role. These systems will play a critical role in effectively administering the child support and welfare programs in the future. As such, it is incumbent upon the Department and OCSE to ensure that the dollars invested in these systems are spent wisely and provide an effective return on investment.

Pursuant to a congressional request, GAO updated its 1992 report on child support enforcement, focusing on: (1) the status of state development efforts, including costs incurred; (2) whether the Department of Health and Human Services (HHS) had implemented GAO's 1992 recommendations; and (3) whether the Department was providing effective federal oversight of state systems development activities. GAO noted that: (1) it is too early to judge the potential of fully developed automated systems, yet bringing the benefits of automation to bear on child support enforcement appears to have played a major role in locating more noncustodial parents and increasing collections; (2) according to HHS, in fiscal year (FY) 1995, almost $11 billion was collected, 80 percent higher than the amount collected in 1990; (3) while automated state child support systems are being developed, many may not be certified by the October 1, 1997, deadline; (4) furthermore, states have underestimated the magnitude, complexity, and costs of their systems projects; (5) systems development costs for FY 1995 alone were just under $600 million, and over $2.6 billion has been spent since 1980 for county and statewide systems development; (6) GAO's 1992 report discussed significant problems in federal oversight and monitoring of state activity, and made three recommendations; (7) however, only one has been completely implemented; (8) the Office of Child Support Enforcement (OCSE) now works with its audit division to identify and resolve systems problems; (9) GAO's recommendations to suspend federal funding when major problems exist and to require states to initiate corrective actions when problems are first identified were only partially addressed; (10) OCSE's oversight of state child support systems has been narrowly focused and, as a result, not effective or timely in assessing the states' systems approaches and progress; (11) OCSE believes it lacks the technical expertise and resources to be involved at critical points in the systems development process; (12) OCSE's role has been primarily limited to document review and after-the-fact certification when the states request an inspection of completed systems; (13) therefore, OCSE has allowed some funds to be spent without ensuring that states were progressing toward effective or efficient systems; (14) while OCSE has shared some lessons learned, its oversight has operated on a state-by-state basis; (15) lacking this nationwide perspective has hindered the agency's ability to provide proactive leadership to the states; (16) as added systems functional requirements of the newly enacted welfare reform legislation come into play, it will be increasingly important that child support enforcement systems work as envisioned and that OCSE monitor progress on a broader scale; and (17) many recipients may find that they no longer qualify for welfare benefits, with child support being their only remaining income.
AHRQ and the Office for Civil Rights (OCR) within HHS share responsibility for implementing the Patient Safety Act. AHRQ is responsible for listing PSOs, providing technical assistance to PSOs, implementing and maintaining the NPSD, and analyzing the data submitted to the NPSD. OCR has responsibility for interpreting, implementing, and enforcing the confidentiality protections. To help implement the Patient Safety Act, AHRQ and OCR developed the legislation's implementing regulations, which took effect January 19, 2009. The Patient Safety Act establishes criteria that organizations must meet and required patient safety activities that the organizations must perform after being listed as PSOs. The criteria include an organizational mission to improve patient safety and the quality of health care delivery; use of collected data to provide direct feedback and assistance to providers to minimize patient risk; staff who are qualified to perform analyses on patient safety data; and adequate policies and procedures to ensure that patient safety data are kept confidential. Required PSO activities include activities such as efforts to improve patient safety and the quality of health care delivery. (See app. II for the complete list of criteria and required PSO activities as specified in the Patient Safety Act.) The criteria allow for many types of organizations to apply to AHRQ to be listed as a PSO. These organizations may include public and private entities, for-profit and not-for-profit organizations, and entities that are a component of another organization, such as a hospital association or health system. A PSO must attest for the initial listing period that it will comply with the criteria and that it has policies and procedures in place that will allow it to perform the required activities of a PSO.
When reapplying for subsequent 3-year listing periods, a PSO must attest that it is complying with the criteria and that it is in fact performing each of the required activities. The regulations require AHRQ staff to review written PSO applications documenting PSO attestations to each of the statutory criteria and required activities. In the case of certain PSOs that are component organizations, the regulations also require the applicant to complete an additional set of attestations and disclosure statements detailing the relationship between the component and parent organizations. The regulations require that after AHRQ staff review the application materials and related information, the applicant will be listed, conditionally listed, or denied. When a provider elects to use the services of a listed PSO, the Patient Safety Act provides privilege and confidentiality protections for certain types of data regarding patient safety events that providers collect for the purposes of reporting to a PSO. In general, the Patient Safety Act excludes the use of patient safety data in civil suits, such as those involving malpractice claims, and in disciplinary proceedings against a provider. While certain states have laws providing varying levels of privilege and confidentiality protections for patient safety data, the Patient Safety Act provides a minimum level of protection. Regulations implementing the Patient Safety Act address the circumstances under which patient safety data may be disclosed, such as when used in criminal proceedings, authorized by identified providers, and among PSOs or affiliated providers. OCR has the authority to conduct reviews to ensure that PSOs, providers, and other entities are complying with the confidentiality protections provided by the law. OCR also has the authority to investigate complaints alleging that patient safety data have been improperly disclosed and to impose a civil money penalty of up to $11,000 per violation.
The Patient Safety Act requires HHS to create and maintain the NPSD as a resource for PSOs, providers, and qualified researchers. The law specifies that the NPSD must have the capacity to accept, aggregate, and analyze non-identifiable patient safety data voluntarily submitted to the NPSD by PSOs, providers, and other entities. Providers may submit non-identifiable data directly to the NPSD, or work with a PSO to submit patient safety data. Neither PSOs nor providers are required by either the Patient Safety Act or regulation to submit data to the NPSD. Figure 1 shows the intended flow of patient safety data and other information among providers, PSOs, and the NPSD. The Patient Safety Act authorizes HHS to develop common formats for reporting patient safety data to the NPSD. According to the Patient Safety Act, these formats may include the necessary data elements to be collected and provide common and consistent definitions and a standardized computer interface for processing the data. While most U.S. hospitals have some type of internal reporting system for collecting data on patient safety events, they often have varying ways of collecting and organizing their data. This variation makes it difficult to accurately compare patient safety events across systems and providers and can be a barrier to developing solutions to improve patient safety. If providers or PSOs choose to submit patient safety data to the NPSD, AHRQ requires that these data be submitted using the common formats, because the common formats are necessary for data in the NPSD to be aggregated and analyzed. Aggregation and analysis of data are important for developing the "lessons learned" or "best practices" across different institutions that may help improve patient safety. The Patient Safety Act and its implementing regulations provide additional measures PSOs must follow whether or not they intend to submit the data they collect to the NPSD.
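The role a common format plays in making cross-provider comparison possible can be illustrated with a minimal sketch. The field names and event vocabulary below are invented for illustration and are not AHRQ's actual common formats:

```python
from collections import Counter

# Hypothetical local records from two hospitals with different schemas.
hospital_a = [
    {"evt": "fall", "unit": "ICU"},
    {"evt": "med_error", "unit": "ER"},
]
hospital_b = [
    {"incident_type": "Fall", "ward": "3W"},
    {"incident_type": "Fall", "ward": "ICU"},
]

def to_common_format(record: dict) -> dict:
    """Map a local record to one canonical field name and a
    controlled vocabulary (an illustrative 'common format')."""
    event = record.get("evt") or record.get("incident_type")
    return {"event_type": event.lower().replace(" ", "_")}

pooled = [to_common_format(r) for r in hospital_a + hospital_b]

# Only after normalization can events be validly counted across providers.
counts = Counter(r["event_type"] for r in pooled)
assert counts["fall"] == 3
assert counts["med_error"] == 1
```

Without the mapping step, "fall" and "Fall" would count as different event types, which is the kind of apples-to-oranges comparison problem the common formats are meant to prevent.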
The Patient Safety Act regulations require PSOs to collect patient safety data from providers in a standardized manner that permits valid comparisons of similar cases among similar providers, to the extent that doing so is practical and appropriate. To meet this requirement, the regulation specifies that a PSO must either (1) use the common formats developed by AHRQ when collecting patient safety data from providers, (2) utilize an alternative format that permits valid comparisons among providers, or (3) explain to AHRQ why it would not be practical or appropriate to do so. The Patient Safety Act also requires that any data regarding patient safety events submitted to the NPSD be non-identifiable. According to the Patient Safety Act, users can access non-identifiable patient safety data only in accordance with the confidentiality protections established by the act. The Patient Safety Act's regulations provide technical specifications for making patient safety data non-identifiable. Finally, the Patient Safety Act states that AHRQ must analyze the data that are submitted to the NPSD and include these analyses in publicly available reports. Specifically, under the Patient Safety Act, AHRQ is required to submit a draft report on strategies to improve patient safety to the IOM within 18 months of the NPSD becoming operational and a final report to Congress 1 year later. The Patient Safety Act requires this report to include effective strategies for reducing medical errors and increasing patient safety, as well as any measures AHRQ determines are appropriate to encourage providers to use the strategies, including use in any federally funded programs. In addition, the Patient Safety Act states that HHS must use data in the NPSD to analyze national and regional statistics, including trends and patterns of health care errors, and include any information resulting from such analyses in its annual reports on health care quality.
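What a de-identification step might look like can be sketched as follows. The record fields, quarter coarsening, and salted link token are illustrative assumptions only, not the regulation's actual technical specifications:

```python
import hashlib

# Hypothetical event record; field names are illustrative.
event = {
    "provider_name": "Example Community Hospital",
    "patient_id": "P-000123",
    "event_date": "2009-03-14",
    "event_type": "medication_error",
    "severity": "moderate",
}

DIRECT_IDENTIFIERS = {"provider_name", "patient_id"}

def de_identify(record: dict, salt: bytes = b"per-PSO secret") -> dict:
    """Drop direct identifiers, coarsen the event date to a quarter,
    and keep a salted one-way hash so the PSO (which holds the salt,
    unlike the database) can link follow-up reports."""
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    year, month, _day = record["event_date"].split("-")
    out["event_quarter"] = f"{year}-Q{(int(month) - 1) // 3 + 1}"
    del out["event_date"]
    digest = hashlib.sha256(salt + record["patient_id"].encode())
    out["link_token"] = digest.hexdigest()[:12]
    return out

safe = de_identify(event)
assert "patient_id" not in safe and "provider_name" not in safe
assert safe["event_quarter"] == "2009-Q1"
```

The design point is one-directional flow: identifiers are removed or transformed before submission, so analyses of trends and patterns remain possible while the submitted record cannot be traced back to a specific patient or provider.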
AHRQ listed 65 PSOs from November 2008 to July 2009. However, few of the 17 PSOs we randomly selected to interview had entered into contracts or other business agreements with providers to serve as their PSO, and only 3 PSOs reported having begun receiving patient safety data or providing feedback to providers. PSO officials identified several reasons why they have not yet engaged with providers. Some PSOs are still establishing various aspects of their operations; some are waiting for the common formats for collecting patient safety data to be finalized by AHRQ; and some are still engaged in marketing their services and educating providers about the federal confidentiality protections offered by the Patient Safety Act. Although the regulations implementing the Patient Safety Act did not become effective until January 19, 2009, AHRQ began listing PSOs earlier, in November 2008. By July 2009, AHRQ had listed 65 PSOs in 26 states and the District of Columbia. AHRQ officials told us that in listing PSOs they accepted PSOs’ attestations that the PSOs met the certification requirements established in the Patient Safety Act—that is, to be a listed PSO, an entity must have policies and procedures in place to perform the required activities of a PSO and must attest that it will comply with additional criteria for listing. For continued listing beyond the initial period, PSOs must attest that they have contracts with more than one provider and are in fact performing each of the required activities. The 65 PSOs AHRQ had listed represent a wide range of organizations, including some that provided patient safety services for many years prior to being listed as well as new organizations specifically established to function as a PSO under the Patient Safety Act. 
AHRQ officials told us that the organizations listed as PSOs include consulting firms that have provided patient safety services for a range of providers and specialties, as well as organizations with a focus on patient safety in a specific area such as medical devices, hand hygiene, or pediatric anesthesia. The listed PSOs also include vendors of patient safety reporting software and components of state hospital associations. AHRQ officials told us that the services PSOs deliver to individual providers will likely vary, depending on the specific contractual or other business agreements the PSOs establish with providers. For example, a small hospital may want to contract with a PSO to provide all its internal quality improvement services, while a large hospital may just contract with a PSO to obtain the legal protections under the Patient Safety Act and to contribute data to the NPSD. While officials of 13 of the 17 PSOs we interviewed indicated they provided some patient safety services prior to being listed, all 17 PSOs stated that the services they planned to make available included the collection and analysis of patient safety data, the de-identification of patient safety data for submission to the NPSD, feedback, and patient safety training. While AHRQ has listed 65 PSOs, few PSOs we interviewed have entered into contracts or other business agreements with providers to serve as their PSO. Only 4 of the 17 listed PSOs we interviewed had any contracts or other agreements with providers to serve as their PSO. Furthermore, according to PSO officials, only 3 of these PSOs had begun to receive patient safety data or provide feedback to providers. PSO officials identified several reasons why they had yet to begin working with providers and receiving patient safety data as of July 2009. These reasons include the following: The need to complete the development of various components of their business operations. 
Some PSO officials we interviewed told us they still need to determine various components of their operations. For example, officials from some PSOs told us they have yet to determine their fee structure for working with providers. Officials from 6 of 17 PSOs we interviewed stated they were or would be contracting with other PSOs to receive services, such as information technology systems support or data security. Nine PSOs reported they had not yet determined whether they would be contracting for some services. In addition, while officials from most of the PSOs we interviewed indicated they planned to submit patient safety data to the NPSD, 4 had not yet determined how they will make data non-identifiable before sending it to the NPSD. The need to obtain AHRQ’s final common formats for collecting data on patient safety events. Officials from some PSOs we interviewed indicated they needed the common formats to be finalized by AHRQ before beginning to work with providers. While use of AHRQ’s common formats to collect data from providers is not required under the regulations, most PSOs we interviewed plan to use the common formats for collecting data on patient safety events and submitting these data to the NPSD. Officials from 7 of the 17 PSOs we interviewed said they plan to require providers to submit data using the common formats, and 4 PSOs said they will not require them of providers but will either convert the reports they receive to the common formats or adapt their existing reporting system to include the common formats. The need to educate providers about the federal confidentiality protections. Officials from several of the 17 PSOs we interviewed told us they faced challenges in addressing provider concerns related to the scope of the confidentiality protections and that these concerns needed to be addressed before providers would be willing to engage the services of a PSO. 
Some of these PSO officials described challenges in communicating details of the confidentiality protections. According to AHRQ officials, the rules for when, where, and how patient safety data are protected from disclosure are both complex and interrelated with the privacy rules for protected health information under HIPAA. AHRQ officials acknowledged the need to work with PSOs to clarify the rules governing the confidentiality of patient safety data so PSOs can better communicate these to providers. AHRQ officials indicated they would address these issues in upcoming quarterly conference calls they hold with PSO representatives. (See appendix I for examples of ways established patient safety reporting systems communicate legal protections for providers and the data they submit.) AHRQ is in the process of implementing the NPSD and developing its associated components that are necessary before the NPSD can receive patient safety data—(1) the common formats PSOs and providers will be required to use if submitting patient safety data to the NPSD and (2) a method for making these data non-identifiable. If each of these components is completed on schedule, AHRQ officials expect that the NPSD could begin receiving patient safety data from hospitals in February 2011. AHRQ officials could not provide a time frame for when they expect the NPSD to be able to receive patient safety data from other providers. AHRQ also has preliminary plans for how to allow the NPSD to serve as an interactive resource for providers and PSOs and for how AHRQ will analyze NPSD data to help meet its reporting requirements under the Patient Safety Act. AHRQ is in the process of developing the NPSD, and AHRQ officials expect that the NPSD could begin receiving patient safety data from hospitals by February 2011. 
Specifically, AHRQ established a 3-year contract with Westat effective September 2007 to develop the NPSD, which is being set up as a database that AHRQ officials stated is essential for meeting the requirements of the act. AHRQ and Westat officials told us that completion of the NPSD depends both on the development of the common formats that will be used to submit patient safety data to the NPSD and on the development of a method for making the data non-identifiable. If each of these components is completed on schedule, AHRQ officials expect that the NPSD could begin to receive patient safety data from hospitals by February 2011. AHRQ is finalizing the common formats that PSOs and hospitals will be required to use if submitting patient safety data to the NPSD. AHRQ officials expect that the common formats could be available for hospitals to use in submitting data electronically to the NPSD by September 2010. AHRQ began developing the common formats for hospitals in 2005 by reviewing the data collection methods of existing patient safety systems. In 2007, AHRQ contracted with the National Quality Forum (NQF) to assist with the collection and assessment of public comments on a preliminary version of the common formats that was released in August 2008. These common format forms are used to collect information on patient safety events, including information about when and where an event occurred, a description of the event, and patient demographic information. AHRQ issued the common formats for hospitals in paper form in September 2009, and is in the process of making electronic versions available for hospitals and PSOs to use when submitting data to the NPSD. Specifically, AHRQ officials told us that they are in the process of developing technical specifications that private software companies and others can use to develop electronic versions of the common formats. 
According to AHRQ officials, hospitals and PSOs will need these electronic versions of the common formats in order to submit data to the NPSD. Their current project plan indicates that the technical specifications will be completed by March 2010. AHRQ officials estimate that electronic versions of the common formats could be available to hospitals and PSOs by September 2010. AHRQ officials stated that they expect eventually to develop common formats for providers in other health care settings, such as nursing homes and ambulatory surgical centers. Furthermore, AHRQ officials told us that they plan on developing future versions of the common formats capable of collecting data from the results of root cause analyses that providers may conduct. However, AHRQ officials were unable to provide an estimate for when the common formats for other providers will be available or when the capability to collect information from root cause analyses will be available. The Patient Safety Act also requires that data submitted to the NPSD be made non-identifiable by removing information that could be used to identify individual patients, providers, or facilities. To help PSOs and providers meet this requirement, AHRQ contracted with the Iowa Foundation for Medical Care (IFMC) to operate a PSO Privacy Protection Center (PPC) that will develop a method for making patient safety data non-identifiable and assist PSOs and providers by removing any identifiable patient or provider information from the data before submission to the NPSD. Current AHRQ and PPC project plans indicate that the PPC should be ready to receive and make patient safety data non-identifiable beginning in September 2010. 
AHRQ officials told us that this process involves not only removing information from each record that could be used to identify patients, providers, or reporters of patient safety information, but also determining whether identities could be inferred from other available information and using appropriate methods to prevent this type of identification from occurring. AHRQ officials told us that PPC officials are working with experts to develop the PPC’s method for making data non-identifiable. AHRQ officials stated that their rationale for establishing the PPC was to determine a method for making data non-identifiable, provide a cost savings for PSOs, encourage data submission to the NPSD, and create consistency in the non-identifiable data that are submitted to the NPSD. According to AHRQ officials, the PPC will provide its services to PSOs at no charge and will submit non-identifiable patient safety data on behalf of PSOs to the NPSD. However, PSOs are not required to use the PPC and may choose to make their patient safety data non-identifiable internally or with the help of a contractor of their choice. AHRQ project plans indicate that the PPC will be able to submit data to the NPSD beginning in February 2011, approximately 5 months after the PPC begins receiving data from hospitals. AHRQ officials stated that this time period is necessary, in part, because the PPC needs to begin receiving data before it can determine if its method for rendering data non-identifiable is appropriate or needs to be adjusted. For example, if the PPC receives a sufficient volume of data, then officials expect to be able to submit data on individual patient safety events and have it remain non-identifiable. 
If the volume of data is too low, however, PPC officials expect to have to aggregate data from individual events so that it remains non-identifiable once submitted to the NPSD, in which case AHRQ officials stated they may delay submission of data to the NPSD until a sufficient volume is received. AHRQ officials noted that it is impossible to determine in advance the volume of data that will be submitted to the PPC due to the voluntary nature of submissions. As a result, the level of detail that will exist in the NPSD data cannot be determined in advance of data being received and processed by the PPC. Figure 2 summarizes key dates in AHRQ’s efforts to develop the NPSD and its related components. The Patient Safety Act requires that the NPSD serve as an interactive resource for providers and PSOs, allowing them to conduct their own analyses of patient safety data. To meet this requirement, AHRQ has developed plans to allow providers to query the NPSD to obtain information on patient safety events, including information on the frequencies and trends of such events. AHRQ’s contract with Westat to construct the NPSD includes a series of tasks for developing, testing, and implementing this interactive capability of the NPSD. The contract specifies that these interactive capabilities will be available within 12 months of the NPSD beginning to receive patient safety information. Based on AHRQ’s estimate that the NPSD may be operational by February 2011, the interactive capabilities of the NPSD could be available by February 2012. However, AHRQ officials indicated that they had not yet determined the specific types of information that will be available to PSOs and providers as this will depend, in part, on the level of detail that is included in the NPSD data after the data are made non-identifiable. 
The Patient Safety Act also states that HHS must use the information reported into the NPSD to analyze national and regional statistics, including trends and patterns of health care errors, and to identify and issue reports on strategies for reducing medical errors and increasing patient safety after the NPSD becomes operational. To do this, AHRQ has developed preliminary plans for analyzing the data that will be submitted to the NPSD. According to AHRQ officials, these plans specify how the agency will analyze NPSD data to determine trends and patterns, such as the frequency with which certain types of adverse events happen across providers based on the data they may submit to the NPSD. However, AHRQ has yet to develop plans for more detailed analyses of NPSD data that could be useful for identifying strategies to reduce medical errors. Officials explained that these plans will not be developed until the NPSD begins receiving data and they are able to determine the level of detail in the data and what analyses it will support. Despite the potential for standardization provided by the common formats, AHRQ officials have identified important limitations in the types of analyses that can be performed with the data submitted to the NPSD. For example, AHRQ officials explained that because submissions to the NPSD are voluntary, the trends and patterns produced from the NPSD will not be nationally representative and, therefore, any analyses conducted cannot be used to generate data that are generalizable to the entire U.S. population. In addition, officials stated that the results from some analyses may be unreliable because there is no way to control for duplicate entries into the NPSD, which could occur if a provider submits a single patient safety event report to more than one PSO. Finally, AHRQ officials noted that it will be difficult to determine the prevalence or incidence of adverse events in specific populations. 
They told us that determining prevalence or incidence rates requires information on the total number of people at risk for such events, and that the patient safety data submitted to the NPSD will not include this information. (See appendix I for more information about the ways established patient safety reporting systems analyze data to develop solutions that improve patient safety.) AHRQ is still in the early stages of listing PSOs and developing plans for how it will analyze NPSD data and report on effective strategies for improving patient safety, as required under the Patient Safety Act. As a result, we cannot assess whether, or to what extent, the law has been effective in encouraging providers to voluntarily report data on patient safety events and to facilitate the development and adoption of improvements in patient safety. In addition, because improvements to patient safety depend on the voluntary participation of providers and PSOs, it remains uncertain whether the goals of the Patient Safety Act will be accomplished even after AHRQ completes its implementation. For example, providers will have to decide whether to work with a PSO and the extent to which they will report patient safety data to both the PSO and the NPSD. Whether the process results in specific recommendations for improving patient safety will depend on the volume and quality of the data submitted and on the quality of the analyses conducted by both PSOs and by AHRQ. Finally, if these recommendations are to lead to patient safety improvements, providers must recognize their value and take actions to implement them. The Department of Health and Human Services reviewed a draft of this report and provided technical comments, which we have incorporated as appropriate. We will send copies of this report to the Secretary of Health and Human Services and other interested parties. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. 
If you or your staff have questions about this report, please contact me at (202) 512-7114 or kohnl@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix III. Because the Agency for Healthcare Research and Quality’s (AHRQ) efforts to list Patient Safety Organizations and implement the Network of Patient Safety Databases are relatively new but some other patient safety reporting systems are already established, we identified examples of how selected established patient safety reporting systems encourage reporting of patient safety event information by providers and facilitate the development of improvements in patient safety. We judgmentally selected five established patient safety reporting systems from a list of such systems compiled by AHRQ. We selected systems that collected data for learning purposes and that appeared in a literature review we conducted of 45 relevant articles in peer-reviewed, trade, or scholarly publications published since January 2000. After selecting the systems, we conducted structured interviews with representatives of these systems to identify examples of ways that these systems encouraged providers to submit patient safety data for analysis and used the data collected by their systems to help develop improvements in patient safety. The system representatives we interviewed provided common examples that we have grouped into four areas: practices that encourage providers to learn from patient safety data, rather than blame individuals; communication intended to clearly explain legal protections for providers and the data they submit; data collection tools intended to standardize the data providers submit; and data analyses that produce actionable feedback. Practices that encourage providers to learn from patient safety data, rather than blame individuals. 
Representatives from all five patient safety reporting systems we reviewed said their systems encourage providers to learn from patient safety data as a way to improve patient safety, and not blame individuals for an event. According to system representatives, one way they did this was to emphasize the value of the data collected by the system for learning ways to reduce the risk that a certain event will recur. For example, representatives from one system said they created posters to hang in health care facilities from which the system collected patient safety data. Representatives from this system explained that the posters described a patient safety event about which the system received data as well as the solutions the system developed to improve patient safety. Another practice representatives said they used is allowing providers to submit data anonymously. Four out of five system representatives said their systems offered providers a way to submit data anonymously. Communication intended to clearly explain legal protections for providers and the data they submit. Many of the representatives we interviewed from patient safety reporting systems told us that their systems communicate information intended to clearly explain the legal protections afforded providers and the patient safety data they submit. For example, one system in our review provided guidance for providers on how to clearly label data to invoke the confidentiality protections associated with patient safety data under a law that protects data in this system. Representatives from another patient safety reporting system told us that communicating information about available legal protections can be particularly important for systems that collect data from providers in multiple states, because the legal protections for providers and patient safety data vary from state to state. 
For example, representatives from two patient safety reporting systems with users in multiple states said their systems provided customized legal information for providers based on the state confidentiality laws that applied to each provider’s location. A representative from one of these systems also said that the legal information the system offered helped providers understand what types of data to submit and encouraged them to submit it. Data collection tools intended to standardize the data providers submit. Representatives from all five systems told us they had developed tools intended to standardize the data providers submit to their patient safety databases. For some systems these tools include common formats and computer systems. Some of the representatives explained that standardizing the information providers submit helps ensure that patient safety events, especially events involving clinical terms, are classified in the same way. Some representatives also said that if a system did not define clinical terms for providers, providers may define events differently, which can limit the system’s ability to analyze submitted patient safety data. Furthermore, the representatives said, standardizing terms increased the value of the data as it is aggregated, as well as any resulting analyses. Representatives from all five systems said the ability to collect and aggregate standardized patient safety data allowed them to identify patterns in patient safety events, which they believed enabled their systems to suggest ways to improve patient safety. Some system representatives said that standardizing the way providers submit patient safety data allowed them to streamline the data collection process for providers. 
Some representatives said they designed their data collection protocols to allow providers to fulfill additional reporting requirements related to accreditation or quality improvement functions, such as submitting data regarding certain patient safety events to the Joint Commission. Representatives from one system said that their system did this to make collecting and submitting patient safety data more efficient for providers and thereby increase the likelihood that providers would submit such data to the patient safety reporting system. In another example, one system built a feature into its computer program that allowed providers to transfer data directly from providers’ in-house databases to the patient safety data collection system, a data collection method that system representatives said accounted for approximately 40 percent of all data received from providers. Data analyses that produce actionable feedback. Representatives from all five patient safety reporting systems told us that their systems analyzed submitted data to develop actionable steps providers could implement to improve patient safety. According to the representatives, their systems aggregated data from provider submissions and used these data for both quantitative analyses, such as trend or frequency analyses, and qualitative analyses, which examine narrative data to determine whether there were any common themes across events. Representatives from all five systems said they used both qualitative and quantitative analyses because neither method alone was completely sufficient to develop improvements to patient safety. For example, one system’s representatives said they conducted qualitative analyses such as using a computer program to analyze and group the narrative data providers submitted to learn about the factors that contributed to patient safety events. 
The same representatives explained that their system also conducted quantitative analyses such as trend analyses on events to see how often they occur. Representatives from all the systems said they used various methods to encourage providers to implement the improvements to patient safety the systems helped develop. Examples of methods they used included sending an e-mail from the system when new content was published on the system’s Web site, hosting Web conferences, and publishing analyses in trade or scholarly publications. All the representatives said their systems collaborated with other organizations to increase the likelihood that the improvements they developed were implemented. For example, one system worked with a statewide coalition of organizations in the quality improvement field to encourage providers to implement the patient safety improvements the system developed. In addition to the contact named above, William Simerl, Assistant Director; Eric R. Anderson; Eleanor M. Cambridge; Krister Friday; Kevin Milne; and Andrea E. Richardson made key contributions to this report.

The Institute of Medicine (IOM) estimated in 1999 that preventable medical errors cause as many as 98,000 deaths a year among hospital patients in the United States. Congress passed the Patient Safety and Quality Improvement Act of 2005 (the Patient Safety Act) to encourage health care providers to voluntarily report information on medical errors and other events—patient safety data—for analysis and to facilitate the development of improvements in patient safety using these data. The Patient Safety Act directed GAO to report on the law’s effectiveness. 
This report describes progress by the Department of Health and Human Services’ Agency for Healthcare Research and Quality (AHRQ) in implementing the Patient Safety Act by (1) creating a list of Patient Safety Organizations (PSOs) so that these entities are authorized under the Patient Safety Act to collect patient safety data from health care providers to develop improvements in patient safety, and (2) implementing the Network of Patient Safety Databases (NPSD) to collect and aggregate patient safety data. These actions are important to complete before the law’s effectiveness can be evaluated. To do its work, GAO interviewed AHRQ officials and their contractors. GAO also conducted structured interviews with officials from a randomly selected sample of PSOs. AHRQ has made progress, listing 65 PSOs as of July 2009. However, at the time of GAO’s review, few of the 17 PSOs randomly selected for interviews had entered into contracts to work with providers or had begun to receive patient safety data. PSO officials told GAO that some PSOs were still establishing aspects of their operations; some were waiting for AHRQ to finalize a standardized way for PSOs to collect data from providers; and some PSOs were still engaged in educating providers about the confidentiality protections offered by the Patient Safety Act. AHRQ is in the process of developing the NPSD and its associated components—(1) the common formats PSOs and providers will be required to use when submitting patient safety data to the NPSD and (2) a method for making patient safety data non-identifiable, or removing all information that could be used to identify a patient, provider, or reporter of patient safety information. If each of these components is completed on schedule, AHRQ officials expect that the NPSD could begin receiving patient safety data from hospitals by February 2011. 
AHRQ officials could not provide a time frame for when they expect the NPSD to be able to receive patient safety data from other providers. AHRQ also has preliminary plans for how to allow the NPSD to serve as an interactive resource for providers and PSOs and for how AHRQ will analyze NPSD data to help meet certain reporting requirements established by the Patient Safety Act. According to AHRQ officials, plans for more detailed analyses that could be useful for identifying strategies to reduce medical errors will be developed once the NPSD begins to receive data. 
Since 1990, we have regularly reported on government operations that we have identified as high risk due to their vulnerability to fraud, waste, abuse, and mismanagement, or the need for transformation to address economy, efficiency, or effectiveness challenges. Our high-risk program—which is intended to help inform the congressional oversight agenda and to guide efforts of the administration and agencies to improve government performance—has brought much-needed focus to problems impeding effective government and costing billions of dollars. In 1990, we designated 14 high-risk areas. Since then, generally coinciding with the start of each new Congress, we have reported on the status of progress to address previously designated high-risk areas, determined whether any areas could be removed or consolidated, and identified new high-risk areas. Since 1990, a total of 60 different areas have appeared on the High-Risk List, 24 areas have been removed, and 2 areas have been consolidated. On average, high-risk areas that have been removed from the list remained on it for 9 years after they were initially added. Our experience has shown that the key elements needed to make progress in high-risk areas are top-level attention by the administration and agency leaders grounded in the five criteria for removal from the High-Risk List, as well as any needed congressional action. The five criteria for removal that we issued in November 2000 are as follows:

Leadership Commitment. The agency demonstrates strong commitment and top leadership support.

Capacity. The agency has the capacity (i.e., people and resources) to resolve the risk(s).

Action Plan. A corrective action plan exists that defines the root cause and solutions, and provides for substantially completing corrective measures, including steps necessary to implement solutions we recommended.

Monitoring. 
A program has been instituted to monitor and independently validate the effectiveness and sustainability of corrective measures.

Demonstrated Progress. The agency is able to demonstrate progress in implementing corrective measures and in resolving the high-risk area.

These five criteria form a road map for efforts to improve and ultimately address high-risk issues. Addressing some of the criteria leads to progress, while satisfying all of the criteria is central to removal from the list. In our April 2016 report, we provided additional information on how agencies had made progress addressing high-risk issues. Figure 1 shows the five criteria for removal for a designated high-risk area and examples of actions taken by agencies as cited in that report. Importantly, the actions listed are not “stand alone” efforts taken in isolation from other actions to address high-risk issues. That is, actions taken under one criterion may also be important in meeting other criteria. For example, top leadership can demonstrate its commitment by establishing a corrective action plan including long-term priorities and goals to address the high-risk issue and using data to gauge progress—actions that are also vital to meeting the monitoring criterion. VA officials have expressed their commitment to addressing the concerns that led to the high-risk designation for VA health care. As part of our work for the 2017 high-risk report, we identified actions VA had taken, such as establishing a task force, working groups, and a governance structure for addressing the five areas of concern contributing to the designation: (1) ambiguous policies and inconsistent processes; (2) inadequate oversight and accountability; (3) information technology (IT) challenges; (4) inadequate training for VA staff; and (5) unclear resource needs and allocation priorities. 
For example, in July 2016, VA chartered the GAO High Risk List Area Task Force for Managing Risk and Improving VA Health Care to develop and oversee implementation of VA’s plan to address the root causes of the five areas of concern we identified in 2015. VA’s task force and associated working groups are responsible for developing and executing the department’s high-risk mitigation plan for each of the five areas of concern we identified. VA also executed two contracts with a total value of $7.8 million to support its actions to address the concerns behind the high-risk designation. These contracts—with the MITRE Corporation and Atlas Research, LLC—are intended to provide additional support for actions such as developing and executing an action plan, creating a plan to enhance VA’s capacity to manage the five areas, and assisting with establishing the management functions necessary to oversee the five high-risk-area working groups. On August 18, 2016, VA provided us with an action plan that acknowledged the deep-rooted nature of the areas of concern, and stated that these concerns would require substantial time and work to address. Although the action plan outlined some steps VA plans to take over the next several years to address the concerns that led to its high-risk designation, several sections were missing critical actions that would support our criteria for removal from the High-Risk List, such as analyzing the root causes of the issues and measuring progress with clear metrics. In our feedback to VHA on drafts of its action plan, we highlighted these missing actions and also stressed the need for specific timelines and an assessment of needed resources for implementation. For example, VA plans to use staff from various sources, including contractors and temporarily detailed employees, to support its high-risk-area working groups, so it is important for VA to ensure that these efforts are sufficiently resourced. 
As we reported in the February 2017 high-risk report, when we applied the five criteria for High-Risk List removal to each of the areas of concern, we determined that VA has partially met two of the five criteria: leadership commitment and an action plan. VA has not met the other three criteria for removal: capacity to address the areas of concern, monitoring implementation of corrective actions, and demonstrating progress. It is worth noting that although both criteria were rated as partially met, the department made significantly less progress in developing a viable action plan than in demonstrating leadership commitment. Specifically, VA partially met the action plan criterion for only one of the five areas of concern—ambiguous policies and inconsistent processes—whereas VA partially met the leadership commitment criterion for four out of five areas of concern. The following is a summary of the progress VA has made in addressing the five criteria for removal from the High-Risk List for each of the five areas of concern we identified. Summary of concern. When we designated VA health care as a high-risk area in 2015, we reported that ambiguous VA policies led to inconsistent processes at local VA medical facilities, which may have posed risks for veterans’ access to VA health care. Since then, we have highlighted the inconsistent application of policies in two recent reports examining mental health and primary care access at VA medical facilities in 2015 and 2016, respectively. In both reports, we found wide variation in the time that veterans waited for primary and mental health care, which was in part caused by a lack of clear, updated policies for appointment scheduling; therefore, we recommended that VA update these policies. These ambiguous policies contributed to errors made by appointment schedulers, which led to inconsistent and unreliable wait-time data.
For mental health, we also found that two policies conflicted, leading to confusion among VA medical center staff as to which wait-time policy to follow. In 2015, VA resolved this policy conflict by revising its mental health handbook, but other inconsistent applications of mental health policy remain unaddressed; for example, VA has not yet implemented our recommendation to issue guidance about the definitions used to calculate veteran appointment wait times and to communicate any changes to those definitions within and outside VHA. 2017 assessment of VA’s progress. Based on actions taken since 2015, VA has partially met two of our criteria for removing this area of concern from the High-Risk List: leadership commitment and action plan. VA has partially met the leadership commitment criterion because it established a framework for developing and reviewing policies—with the goal of ensuring greater consistency and clarity—and set goals for making the policy-development process more efficient. VA has partially met the action plan criterion for this high-risk area of concern because its action plan described an analysis of the root causes of problems related to ambiguous policies and inconsistent processes, an important aspect of an action plan. However, VA has not met our criteria for removal from the High-Risk List for capacity, monitoring, and demonstrated progress for this area of concern because it has not addressed gaps that exist between its stated goals and available resources, addressed inconsistent application of policies at the local level, or demonstrated that its actions are linked to identified root causes. Summary of concern. In our 2015 high-risk report, we found that VA had problems holding its facilities accountable for their performance because it relied on self-reported data from facilities, its oversight activities were not sufficiently focused on compliance, and it did not routinely assess policy implementation.
We continued to find a lack of oversight in our October 2015 review of the efficiency and timeliness of VA’s primary care. For example, we found inaccuracies in VA’s data on primary care panel sizes, which are used to help medical centers manage their workload and ensure that veterans receive timely and efficient care. We found that while VA’s primary care panel management policy required facilities to ensure the reliability of their panel size data, it did not assign responsibility for verifying data reliability to regional- or national-level officials or require them to use the data for monitoring purposes. As a result, VA could not be assured that local panel size data were reliable, or know whether its medical centers had met VA’s goals for efficient, timely, and quality care. We recommended that VA incorporate an oversight process in its primary care panel management policy that assigns responsibility, as appropriate, to regional networks and to VA’s central office for verifying and monitoring panel sizes. 2017 assessment of VA’s progress. VA has partially met the leadership commitment criterion for this area of concern because it established a high-level governance structure and adopted a new model to guide the department’s oversight and accountability activities. However, VA has not met our criteria for removal from the High-Risk List for capacity, action plan, monitoring, or demonstrated progress for this area of concern because the department continues to rely on existing processes that contribute to inadequate oversight and accountability. Summary of concern. In our 2015 high-risk report, we identified limitations in the capacity of VA’s existing IT systems, including the outdated, inefficient nature of certain systems and a lack of system interoperability as contributors to VA’s IT challenges related to VA health care. 
We have continued to report on the importance of VA working with the Department of Defense to achieve electronic health record interoperability. In August 2015, we reported on the status of these interoperability efforts and noted that the departments had engaged in several near-term efforts focused on expanding interoperability between their existing electronic health record systems. However, we were concerned by the lack of outcome-oriented goals and metrics that would more clearly define what VA and the Department of Defense aim to achieve from their interoperability efforts. Accordingly, we recommended that the departments establish a time frame for identifying outcome-oriented metrics and define related goals for achieving interoperability. In February 2017, we reported that VA has begun to define an approach for identifying outcome-oriented metrics focused on health outcomes in selected clinical areas, and it also has begun to establish baseline measurements. We intend to continue monitoring the departments’ efforts to determine how these metrics define and measure the results achieved by interoperability between the departments. 2017 assessment of VA’s progress. VA has partially met our leadership commitment criterion by involving top leadership from VA’s Office of Information & Technology in this area of concern, but it has not met our four remaining criteria for removing IT challenges from the High-Risk List. For example, VA has not demonstrated improvement in several capacity actions, such as establishing specific responsibilities for its new functions, improving collaboration between internal and external stakeholders, and addressing skill gaps. VA also needs to conduct a root cause analysis that would help identify and prioritize critical actions and outcomes to address IT challenges. Summary of concern.
When identifying this area of concern in our 2015 high-risk report, we described several gaps in VA’s training, as well as burdensome training requirements. We have continued to find these issues in our subsequent work. For example, in our December 2016 report on VHA’s human resources (HR) capacity, we found that VA’s competency assessment tool did not address two of the three personnel systems under which VHA staff may be hired. We recommended that VHA (1) develop a comprehensive competency assessment tool for HR staff that evaluates knowledge of all three of VHA’s personnel systems and (2) ensure that all VHA HR staff complete it so that VHA may use the data to identify and address competency gaps among HR staff. Without such a tool, VHA will have limited insights into the abilities of its HR staff and will be ill-positioned to provide necessary support and training. 2017 assessment of VA’s progress. VA has not met any of our criteria for removing this area of concern from the High-Risk List. VA intends to establish a comprehensive health care training management policy and a mandatory annual training process; however, as of December 2016, VA officials said they had not begun drafting a new policy to replace an outdated document from 2002 that contains training requirements that are no longer relevant. The high-level nature of the descriptions in the action plan and the lack of action to update outdated policies and set goals for improving training show that VA lacks leadership commitment to address the concerns that led to our inclusion of this area in the 2015 high-risk report. Summary of concern. In our 2015 high-risk report, we described gaps in the availability of data needed for VA to identify the resources it needs and ensure they are effectively allocated across VA’s health care system as contributors to our concern about unclear resource needs and allocation priorities. We have continued to report on this concern.
For example, in our September 2016 report on VHA’s organizational structure, we found that VA devoted significant time, effort, and funds to generate recommendations for organizational structure changes intended to improve the efficiency of VHA operations. However, the department then either did not act or acted slowly to implement the recommendations. Without robust processes for evaluating and implementing recommendations, there was little assurance that VHA’s delivery of health care to the nation’s veterans would improve. We recommended that VA develop a process to ensure that it evaluates organizational structure recommendations resulting from internal and external reviews of VHA. This process should include documenting decisions and assigning officials or offices responsibility for ensuring that approved recommendations are implemented. We concluded that such a process would help VA ensure that it is using resources efficiently, monitoring and evaluating implementation, and holding officials accountable. 2017 assessment of VA’s progress. VA’s actions have partially met our criterion for leadership commitment but have not met the other four criteria for removing this area of concern from the High-Risk List. VA’s planned actions do not make clear how VHA, as the agency managing VA health care, is or will be incorporated into VA’s new framework for the strategic planning and budgeting process. It is also not clear how the framework will be communicated and reflected at the regional network and medical center levels. VA also has not identified what resources may be necessary to establish and maintain new functions at the national and local levels, or established performance measures based on a root cause analysis of its unclear resource needs and allocation priorities. Since we added VA health care to our High-Risk List in 2015, VA’s leadership has increased its focus on implementing our prior recommendations, but additional work is still needed.
Between January 2010 and February 2015 (when we first designated VA health care as a high-risk area), we made 178 recommendations to VA related to VA health care. When we made our designation in 2015, the department had implemented only about 22 percent of them. Since February 2015, we have made 74 new recommendations to VA related to VA health care, for a total of 252 recommendations from January 1, 2010 through February 15, 2017 (when we issued the 2017 high-risk report). VA has implemented about 50 percent of these recommendations. However, there continue to be more than 100 open recommendations related to VA health care, almost a quarter of which have remained open for 3 or more years. We believe that it is critical that VA implement our recommendations not only to remedy the specific weaknesses we previously identified but also because they may be symptomatic of larger underlying problems that need to be addressed. Since the 2015 high-risk report, we have made new recommendations to VA relating to each of the five areas of concern. (See table 1.) VA has taken an important step toward addressing our criteria for removal from the High-Risk List by establishing the leadership structure necessary to ensure that actions related to the High-Risk List are prioritized within the department. It is imperative, however, that VA demonstrate strong leadership support as it continues its transition under a new administration, address weaknesses in its action plan, and continue to implement our open recommendations. As a new administration sets its priorities, VA will need to integrate those priorities with its high-risk-related actions, and facilitate their implementation at the local level through strategies that link strategic goals to actions and guidance. In its action plan, VA separated its discussion of department-wide initiatives, like MyVA, from its description of High-Risk List mitigation strategies.
We do not view high-risk mitigation strategies as separate from other department initiatives; actions to address the High-Risk List can, and should, be integrated into VA’s existing activities. VA’s action plan did not adequately address the concerns that led to the high-risk designation because it lacked root cause analyses for most areas of concern, as well as clear metrics and identified resources needed for achieving VA’s stated outcomes. This is especially evident in VA’s plans to address the IT and training areas of concern. In addition, with the increased use of community care programs, it is imperative that VA’s action plan include a discussion of the role of community care in decisions related to policies, oversight, IT, training, and resource needs. VA will also need to demonstrate that it has the capacity to sustain efforts by devoting appropriate resources—including people, training, and funds—to address the high-risk challenges we identified. Until VA addresses these serious underlying weaknesses, it will be difficult for the department to effectively and efficiently implement improvements addressing the five areas of concern that led to the high-risk designation. We will continue to monitor VA’s institutional capacity to fully implement an action plan and sustain needed changes in all five of our areas of concern. To the extent we can, we will continue to provide feedback to VA officials on VA’s action plan and areas where they need to focus their attention. Additionally, we have ongoing work focusing on VA health care that will provide important insights on progress, including the policy development and dissemination process, implementation and monitoring of VA’s opioid safety, Veterans Choice Program implementation, physician recruitment and retention, and processes for enrolling veterans in VA health care.
Finally, we also plan to continue monitoring VA’s efforts to implement our recommendations and recommendations from other reviews such as the Commission on Care. To this end, we believe that the following GAO recommendations require VA’s immediate attention:

improving oversight of access to timely medical appointments, including the development of wait-time measures that are more reliable and not prone to user error or manipulation, as well as ensuring that medical centers consistently and accurately implement VHA’s scheduling policy.

improving oversight of VA community care to ensure—among other things—timely payment to community providers.

improving planning, deployment, and oversight of VA/VHA IT systems, including identifying outcome-oriented metrics and defining goals for interoperability with DOD.

ensuring that recommendations resulting from internal and external reviews of VHA’s organizational structure are evaluated for implementation. This process should include the documentation of decisions and assigning officials or offices responsibility for ensuring that approved recommendations are implemented.

Moreover, it is critical that Congress maintain its focus on oversight of VA health care to help address this high-risk area. Congressional committees responsible for authorizing and overseeing VA health care programs held more than 70 hearings in 2015 and 2016 to examine and address VA health care challenges. As VA continues to change its health care service delivery in the coming years, some changes may require congressional action—such as VA’s planned consolidation of community care programs after the Veterans Choice Program expires. Sustained congressional attention to these issues will help ensure that VA continues to improve its management and delivery of health care services to veterans. Chairman Isakson, Ranking Member Tester, and Members of the Committee, this concludes my statement. I would be pleased to respond to any questions you may have.
For further information about this statement, please contact Debra A. Draper at (202) 512-7114 or draperd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this statement were Malissa G. Winograd (Analyst-in-Charge), Jennie Apter, Jacquelyn Hamilton, and Alexis C. MacDonald.

High-Risk Series: Progress on Many High-Risk Areas, While Substantial Efforts Needed on Others. GAO-17-317. Washington, D.C.: February 15, 2017.
VA Health Care: Actions Needed to Ensure Medical Facility Controlled Substance Inspection Programs Meet Agency Requirements. GAO-17-242. Washington, D.C.: February 15, 2017.
Veterans Affairs Information Technology: Management Attention Needed to Improve Critical System Modernizations, Consolidate Data Centers, and Retire Legacy Systems. GAO-17-408T. Washington, D.C.: February 7, 2017.
Veterans Health Administration: Management Attention Is Needed to Address Systemic, Long-standing Human Capital Challenges. GAO-17-30. Washington, D.C.: December 23, 2016.
VA Health Care: Improved Monitoring Needed for Effective Oversight of Care for Women Veterans. GAO-17-52. Washington, D.C.: December 2, 2016.
Veterans Health Care: Improvements Needed in Operationalizing Strategic Goals and Objectives. GAO-17-50. Washington, D.C.: October 21, 2016.
VA Health Care: Processes to Evaluate, Implement, and Monitor Organizational Structure Changes Needed. GAO-16-803. Washington, D.C.: September 27, 2016.
Veterans’ Health Care: Improved Oversight of Community Care Physicians’ Credentials Needed. GAO-16-795. Washington, D.C.: September 19, 2016.
VA IT Management: Organization Is Largely Centralized; Additional Actions Could Improve Human Capital Practices and Systems Development Processes. GAO-16-403. Washington, D.C.: August 17, 2016.
Veterans Affairs: Sustained Management Attention Needed to Address Numerous IT Challenges. GAO-16-762T. Washington, D.C.: June 22, 2016.
VA’s Health Care Budget: In Response to a Projected Funding Gap in Fiscal Year 2015, VA Has Made Efforts to Better Manage Future Budgets. GAO-16-584. Washington, D.C.: June 3, 2016.
Veterans Crisis Line: Additional Testing, Monitoring, and Information Needed to Ensure Better Quality Service. GAO-16-373. Washington, D.C.: May 26, 2016.
Veterans’ Health Care: Proper Plan Needed to Modernize System for Paying Community Providers. GAO-16-353. Washington, D.C.: May 11, 2016.
VA Health Care: Actions Needed to Improve Newly Enrolled Veterans’ Access to Primary Care. GAO-16-328. Washington, D.C.: March 18, 2016.
DOD and VA Health Care: Actions Needed to Help Ensure Appropriate Medication Continuation and Prescribing Practices. GAO-16-158. Washington, D.C.: January 5, 2016.
VA Mental Health: Clearer Guidance on Access Policies and Wait-Time Data Needed. GAO-16-24. Washington, D.C.: October 28, 2015.
VA Primary Care: Improved Oversight Needed to Better Ensure Timely Access and Efficient Delivery of Care. GAO-16-83. Washington, D.C.: October 8, 2015.
VA Health Care: Oversight Improvements Needed for Nurse Recruitment and Retention Initiatives. GAO-15-794. Washington, D.C.: September 30, 2015.
Electronic Health Records: Outcome-Oriented Metrics and Goals Needed to Gauge DOD’s and VA’s Progress in Achieving Interoperability. GAO-15-530. Washington, D.C.: August 13, 2015.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

VA operates one of the largest health care delivery systems in the nation, including 168 medical centers and more than 1,000 outpatient facilities organized into regional networks.
Enrollment in the VA health care system has grown significantly, from 7.9 million in fiscal year 2006 to almost 9 million in fiscal year 2016. Over that same period, VA's Veterans Health Administration's total budgetary resources have increased substantially, from $37.8 billion in fiscal year 2006 to $91.2 billion in fiscal year 2016. Since 1990, GAO has regularly updated the list of government operations that it has identified as high risk due to their vulnerability to fraud, waste, abuse, and mismanagement, or the need for transformation to address economy, efficiency, or effectiveness challenges. VA health care was added as a high-risk area in 2015 because of concerns about VA's ability to ensure the timeliness, cost-effectiveness, quality, and safety of veterans' health care. GAO assesses High-Risk List removal against five criteria: (1) leadership commitment, (2) capacity, (3) action plan, (4) monitoring, and (5) demonstrated progress. This statement, which is based on GAO's February 2017 high-risk report, addresses (1) actions VA has taken over the past 2 years to address the areas of concern that led GAO to designate VA health care as high risk, (2) the number of open GAO recommendations related to VA health care, and (3) additional actions VA needs to take to address the concerns that led to the high-risk designation. The Department of Veterans Affairs (VA) has taken action to partially meet two of the five criteria GAO uses to assess removal from the High-Risk List (leadership commitment and an action plan), but it has not met the other three (agency capacity, monitoring efforts, and demonstrated progress). Specifically, VA officials have taken leadership actions such as establishing a task force, working groups, and a governance structure for addressing the issues that led to the high-risk designation. 
VA provided GAO with an action plan in August 2016 that acknowledged the deep-rooted nature of the five areas of concern GAO identified: (1) ambiguous policies and inconsistent processes; (2) inadequate oversight and accountability; (3) information technology challenges; (4) inadequate training for VA staff; and (5) unclear resource needs and allocation priorities. Although VA's action plan outlined some steps VA plans to take over the next several years, several sections were missing analyses of the root causes of the issues, resources needed, and clear metrics to measure progress. Also of concern are the more than 100 open recommendations GAO has made between January 2010 and February 2017 related to VA health care, almost a quarter of which have been open for 3 or more years. Since February 2015, GAO has made 74 new recommendations relating to the areas of concern. To address its high-risk designation, additional actions are required of VA, including: (1) demonstrating stronger leadership support as it continues its transition under a new administration; (2) developing an action plan to include root cause analyses for each area of concern, clear metrics to assess progress, and the identification of resources for achieving stated outcomes; and (3) implementing GAO's recommendations, not only to remedy the specific weaknesses identified, but because they may be symptomatic of larger underlying problems that also need to be addressed. Until VA addresses these serious underlying weaknesses, it will be difficult for the department to effectively and efficiently implement improvements addressing the five areas of concern that led to the high-risk designation.
Employment. Social Security Administration (SSA) data show that from 2008 to 2009, the total number of people employed in American Samoa declined 19 percent (from 19,171 to 15,434) and that over the entire period from 2006 to 2009, employment declined 14 percent (from 17,852 to 15,434, with a peak of 19,171 in 2008). Data from 2010 on total employment are not yet available. Questionnaire responses from the tuna canning industry show that employment of their workers—most of whom are foreign workers from independent Samoa—dropped by 55 percent from 2009 to 2010, reflecting the September 2009 closure of one cannery and layoffs in the remaining cannery. In addition, we estimated that from 2,000 to 3,000 temporary federal jobs funded beginning in June 2009 will end when federal funding is no longer available. Private sector officials said minimum wage was one of a number of factors, including the high cost of goods and utilities, making it difficult to do business in American Samoa. American Samoa government data show the minimum wage increases would raise government payroll costs for its employees by about 1 percent (about $9 million) over 7 years, and the American Samoa governor and other public officials said they supported a return to biennial reviews of minimum wage in American Samoa or other alternatives to the scheduled increases. Inflation-adjusted earnings of those employed. Earnings data from SSA and consumer price data show that from 2008 to 2009, average inflation-adjusted earnings of those employed fell by 5 percent. This resulted from a decrease in average earnings of 2 percent, and an increase in prices of 3 percent. For the period from 2006 to 2009, average inflation-adjusted earnings fell by 11 percent. This resulted from a rise in average annual earnings of about 5 percent while local prices rose by about 18 percent. 
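The inflation-adjusted figures above follow the standard real-earnings calculation: real change equals (1 + nominal earnings change) divided by (1 + price inflation), minus 1. A minimal sketch in Python, using the American Samoa figures reported above (the function name is ours, for illustration only):

```python
def real_earnings_change(nominal_change, inflation):
    # Real (inflation-adjusted) growth: deflate nominal earnings growth
    # by price growth over the same period. Inputs are fractions.
    return (1 + nominal_change) / (1 + inflation) - 1

# 2008 to 2009: average earnings -2 percent, prices +3 percent
print(f"{real_earnings_change(-0.02, 0.03):.1%}")  # about -5 percent

# 2006 to 2009: average earnings +5 percent, prices +18 percent
print(f"{real_earnings_change(0.05, 0.18):.1%}")   # about -11 percent
```

The same arithmetic reproduces the report's other rounded real-earnings figures; for example, a 7 percent earnings increase against 3.5 percent inflation works out to roughly 3 percent in real terms.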
Although earnings data do not allow for a direct comparison of average and minimum wage annual earnings or for tracking the earnings of workers who lost their jobs, the hourly wage of minimum wage workers increased by more than inflation. The inflation-adjusted earnings of minimum wage cannery workers who retained their jobs and work hours rose by about 8 percent from 2008 to 2009 and by about 23 percent for the entire period from 2006 to 2009. Tuna canning wages. Without a minimum wage increase in American Samoa in 2010, there was no increase in the median wage of tuna canning workers—in both 2009 and 2010, the median tuna canning worker wage was $4.76. Consistent with our last report, from 2007 to 2010, the median wage among workers in the tuna canning industry employed by questionnaire respondents rose by $1.46 (44 percent). Based on questionnaire responses about workers’ wages as of June 2010, the future minimum wage increases would affect the wages of 99 percent of current workers in the tuna canning industry by the time the minimum wage reaches $7.25, increasing the average annual cost per worker in 2016 by $4,660 since June 2010. Tuna canning employer actions. The two employers in the tuna canning industry reported in our questionnaire that they had taken cost-cutting actions from June 2009 to June 2010, including laying off workers, reducing overtime hours, freezing hiring, decreasing benefits, temporarily closing, reducing operating capacity or services, and raising prices, among other actions. They reported plans to take the same types of cost-cutting actions by early 2012, including laying off additional employees. They attributed most of their past and planned actions largely to the minimum wage increases and did so more often than they attributed their actions largely to some other factors, such as transportation and shipping costs and changes in business taxes and fees. 
However, they said a decrease in the number of customers, such as wholesale customers, was another important factor affecting their actions. Tuna canning industry analysis. In addition to the minimum wage increases, cannery officials also expressed concern about American Samoa’s dwindling competitive advantage in the global tuna canning industry and said that current operations in American Samoa were not competitive with other models. Analysis of alternate models available to the industry suggests that moving tuna cannery operations—including unloading, loining (cleaning, cooking, and cutting), and canning fish—from American Samoa to another tariff-free country with lower labor costs would significantly reduce cannery operating costs. However, given that tuna facilities in American Samoa are among the few in the United States that can meet the requirements of U.S. government contracts, many of which require U.S.-sourced and processed fish, maintaining some operations in American Samoa would allow the facility to continue to compete for these contracts. Despite the advantages of moving some operations to other countries, the remaining cannery’s lease obligation through 2013 and the cost of building new facilities elsewhere may pose obstacles to near-term relocation. In addition, a new tuna facility operator has hired a small number of workers formerly employed by the cannery that closed, but it is unclear how many additional workers it will hire. Tuna canning and other worker views. Some workers said they had looked forward to the 2010 minimum wage increase and were disappointed to see the increase delayed. However, more workers, particularly tuna canning workers, expressed concern over job security than favored a minimum wage increase with the potential for subsequent layoffs. See table 2 for key findings and appendix III for detailed findings and tables on American Samoa. Employment.
From 2008 to 2009, the total number of people employed fell by about 13 percent, according to CNMI government tax data. For the entire period from 2006 to 2009, the number employed fell 35 percent. The decrease largely reflected the early 2009 closure of the CNMI’s last remaining garment factories, which employed many foreign workers. In addition, we estimated that fewer than 1,000 temporary federal jobs funded beginning in June 2009 will end when federal funding is no longer available. In the tourism industry, employment among GAO questionnaire respondents fell 8 percent from 2009 to 2010 and fell 14 percent over the entire period from 2007 to 2010. Private sector employers reported in discussion groups some layoffs and hiring freezes, and they said minimum wage increases imposed additional costs during a time in which multiple factors made it difficult to operate. They also expressed concerns about the departure of the garment industry, decline of the tourism industry, population loss, and changes to immigration law. According to CNMI government payroll data, about 17 percent of government workers are paid at or below $7.25 and would be affected by the minimum wage increases by 2016. Inflation-adjusted earnings of those employed. From 2008 to 2009, based on CNMI government tax data and consumer price data, inflation-adjusted average earnings of those employed rose by 3 percent. This reflected a 7 percent increase in average earnings against a 3.5 percent increase in prices. Over the entire period from 2006 to 2009, average inflation-adjusted earnings remained largely unchanged, with a slight drop of 0.5 percent. This reflected a 19 percent increase in average earnings against a 19.5 percent increase in prices. Although earnings data do not allow for a direct comparison of average and minimum wage annual earnings or for tracking the earnings of workers who lost their jobs, the hourly wage of minimum wage workers increased by more than inflation. 
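The inflation adjustment described above can be illustrated with a short calculation. The sketch below, a hypothetical illustration rather than GAO's exact procedure, deflates nominal earnings growth by price growth; the report's rounded figures are broadly consistent with either this ratio form or simple subtraction of the two rates.

```python
def real_change(nominal_growth_pct, inflation_pct):
    """Inflation-adjusted (real) change in earnings, in percent.

    Both arguments are percentages, e.g., 7 for a 7 percent increase.
    Deflating by the price ratio accounts for compounding, which
    simple subtraction of the two rates would miss.
    """
    return ((1 + nominal_growth_pct / 100) / (1 + inflation_pct / 100) - 1) * 100

# CNMI, 2008 to 2009: 7 percent nominal earnings growth, 3.5 percent inflation
print(round(real_change(7, 3.5), 1))    # 3.4; the report states about 3 percent

# CNMI, 2006 to 2009: 19 percent nominal growth, 19.5 percent inflation
print(round(real_change(19, 19.5), 1))  # -0.4; the report states a drop of about 0.5 percent
```

The second case shows why the adjustment matters: nominal earnings rose 19 percent, yet real earnings were essentially flat because prices rose by a similar amount.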
The inflation-adjusted earnings of minimum wage workers who retained their jobs and work hours rose by about 9 percent from 2008 to 2009, and by about 25 percent for the entire period from 2006 to 2009. Tourism wages. From June 2007 to 2010, a period that included three minimum wage increases, the median wage among workers employed by CNMI tourism industry questionnaire respondents rose by 95 cents (26 percent). In addition, in the tourism industry, the 2007 through 2010 wage increases narrowed the wage gap between the lowest and highest paid employees of questionnaire respondents by 52 percent. Employers said in interviews that the wage compression had lowered morale for more senior employees. Based on questionnaire responses about workers’ wages as of June 2010, the future minimum wage increases would affect the wages of 95 percent of current workers in the CNMI tourism industry by the time the minimum wage reaches the U.S. minimum wage of $7.25, increasing the average annual cost per worker in 2016 by $4,707 since June 2010. Tourism employer actions. Hotel and other employers in the CNMI tourism industry responding to our questionnaire reported having taken cost-cutting actions from June 2009 to June 2010, including reducing hours, freezing hiring, decreasing benefits, and raising prices of goods or services. Employers also reported plans to take the same types of cost-cutting actions by early 2012, as well as laying off workers. Few employers—weighted by numbers of employees—attributed their past actions largely to the minimum wage increases, and one half or less did so for each of the planned actions. Employers noted other factors, including changes in immigration law and a decrease in the number of customers, that largely contributed to their actions. Hotel industry analysis. Due to competition from other vacation destinations and to declining visitor arrivals, CNMI hotels have generally absorbed minimum wage costs rather than raise room rates. 
Both visitor arrivals and flight seats available to the CNMI declined from 2005 to 2010, and, by country, the greatest declines in both were from Japan—the CNMI’s largest tourism market. Industry data show that since 2006 the hotel occupancy rate has remained between 58 and 64 percent, while inflation-adjusted room rates declined by about 12 percent from 2006 to 2009. If observed trends continue, scheduled minimum wage increases will increase the share of hotels’ total operating costs attributable to payroll from approximately 29 percent of operating costs in 2010 (with minimum wage increases representing about 1 percent of total operating costs) to 34 percent in 2016 (with minimum wage increases representing about 8 percent of the total). In discussion groups, some hotel and other tourism employers and managers expressed concern about the minimum wage increases, but others said the minimum wage increases were needed and manageable and that the primary difficulty was the CNMI tourism industry’s general decline. Tourism and other worker views. Workers participating in our CNMI discussion groups expressed mixed views regarding the minimum wage increases and said they would like pay increases but were concerned about losing jobs and work hours. Participants said they wanted to receive the pay increases to help meet rising prices, including for utilities and consumer goods. However, they said they had observed that while some workers received pay increases, others lost jobs or work hours. See table 3 for key findings and appendix IV for detailed findings and tables on the CNMI. Since the minimum wage increases began in 2007, both American Samoa and the CNMI have experienced substantial decreases in employment, largely resulting from the loss of one of two tuna canneries in American Samoa and, in the CNMI, loss of the garment industry. The local economies of both these insular areas differ from the U.S. 
economy in a number of ways, including that the percentage of workers paid the minimum wage is much higher than in the U.S. 50 states. In American Samoa, tuna canning employers responding to our questionnaire attributed most past actions such as layoffs, work hour reductions, and hiring freezes largely to the minimum wage increases. CNMI tourism employers responding to our questionnaire also took actions including reducing hours, freezing hiring, and decreasing benefits. Few attributed their past actions largely to the minimum wage increases, and one half or less did so for each of the planned actions. In both areas, employers in discussion groups said the minimum wage increases were one of multiple factors making it difficult to conduct business. The economic declines in American Samoa and the CNMI are substantial, and both areas face budget shortfalls that may threaten their ability to fund public services and make investments in support of future economic development. In addition, the expiration of federal assistance and temporary jobs funded through the Recovery Act and other programs will likely expose greater challenges. Both areas have tried to identify opportunities for new industries and growth, but so far neither has succeeded in attracting significant new investment. Identifying new growth opportunities and maintaining needed infrastructure and services in the meantime will require substantial effort by the private sector and by both the local and federal governments. We provided a draft of this report to officials in DOC, DOI, DOL, SSA, and in the governments of American Samoa and the CNMI for review and comment. We received written comments from DOC, the American Samoa government, and the CNMI government, which are reprinted in appendixes VI, VII, and VIII, respectively. We also received technical comments from DOI and DOL, which we incorporated as appropriate. SSA had no comments. 
We shared excerpts of the draft with several private sector entities and experts and incorporated their comments as appropriate. Following are summaries of the written comments from DOC, the American Samoa government, and the CNMI government, with our responses. Department of Commerce. In its written comments, DOC said it appreciated the opportunity to provide comments, and it provided additional technical comments. American Samoa. In its written comments, the American Samoa government generally agreed with our findings. However, it stated that employment losses and other aspects of economic decline in American Samoa are greater than the report suggests and constitute an economic depression. It stated that application of the U.S. minimum wage to American Samoa, pursuant to the scheduled increases mandated by Congress, continues to have devastating effects on American Samoa’s economy and labor market. The American Samoa government also compared the economy and minimum wage increases in American Samoa to those in the U.S. states and concluded that the minimum wage increases caused severe employment decline in American Samoa. In addition, the American Samoa government recommended in its written comments and in a January 2011 letter that GAO explore alternative methods for setting minimum wage levels in American Samoa and issue recommendations. The government provided several alternative methods for consideration. While we considered these suggestions and summarized them in the report, our research objectives and methodology were developed in response to the legislative mandate and in discussions with congressional requesters. These objectives and methodology were designed to provide sufficient information and analysis to support congressional deliberation on minimum wage in American Samoa and the CNMI. 
In its written comments, the American Samoa government asked GAO or other federal entities to consider the following recommendations: terminate increases in the minimum wage immediately in American Samoa; conduct a thorough analysis of why adverse economic effects of the minimum wage increases were greater in American Samoa than in the United States; and determine procedures for addressing minimum wage in American Samoa in a way that avoids future economic disasters. Appendix VII provides our more detailed evaluation of the American Samoa government’s letter. CNMI. In its written comments, the CNMI government said the draft report fairly characterized current conditions in the CNMI and stated that the findings were similar to those in our last report. It noted that CNMI businesses were struggling to survive as they faced multiple factors, including the contracting economy, uncertainties surrounding the application of U.S. immigration law, rising energy costs, and the global recession. It raised several questions and concerns regarding the report methodology. First, the CNMI government questioned the finding that, for some key past and future actions, such as reducing regular work hours and freezing hiring, no CNMI employers attributed the actions largely to the minimum wage increases. We note that, as stated in the report, we present the weighted percentage of employers who attributed each action to the minimum wage increases “to a large extent” (not those who attributed the action to the minimum wage increases “to a small extent” or “to a moderate extent”). The CNMI government cited several of the limitations of the tourism industry questionnaire, as described in this report, and recommended that the reporting method be improved to gather a clearer picture regarding minimum wage increases and to improve data integrity. 
However, for any questionnaire based on self-reported data, we cannot eliminate the possibility that some employers’ views of the minimum wage increases may have influenced their responses. In addition, the CNMI government stated that the analyses of CNMI residents’ living standards should be strengthened, as required by congressional mandate. Although the original mandate (2009) specifically required us to study minimum wage effects on living standards, the current mandate (2010) does not. However, the report includes qualitative findings related to living standards based on discussion groups with employers and with workers, as well as quantitative findings on the inflation-adjusted earnings of average and minimum wage workers. Last, the CNMI government stated that it appreciated the delay in the minimum wage increase but was concerned about other factors. It recommended that this and future reports provide recommendations to Congress, federal agencies, and the local government on managing these challenges. Appendix VIII provides our more detailed evaluation of the CNMI government’s letter. We are sending copies of this report to interested congressional committees. We also will provide copies of this report to the U.S. Secretaries of Commerce, the Interior, and Labor, to the Commissioner of Social Security, and to the Governors of American Samoa and the CNMI. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have questions about this report, please contact David Gootnick at (202) 512-3149 or gootnickd@gao.gov, or Tom McCool at (202) 512-2642 or mccoolt@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IX. 
This report updates our 2010 report on American Samoa and the Commonwealth of the Northern Mariana Islands (CNMI) with an additional year of information and describes, since the minimum wage increases began, (1) employment and earnings, and (2) the status of key industries. To describe employment and earnings, we analyzed earnings data from the Social Security Administration (SSA) and tax data from the CNMI government, and we adjusted the earnings data using Consumer Price Index (CPI) data for each area. We also analyzed responses from GAO’s questionnaire of large employers in the American Samoa tuna canning and CNMI tourism industries. To describe the status of key industries, we collected responses through the industry questionnaire. For both objectives, we conducted discussion groups with employers and workers and interviews with public officials. We provide additional information on each data source below. In preparing this report, we interviewed officials from the U.S. Departments of the Interior (DOI), Commerce (DOC), and Labor (DOL), as well as from SSA. We reviewed relevant reports and data from DOL and other U.S. government sources. We also reviewed U.S. minimum wage laws and other relevant laws and regulations. We did not focus on the extent to which laws were properly enforced or implemented, although we considered enforcement as appropriate. The scope of our study also does not include workers in the underground economy. As noted previously, the federal sources generally used to generate data on wages, occupations, and employment status for the United States, including the Current Population Survey and the Current Employment Statistics program, do not cover these insular areas. Because these data sources were unavailable, we collected our own data in each area. 
During visits to American Samoa and the CNMI in October 2010, we conducted interviews and discussion groups with government officials, employers in a range of industries and sizes, other private sector representatives, workers, and community members to obtain views and information on the minimum wage increases and related topics. In each area, we established e-mail accounts to obtain comments from the public. We also collected detailed data from large employers in the American Samoa tuna canning and CNMI tourism industries through a questionnaire, as described below. In American Samoa, we visited the island of Tutuila and interviewed officials in the Office of the Governor, the Department of Commerce, the Department of Human Resources, the Department of the Treasury, the Department of Program Planning and Budget Development, the Office of Samoan Affairs, the Department of Legal Affairs, and other American Samoa agencies. We met with officials of DOI’s Office of Insular Affairs. We also interviewed representatives of the private sector, including representatives from the tuna canneries, and workers. In the CNMI, we visited the island of Saipan and interviewed officials in the Office of the Governor, the Department of Commerce, and the Marianas Visitors Authority. We were not able to meet with some other CNMI agencies; according to the Office of the Governor, officials were not available because they had just returned to work following a government shutdown. We met with officials of DOI’s Office of Insular Affairs and of DOL. We also interviewed representatives of the private sector, including representatives from hotels, and workers. In addition, we held a phone interview with the Tinian Chamber of Commerce. In American Samoa and the CNMI in 2010, we collected updated data on employment, wage structure, past and planned employer actions, and related topics covering 2009 and 2010 from employers in key industries who had responded to our 2009 questionnaire. 
We weighted employers’ responses by the number of workers they employed. Questionnaire responses are limited to the American Samoa tuna canning industry and the CNMI tourism industry and are not representative of all workers and employers in each industry or each area. For our 2009 questionnaire, we defined a large employer as one that employed 50 or more workers in recent years. The employers selected to receive the 2009 questionnaire comprised for-profit, not-for-profit, and public sector employers. We sent the questionnaire only to employers with 50 or more workers because we did not have sufficiently reliable frames from which to draw a probability sample of employers and because we could contact only a limited number of employers in each area, given available resources. By limiting our questionnaire to the largest employers, we were able to concentrate data collection efforts on those who employed a disproportionately large percentage of the workforce. In accordance with other federal employment surveys and with our 2009 questionnaire, our 2010 large-employer questionnaire asked for wage data for the pay period containing June 12. The questionnaire asked separately for data regarding workers paid an hourly wage and workers paid an annual salary. The questionnaire also included detailed questions about changes in benefits, about employers’ past and possible future actions, and about the extent to which employers attributed these actions to past and future minimum wage increases. (The questionnaire is reproduced in app. V.) Because our questionnaire collected wage data as of June 12 of each year, the data do not reflect the CNMI’s September 30, 2010, minimum wage increase. 
Before sending the 2009 questionnaire to employers, we pretested it over the phone with three employers in the CNMI and two in American Samoa to make sure that the questions were clear and comprehensive, the data were readily obtainable, and the questionnaire did not place an undue burden on employers. While we eliminated some 2009 questions for the 2010 questionnaire, revisions to remaining questions were minor and did not require additional pretesting. Most employers received the questionnaire by e-mail in an attached Microsoft Word form that they could return electronically after marking checkboxes or entering responses in open-answer boxes. Employers returned questionnaires by e-mail, mail, or fax. We conducted nonresponse follow-up in person and by phone while in the insular areas. We also contacted nonrespondents by e-mail and phone. In addition, we contacted respondents to clarify responses and request any missing data. Because of the lack of data on the entire workforce, it is difficult to precisely state the percentage of the workforce that our questionnaire represents. In 2010, both of the American Samoa employers that received our questionnaire provided responses—including the one remaining tuna cannery and a closely related business that manufactures and supplies cans—resulting in a response rate of 100 percent (both in terms of respondents and employees). Based on Social Security data, our respondents in 2010 represented about 12 percent of the total workforce in 2009. Based on U.S. Economic Census data, our respondents represented about 17 percent of the private sector workforce in 2007. In all, American Samoa questionnaire respondents provided hourly wage data on a total of 1,869 workers as of June 2010. In the CNMI, 12 of 14 employers completed the questionnaire. We confirmed that one employer had closed, and thus we did not count the employer in the final unweighted response rate of 92 percent (12 of 13). 
Respondents included hotels and other employers in the tourism sector, such as tour operators. Based on CNMI tax data, our respondents represented about 6 percent of the total workforce in 2009. Based on Economic Census data, our respondents represented about 8 percent of the total private sector workforce in 2007. Our hotel respondents represented about 48 percent of workers in the CNMI accommodations industry, also based on Economic Census data. In all, CNMI questionnaire respondents provided hourly wage data on a total of 1,576 workers as of June 2010. In reporting the percentages for questionnaire responses throughout our report, we weighted each percentage to reflect the proportion of workers employed by the responding employers relative to all workers employed by all questionnaire respondents. As a result, the responses of larger employers affect our findings more than those of smaller employers. We determined the number of employees at each employer by summing the number of hourly and salaried workers that employers reported in questionnaire responses. In addition to asking a direct question about number of employees, the questionnaire asked respondents to complete a separate table listing the number of employees at each wage or salary level. Separate tables were required for hourly wage and salaried workers. To apply the weights, we cross-multiplied the number of employees by the employer response, then divided by the total number of employees in the sample. For example, if three of five employers attributed an action to the minimum wage to a moderate extent, the unweighted response would be 60 percent. However, if those three employers represented 300 of 400 employees, the weighted response that we report would be 75 percent. For our analyses of the effect of minimum wage increases, we obtained information on earnings and employment for both hourly wage and salaried workers during the pay periods that included June 12, 2010, from our questionnaire. 
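The employee weighting described earlier can be sketched as follows, using the report's hypothetical example in which three of five employers, together employing 300 of 400 workers, attribute an action to the minimum wage (the individual employer sizes below are illustrative):

```python
def weighted_pct(employers):
    """Share of employees, in percent, whose employers gave a response.

    `employers` is a list of (num_employees, gave_response) pairs,
    so each employer's answer counts in proportion to its workforce.
    """
    total = sum(n for n, _ in employers)
    with_response = sum(n for n, gave in employers if gave)
    return 100 * with_response / total

# Three of five employers (60 percent unweighted) attribute the action,
# but they employ 300 of the 400 workers in the sample.
sample = [(150, True), (100, True), (50, True), (60, False), (40, False)]
print(weighted_pct(sample))  # 75.0
```

Because responses are weighted by employment, a single large employer, such as the one remaining cannery or the largest hotels, can dominate the reported percentage.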
We analyzed these responses in conjunction with data we collected from the same employers in 2009 regarding the pay periods that included June 12, 2007, 2008, and 2009. For hourly wage workers, respondents were asked to provide the number of employees paid at each wage rate, and the number of both regular and overtime hours worked during the pay period. For salaried workers, respondents were asked the number of full-time and part-time workers paid at each salary level. However, we focused on the effect of minimum wage increases on hourly wage workers. Hourly wage workers represented about 98 percent of American Samoan workers and 90 percent of workers in the CNMI. One employer, which represented fewer than 20 employees, did not provide complete information on the distribution of hourly wage employees and so was excluded from analyses requiring that information. To determine the number of workers affected by each minimum wage increase, we assumed that all workers employed by questionnaire respondents were legally required to receive the minimum wage. If some are not covered or are exempt, the minimum wage increases would affect fewer workers. After recording the questionnaire data, we verified all keypunched records by comparing them with the corresponding questionnaires and corrected the errors we found. Less than 0.5 percent of the data items we checked had random keypunch errors that would not have been corrected during data processing. Analysis programs were also independently verified. However, we did not independently verify that the wage and other information provided to us was correct. The questionnaire responses cannot be used to make inferences about all employers and workers in each insular area, or about all employers and workers in the covered industries. 
First, because the lists of employers that received the questionnaire were intended to include only those in the American Samoa tuna canning and CNMI tourism industries who had responded to our 2009 questionnaire (with more than 50 employees), the lists were not representative of all employers in each area or of all employers in those industries. Second, we were unable to survey employers that had closed between 2007 and our questionnaire date, including those in the CNMI garment industry. Third, some nonresponse bias may exist in some of the questionnaire responses, since characteristics of questionnaire respondents may differ from those of nonrespondents and nonrecipients in ways that affect the responses (e.g., if employers with more workers would have responded differently than those with fewer). Last, it is possible that some employers’ views of the minimum wage increases may have influenced their responses. In addition, the one tuna cannery in American Samoa employed a large percentage of workers employed by the two questionnaire respondents; as a result, this employer’s responses substantially affected our reported questionnaire data. Among CNMI employer responses, two hotels accounted for more than half of workers employed by questionnaire respondents, so those hotels’ responses substantially affected our questionnaire results. To study CNMI hotel minimum wage and payroll costs in relation to operating costs, we analyzed data provided by CNMI tourism questionnaire respondents on 2009 annual payroll before deductions for taxes and benefits, Social Security and Medicare contributions under the Federal Insurance Contributions Act (FICA), payments for employee benefits, and other operating expenses. For this and other analyses in this report, we excluded nonwage labor costs due to the minimum wage increases, such as increases in employer payroll tax contributions under FICA. 
For 2011, employers must contribute the equivalent of 6.2 percent of employee wages to Social Security (on wages up to $106,800) and 1.45 percent of wages to Medicare. We obtained 2009 SSA data (as of October 2010) on the earnings and employment of individual taxpayers in American Samoa and the CNMI, to update our analysis of data for 2005 to 2008 (as of August 2009) in the previous report. While the SSA data cover all types of workers in American Samoa and were sufficiently reliable for our purposes, three large groups of people in the CNMI were not required to report earnings to SSA and thus are excluded from the SSA data—CNMI government workers and immigrant workers from the Philippines and Korea. In 2008, these three groups represented approximately half of all CNMI workers, according to CNMI government tax data. We have chosen not to report the CNMI SSA data due to these coverage gaps. For American Samoa, SSA told us that all employees were subject to SSA withholding—no group was systematically excluded. In our prior report, we determined that the data were generally consistent with information from other sources, including local American Samoa W-2 data and our questionnaire results. We used SSA data to review trends in employment in American Samoa since the federal minimum wage increases were implemented, including to determine two aspects of employment of American Samoa workers from 2005 to 2009. First, we used SSA data to determine the level of employment. Our count of employed people was based on the number of people who had positive earnings reported to SSA. Second, we reported the average earnings per employed person in American Samoa (excluding those with zero earnings). In addition, we estimated the proportion of employed persons that dropped out of our sample in the following year. 
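The 2011 employer FICA rates cited at the start of this discussion (6.2 percent for Social Security, capped at $106,800 of wages, and 1.45 percent for Medicare, which is uncapped) can be applied as in this sketch; the example wage is hypothetical:

```python
SS_RATE = 0.062         # employer Social Security rate, 2011
MEDICARE_RATE = 0.0145  # employer Medicare rate, 2011
SS_WAGE_CAP = 106_800   # 2011 Social Security wage base

def employer_fica(annual_wages):
    """Employer-side FICA contribution for one worker under 2011 rules.

    Only the Social Security portion is subject to the wage cap;
    Medicare applies to all wages.
    """
    social_security = SS_RATE * min(annual_wages, SS_WAGE_CAP)
    medicare = MEDICARE_RATE * annual_wages
    return social_security + medicare

# Hypothetical full-time worker at the 2010 American Samoa median cannery
# wage of $4.76 per hour: 4.76 * 40 hours * 52 weeks = $9,900.80 per year
print(round(employer_fica(9_900.80), 2))  # 757.41
```

These employer contributions are among the nonwage labor costs that were excluded from the payroll analyses described above.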
Because of data limitations, we were unable to report earnings that had not been reported to SSA, either because of a failure on the part of the employer or because the earnings were not subject to SSA withholding. We also were unable to report on earnings that exceeded the SSA withholding cap. In addition, because our data were as of October 2010, our sample did not include individuals whose W-2 records were entered into Social Security files after October 2010. To assess the reliability of the data, we interviewed agency officials at SSA. To the extent possible, we compared employment counts from the SSA data to counts from other sources. In addition, the counts of American Samoa employment and earnings differ from those in our prior report because of the inclusion of additional earnings from late W-2 filers. We updated our analysis to better isolate the earnings of individuals working in American Samoa. We determined that the available data were adequate and sufficiently reliable for the purposes of depicting trends in employment and earnings in American Samoa. We conducted structured discussion groups with Chamber of Commerce members in American Samoa and the CNMI to collect information on the impact of the minimum wage increases on employers. Employers represented a range of industries and sizes, and we determined that the most effective and least burdensome method of collecting information from smaller employers would be to conduct discussion groups. For each discussion group, the Chamber of Commerce invited members to participate. In the CNMI, we also held discussion groups with members of the Hotel Association of the Northern Mariana Islands and with hotel human resource managers. The number of participants in each group ranged from 7 to about 18 business owners or managers. 
To collect information on workers’ views of the minimum wage increases, we conducted structured discussion groups with various worker and community groups with different organizational affiliations. In each case, we asked the organizations’ leadership to invite members to the discussion groups. In American Samoa, we conducted two worker discussion groups at the remaining cannery, one group with recipients of the U.S. Department of Agriculture’s Women, Infants, and Children program, and one group with participants in a nonprofit organization providing job training to laid-off cannery workers. In the CNMI, we conducted one discussion group with U.S. Department of Agriculture Nutrition Assistance Program recipients and two discussion groups at the public library, publicized through employers, including members of the Hotel Association of the Northern Mariana Islands. The number of participants in each group ranged from 4 to 11. All discussion groups were moderated by a GAO employee following a structured guide with open-ended questions about the minimum wage increases and related topics. Discussion groups are generally designed to obtain in-depth information about specific issues that cannot be easily obtained from single interviews. Methodologically, they are not designed to provide results generalizable to a larger population or provide statistically representative samples or quantitative estimates. They represent the views only of the participants in our groups and may or may not be representative of the population of employers and workers in these insular areas. Therefore, the experiences of other employers and workers may be different from those who participated in our discussion groups. In addition, the groups and participants in the groups were not random samples of employers and workers in these insular areas. The U.S. Bureau of Labor Statistics collects CPI data for the U.S. 50 states but not the insular areas. 
Therefore, we relied on other sources of data to compare changes in earnings or wage rates to changes in prices. We analyzed American Samoa administrative and survey data, including CPI data. We analyzed CNMI administrative and survey data, including CPI data and CNMI data on the number and earnings of workers from the CNMI Department of Finance’s tax returns. The CNMI tax data provide ranges of earnings (including all payments to employees, such as overtime, shift differentials, cash housing and meal allowances, and bonuses) for both public and private sector workers and for both citizens and noncitizens in 2009, allowing us to update our prior analysis of data for 2005 to 2008. We obtained historical data on the CPI from both areas in order to estimate inflation-adjusted earnings. For both American Samoa and the CNMI, we used quarterly CPI data from the first quarter of 2006 to the fourth quarter of 2009. To produce an annual CPI series, we averaged the four quarters in each year. In addition, for American Samoa, because the CPI was rebased in the fourth quarter of 2007, we recalculated the quarterly index series from the fourth quarter of 2008 back to the fourth quarter of 2007 by finding a rebasing factor such that the old and new indexes in the fourth quarter of 2007 were identical. For both the CNMI and American Samoa, we interviewed agency officials and contractors responsible for producing the quarterly CPI estimates. During our interviews and review, we noted irregularities in the CNMI CPI data. After we brought these to the responsible agency officials’ attention, they determined that there had been an error in the published CNMI CPI, and they provided us a corrected index that did not exhibit the same irregularities. The revised CPI’s inflation rate was approximately four percentage points lower than the previously published rate. 
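The rebasing and annual-averaging steps described above can be sketched as follows. This is a minimal illustration of the method only; the quarterly index values are hypothetical placeholders, not the actual American Samoa CPI figures.

```python
# Sketch of the CPI rebasing and annual-averaging steps described above.
# All quarterly index values below are hypothetical, for illustration only.

# Old-base quarterly CPI through Q4 2007, and new-base CPI from Q4 2007 on.
old_index = {"2006Q1": 100.0, "2006Q2": 101.2, "2006Q3": 102.5, "2006Q4": 103.1,
             "2007Q1": 104.0, "2007Q2": 105.3, "2007Q3": 106.1, "2007Q4": 107.2}
new_index = {"2007Q4": 100.0, "2008Q1": 101.5, "2008Q2": 103.0,
             "2008Q3": 104.2, "2008Q4": 105.1}

# Rebasing factor chosen so the old and new indexes agree in Q4 2007.
factor = old_index["2007Q4"] / new_index["2007Q4"]

# Express the new-base quarters on the old base, giving one continuous series.
spliced = dict(old_index)
for quarter, value in new_index.items():
    spliced[quarter] = value * factor

def annual_average(series, year):
    """Average the four quarterly index values for a year."""
    quarters = [series[f"{year}Q{i}"] for i in range(1, 5)]
    return sum(quarters) / len(quarters)

print(round(annual_average(spliced, 2008), 2))
```

The same splicing approach works in either direction; the choice of which series to rescale only changes the reference period, not the measured inflation rates.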
The CNMI CPI data cover the island of Saipan; CPI data for 2006 to 2009 were not available for the islands of Tinian and Rota. We also analyzed industry data. For example, to determine hotel room prices and hotel occupancy rates in the CNMI, we collected data from the Hotel Association of the Northern Mariana Islands and conducted related interviews and correspondence. In addition, the Marianas Visitors Authority provided data on flight seats and arrivals by country of residence. We found the data used to be reliable and relevant for the purposes of our report. In general, to establish the reliability of the data that we used for reporting trends and statistics for both American Samoa and the CNMI, we systematically obtained information about the way in which data were collected and tabulated. When possible, we checked for consistency across data sources. While the data had some limitations, we determined that the available data were adequate and sufficiently reliable for the purposes of our review. Our review had certain limitations in addition to those already noted. In particular, although our approach yielded information on trends in employment, wages, and earnings in both areas, it is difficult to distinguish between the effects of minimum wage increases and the effects of other factors, including the global recession beginning in 2009, fluctuations in energy prices, global trade liberalization, and the application of U.S. immigration law to the CNMI. In addition, our review of minimum wage increases is limited to American Samoa and the CNMI, and we did not study minimum wage increases in the U.S. economy more broadly. We conducted our work from September 2010 to June 2011 in accordance with all sections of GAO’s Quality Assurance Framework that are relevant to our objectives. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and to discuss any limitations in our work. 
We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for the findings in this product. American Samoa comprises five volcanic islands and two coral atolls with a combined land area of 76 square miles—slightly larger than Washington, D.C.—located about 2,600 miles southwest of Hawaii (see fig. 1). In 2005, American Samoa had a population of about 63,780. Its capital, Pago Pago, is on the main island of Tutuila, which consists mostly of rugged terrain with relatively little level land; most economic activity and government operations in American Samoa take place in the Pago Pago Bay area. U.S. interest in the Samoan islands began in 1872 with the efforts of the U.S. Navy to establish a naval station in Pago Pago harbor. The protectorate over all the Samoan islands established by the United States, Britain, and Germany ended in 1899, when the islands composing American Samoa were placed under U.S. control. The U.S. Naval Station was established in 1900. From 1900 through 1904, the U.S. government negotiated control over American Samoa, and the U.S. Navy subsequently took responsibility for federal governance of the territory. In 1951, governance was transferred to the Secretary of the Interior. In 1960, American Samoa residents adopted their own constitution. Amendments to the constitution of American Samoa can only be made by the U.S. Congress. Persons born in American Samoa are U.S. nationals but may apply to become naturalized U.S. citizens. U.S. noncitizen nationals from American Samoa have the right to travel freely, live, and work throughout the United States. American Samoa exercises authority over its immigration system and customs through locally adopted laws. Additionally, the U.S. 
government has supported American Samoa’s economy through trade and tax policies that, respectively, have provided tariff-free access to the United States for tuna canned in American Samoa and have reduced federal taxes on income earned by qualifying U.S. corporations investing in American Samoa. However, changes to various free trade agreements within the last decade have lowered U.S. tariffs on tuna exported from several other countries, reducing the American Samoa canneries’ competitive advantage. U.S. tax policies, designed to encourage certain U.S. corporations to invest in the U.S. insular areas and create jobs, expired and were recently extended through 2011. Part of the Mariana Islands Archipelago in Micronesia, the CNMI is a chain of 14 islands in the western Pacific Ocean—just north of Guam and about 3,200 miles west of Hawaii (see fig. 5). Most of the CNMI population—65,927 in 2005, with recent estimates indicating a decline—resides on the island of Saipan, with additional residents on the islands of Tinian and Rota. The United States took control of the Northern Mariana Islands from Japan during the latter part of World War II, and after the war the U.S. Congress approved the Trusteeship Agreement making the United States responsible to the United Nations for the administration of the islands. Later, the Northern Mariana Islands sought self-government and permanent ties with the United States. In 1976, after almost 30 years as a trust territory, the District of the Mariana Islands entered into a covenant with the United States establishing the island territory’s status as a self-governing commonwealth in political union with the United States. This covenant grants the CNMI the right of self-governance over internal affairs and grants the United States complete responsibility and authority for matters relating to foreign affairs and defense affecting the CNMI. 
The covenant initially made many federal laws applicable to the CNMI, including laws that provide federal services and financial assistance programs. The covenant preserved the CNMI’s exemption from certain federal laws that had previously been inapplicable to the Trust Territory of the Pacific Islands, including federal immigration laws, with certain limited exceptions, and certain federal minimum wage provisions. However, under the terms of the covenant, the federal government has the right to apply federal law in these exempted areas without the consent of the CNMI government. Until recently, the CNMI retained legislative authority over most aspects of immigration, regulating entry into the CNMI through a permit system. In 2008, federal legislation amended the U.S.-CNMI Covenant to establish federal control of CNMI immigration; the law includes several provisions affecting access to the CNMI by foreign workers, tourists, and foreign investors that were implemented beginning in November 2009. As we reported in August 2008, the potential impact of the legislation’s implementation on the CNMI’s labor market will largely depend on decisions that the U.S. Departments of Homeland Security (DHS) and DOL make in implementing a program to provide foreign workers temporary permits to work in the CNMI. Although modest reductions in CNMI-only permits for foreign workers would cause minimal impact, any substantial and rapid decline in the availability of CNMI-only work permits for needed workers would have a negative effect on the economy, given foreign workers’ prominence in key CNMI industries. As of May 2011, DHS had not issued final regulations for workers, nor had it made a permanent decision regarding access for visitors from Russia and China. It issued regulations for foreign investors in December 2010. 
The CNMI government, excluding component units, spent approximately $64 million in grants from several federal agencies in fiscal year 2009, according to the most recent CNMI Report on the Audit of Financial Statements. The CNMI has received federal funds under the American Recovery and Reinvestment Act that temporarily supplement local government revenues. Excluding component units, local government spending exceeded revenues each year from 2005 to 2009 (see fig. 6). From 2005 to 2009, the CNMI government’s budget deficit averaged 15 percent of total revenues. In October 2010, the CNMI government partially shut down due to a budget impasse. The shutdown was in effect for 10 days, and the local government has since enacted austerity measures, including eliminating paid holidays for government employees and reducing full-time employees’ schedules by 16 hours per 2-week pay period. Since June 2010, the local government has also occasionally delayed payroll and is currently considering laying off employees. Additionally, the CNMI government has struggled to make payments to various local government departments and units. For example, the Northern Mariana Islands Retirement Fund actuarial assessment for fiscal year 2008 reported unfunded pension liabilities of about $530 million. In December 2010, the CNMI government owed unpaid utility bills to the Commonwealth Utilities Corporation, resulting in disconnected power and water service for some local government departments. In addition, the Marianas Visitors Authority had unfunded liabilities of about $2 million. The Commonwealth Utilities Corporation has been operating under a state of emergency since August 2008. The U.S. Bureau of Economic Analysis (BEA) estimated that the CNMI’s GDP in 2007 was $962 million. From 2002 to 2007, real GDP decreased at an estimated average annual rate of 4.2 percent. Per capita real GDP increased at an estimated average annual rate of 0.5 percent, at least partly because of population loss. 
For years, the garment and tourism industries were the mainstay of the CNMI’s economy, generating employment and bringing revenue from outside the CNMI. For example, in 1999, these two industries accounted for about 85 percent of the CNMI’s total economic activity and 96 percent of its exports. Several developments in international trade caused the CNMI’s garment industry to decline dramatically. In January 2005, in accordance with a World Trade Organization 10-year phaseout agreement, the United States eliminated quotas on textile and apparel imports from other textile-producing countries, exposing the CNMI apparel industry’s shipments to the United States to greater competition. Subsequently, the value of CNMI textile exports to the United States dropped from a peak of $1.1 billion in 1998 to close to zero in 2010 (see fig. 7). The number of licensed CNMI apparel manufacturers decreased rapidly, from 34 firms in 1999 to 6 firms as of July 2008. By the end of the first quarter of 2009, the last garment factory in the CNMI had closed. In addition, the CNMI economy has been negatively affected by trends and uncertainty in the tourism industry—the CNMI’s primary private sector industry. For example, tourism in the CNMI declined after peaking in the mid-1990s, beginning with the Asian financial crisis in the late 1990s. In 2003, according to CNMI officials, tourism slowed for several months in reaction to the severe acute respiratory syndrome epidemic, which originated in Asia, and the war in Iraq. Total visitor arrivals to the CNMI dropped from a peak of 726,690 in 1997 to 368,186 in 2010, a decline of 49 percent, as shown in figure 8. Japan, Korea, Russia, and China are the CNMI’s primary visitor markets, with Japan representing the largest share of any country and Russia and China representing emerging markets. 
CNMI government officials reported declines in Japanese visitor arrivals following the March 2011 earthquake and tsunami in Japan, and they expressed concern about the impact on the CNMI’s tourism industry. The tourism industry also may be affected by the November 28, 2009, implementation of a joint visa waiver program for visitors to the CNMI and Guam, as part of the application of U.S. immigration law. While an interim final rule currently governs the operation of the Guam-CNMI Visa Waiver Program, DHS has not issued final regulations for this program. During the expansion of the CNMI garment and tourism industries prior to 1995, the CNMI economy became dependent on foreign labor, as the CNMI government used its authority over its own immigration policy to bring in large numbers of foreign workers and investors. In 1995, two-thirds of the CNMI working population were temporary residents, including about 93 percent of workers in the garment industry and slightly over 72 percent in the tourism industry. In contrast, in the same year, U.S. citizens and permanent residents of the CNMI held about 96 percent of jobs in the public sector. As a result, the CNMI economy developed a two-tiered wage structure, with U.S. citizens and permanent residents earning 3.5 times more in 1995 than temporary residents. However, with the decline of the garment and tourism industries, the number and proportion of noncitizens in the CNMI labor force and population have decreased (see fig. 9). In 2005, noncitizen workers in the CNMI were predominantly from China or the Philippines. As noted above, the application of U.S. immigration law might result in further changes in the composition of the CNMI’s workforce. In addition, the CNMI’s economy may be affected in the future by the planned build-up of the U.S. military in neighboring Guam. By 2014, the U.S. 
Department of Defense intends to relocate 8,600 Marines and additional military units, as well as an estimated 9,000 dependents from Okinawa, Japan, to Guam, increasing Guam’s current population by an estimated 25,000 active duty military personnel and dependents. The Department of Defense plans to use the island of Tinian to conduct training operations and construct firing ranges. Local government officials expect the build-up to have a small positive impact on the CNMI in the immediate future, with potentially greater positive impacts in the medium- to long-term. However, while it is possible that the build-up will result in new businesses and tourism opportunities for the CNMI, some local private sector officials anticipate that the build-up will have little to no economic benefit for the commonwealth. In an effort to explore opportunities for future economic development, the local government has identified potential growth industries, including call centers, agriculture, and aquaculture. The CNMI’s Comprehensive Economic Development Strategic Plan, 2009-2014 outlines a number of developmental projects in various industries. However, the plan indicates that many challenges exist to implementing these projects and notes that, despite past studies and efforts to identify new industries, the CNMI has had difficulty attracting new investors and developing new industries. 
Personal Income and Poverty Rates 
Current federal data on income and poverty levels in the CNMI do not exist; however, the most recent available data show that the CNMI had lower income and higher poverty rates than the mainland United States. For example: 
In 2004, the CNMI median household income was $17,138, while the U.S. 50-state and District of Columbia median household income was $44,334. 
In 2004, the CNMI poverty rate for all persons was 53.5 percent, while the U.S. 50-state and District of Columbia poverty rate for all persons was 12.7 percent. 
The federal minimum wage was first enacted as part of the Fair Labor Standards Act of 1938 (FLSA). That first federally mandated minimum wage had repercussions in the U.S. Virgin Islands and Puerto Rico that led the United States, in 1940, to revise the application of the law in those territories; the overarching goal of the FLSA continued to be pursued there, but at a slower pace than in the U.S. 50 states. As of July 2009, the federal minimum wage was set at $7.25 per hour. Federal minimum wage laws apply generally to any employee engaged in commerce, with limited exceptions and exemptions. Certain employees who would otherwise be covered under the FLSA definitions are exempted by law from the minimum wage requirements—for example, employees involved with seafood at sea are exempt. Employees not covered by FLSA include, for example, individuals engaged in agriculture, if the employer is an immediate family member. DOL’s Wage and Hour Division enforces a variety of U.S. labor laws, including laws related to minimum wage, overtime pay, child labor, and family medical leave. The division uses a number of enforcement strategies, including investigations and partnerships with external groups, such as states, foreign consulates, and employee and employer organizations. From 1956 to 2007, employers in American Samoa were allowed to pay their employees at hourly rates less than the federal minimum wage. During that period, rates were set by special industry committees established by the U.S. DOL, through biennial reviews conducted with the participation of island stakeholders that included representatives of government, key industries, and workers. The special industry committees system continued to exist until May 2007, when Congress required an incremental increase in the minimum wage for all industries in American Samoa, at $.50 per year in each industry, until it reaches the full federal minimum wage. In 2010, the U.S. 
enacted a law delaying the scheduled minimum wage increases for 2 years, providing for no increase in 2010 or 2011. For example, if the current federal minimum wage of $7.25 remains unchanged, the minimum wage for American Samoa tuna canning industry workers will reach $7.25 in 2016 (see table 4). Under the terms of the CNMI-U.S. covenant, the CNMI was exempt from the minimum wage provisions of the FLSA and maintained control over its own minimum wage system. Legislative changes to the federal minimum wage in 2007 specified that the CNMI would be subject to the federal minimum wage, through a staged $.50 incremental approach. The law raised the CNMI minimum wage from $3.05 to $3.55 per hour in July 2007 and required a $.50 increase every year thereafter until the FLSA-CNMI minimum wage equals the full federal minimum wage. In 2010, the U.S. enacted a law delaying the scheduled minimum wage increase for 1 year, providing for no increase in 2011 (see table 5). The federal government has conducted or funded several reports on minimum wage increases in American Samoa and the CNMI in recent years. In May 2007, DOL’s Wage and Hour Division issued a report on the minimum wage in American Samoa as part of DOL’s biennial review process under the special industry committees. The report analyzes American Samoa’s wage and employment structure based on a 2006 employment and wage survey, and it provides the numbers of employees in each industry who would be affected by a range of possible minimum wage increases. The report stated that the average hourly wage in the fish canning industry was $3.60 and in the American Samoa government was $7.75. It found that 50 percent of American Samoa workers were paid less than $4 per hour. In January 2008, DOL issued a report on the economic impact of minimum wage increases in both American Samoa and the CNMI, as required by a 2007 law. 
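The scheduled step-ups described above can be sketched as follows: $0.50 per year, skipped in the delayed years, and capped once the rate reaches the $7.25 federal minimum. The helper function is ours; the $4.76 starting rate for 2009 cannery workers is drawn from figures reported later in this document.

```python
# Sketch of the scheduled minimum wage step-ups described above:
# $0.50 per year, skipping the delayed years, capped at the federal minimum.
# The 2009 cannery rate of $4.76 comes from this report; the helper is ours.

FEDERAL_MINIMUM = 7.25

def schedule(start_wage, start_year, skipped_years, step=0.50):
    """Return {year: wage} until the wage reaches the federal minimum."""
    wages = {start_year: start_wage}
    wage, year = start_wage, start_year
    while wage < FEDERAL_MINIMUM:
        year += 1
        if year in skipped_years:
            continue  # a delay year: no increase takes effect
        wage = min(round(wage + step, 2), FEDERAL_MINIMUM)
        wages[year] = wage
    return wages

# American Samoa tuna canning industry: delays in 2010 and 2011.
cannery = schedule(4.76, 2009, skipped_years={2010, 2011})
print(max(cannery))  # year the cannery wage reaches $7.25 -> 2016
```

Run with a different starting rate or set of skipped years, the same helper reproduces the CNMI schedule, where only 2011 was delayed.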
For American Samoa, the study noted concern that the tuna canneries would close before the minimum wage reached the U.S. federal minimum wage of $7.25 per hour, causing substantial job losses. The report stated that over three-quarters of American Samoa workers earned under $7.25 per hour and that if the U.S. minimum wage were increased to the level of the 75th percentile of hourly-paid U.S. workers, it would be raised to $16.50 per hour. For the CNMI, the study found that although data were not available to precisely quantify the impact of the scheduled minimum wage increases, it seemed likely that the CNMI’s existing economic decline would be made worse and that the CNMI population would continue to decline. The U.S. Department of the Interior (DOI) funded studies of the American Samoa and CNMI economies, including the minimum wage increases. A February 2008 study assessing the relationships between different sectors of the American Samoa economy found that a doubling of American Samoa’s minimum wage in a 7-year period could result in the end of the fish processing industry, which represented approximately one-half of American Samoa’s economic base, and serious consequences for the economy. The authors predicted that costs would rise due to minimum wage increases in other industries, and transportation, energy, and utility costs would increase because the canneries would no longer be available to share those costs. They found that, under a worst-case scenario, American Samoa could lose 46 percent of all jobs in the territory. In this scenario, rising minimum wages would cause a complete closure of American Samoa’s tuna canneries. A long recovery period would follow, with high unemployment rates, business closures or cutbacks, and declines in local revenue collection. They found that local government would be unable to adequately address the situation, requiring outside assistance. 
An October 2008 study of the CNMI economy examined the impact of both federal immigration policy and the minimum wage increases. In framing this analysis, the study found that lifting of quotas on garment imports to the United States had rendered the CNMI’s garment industry unfeasible and estimated that the loss of 16,800 garment jobs could ultimately cost the CNMI economy about 25,200 jobs, about 60 percent of peak employment in 2004. The study projects the combined effect of the closure of the garment industry with the implementation of the federal minimum wage and an application of federal immigration policy, whereby almost the entire foreign workforce is removed from the CNMI economy. In this projection, the employment of U.S.-qualified residents increases by 21 percent from 2005 to 2015, but real wages and salaries of U.S.-qualified residents fall by 19 percent. In addition, immigration-policy changes quickly remove foreign workers on government-approved contracts from the economy, and U.S.-qualified residents take jobs in the tourism industry. Despite the increased minimum wage, most of the jobs are projected to pay lower wages than U.S.-qualified residents had come to expect. The study also provides an alternative projection under which the minimum wage is held at $4.05, foreign labor is not restricted, and an aggressive promotion program successfully doubles visitor arrivals by 2015. In this projection, the employment of U.S.-qualified residents increases by 4 percent from 2005 to 2015, and real wages and salaries of U.S.-qualified residents increase by 15 percent. The authors suggested, among other recommendations, that the law extending the minimum wage requires further analysis, and they noted that officials are seeking to modify the scheduled increases. 
Possible modifications discussed include lengthening the period over which the minimum wage is increased, basing increases on measures of worker productivity, or using a special program for adjustment as had previously been done in American Samoa. In American Samoa, SSA data show that total employment fell 19 percent from 2008 to 2009 and fell 14 percent from 2006 to 2009, though it increased in some years. Data from 2010 on total employment are not yet available. Questionnaire responses show that tuna canning employment dropped by 55 percent from 2009 to 2010, reflecting the September 2009 closure of one cannery and layoffs in the remaining cannery. In addition, we estimated that from 2,000 to 3,000 temporary federal jobs funded beginning in June 2009 will end when federal funding is no longer available. Average inflation-adjusted earnings in American Samoa fell by 5 percent from 2008 to 2009 and by 11 percent from 2006 to 2009. However, over both periods, the minimum wage increased by significantly more than inflation. Private sector officials said the minimum wage was one of a number of factors making it difficult to do business, and public officials said they supported returning to biennial reviews of minimum wages or other alternatives to the scheduled increases. In the tuna canning industry, without a minimum wage increase in American Samoa in 2010, there was no increase in the median wage of tuna canning workers, which was $4.76 in both 2009 and 2010. The most recent minimum wage increase in May 2009 affected the wages of 69 percent of hourly-wage cannery workers. Future minimum wage increases would affect the wages of 99 percent of current cannery workers. The two canning industry employers included in our questionnaire reported taking cost-cutting actions from June 2009 to June 2010, including laying off workers, reducing overtime hours, freezing hiring, and decreasing benefits, as well as raising prices. 
The employers reported plans to continue taking cost-cutting actions in 2011. The employers attributed most of their past and planned actions largely to the minimum wage increases. Cannery officials we interviewed expressed concern about American Samoa’s dwindling competitive advantage in the global tuna canning industry. Though the cannery faces some near-term obstacles to relocating, our analysis suggests that relocating tuna cannery operations from American Samoa to a tariff-free country with lower labor costs would significantly reduce cannery operating costs and reduce American Samoa jobs; however, maintaining some operations in American Samoa would allow the facility to continue to compete for U.S. government contracts. Some workers said they were disappointed to see the minimum wage increase delayed in 2010 and 2011; however, more workers expressed concern over job security than favored a minimum wage increase with the potential for subsequent layoffs. Overall American Samoa employment declined from 2006 to 2009, based on Social Security data; however, employment increased in some years. As shown in figure 10, employment grew from 2006 to 2008 but then fell in 2009 to a level lower than in 2006. Specifically, available data show that from 2008 to 2009, the total number of people employed in American Samoa fell by 19 percent (from 19,171 to 15,434) and that over the entire period from 2006 to 2009, employment fell by 14 percent (from 17,852 to 15,434, with a peak of 19,171 in 2008). Because Social Security data are not available for 2010, we are unable to report on the overall level of employment for the year. However, the cannery that closed in September 2009 employed approximately 2,000 workers, and there were layoffs in the remaining cannery—most of these losses do not appear in the 2009 SSA data. 
For the tuna canning industry, questionnaire responses from the remaining cannery and a closely related business show that employment of their workers—most of whom are foreign workers from independent Samoa—dropped by 55 percent from 2009 to 2010 (from 4,125 to 1,869) and dropped by 59 percent for the entire period from 2007 to 2010 (from 4,593 to 1,869). In addition, we estimated that from 2,000 to 3,000 jobs funded by the U.S. Census Bureau, the Recovery Act, and recovery efforts after the 2009 tsunami were temporary and will end when federal funding is no longer available. As a result, counts of the total number employed during this period will be higher than the number of long-term positions. The temporary jobs were funded beginning in June 2009, and the great majority of those were disaster recovery positions related to the tsunami. In addition, the Census Bureau employed enumerators and managers to assist with collection of 2010 Decennial Census data, and Recovery Act funds supported workers on infrastructure projects and in other fields. In discussion groups, private sector employers generally opposed additional minimum wage increases but said that a number of other factors made it difficult to do business in American Samoa. For example, they said increases in prices of utilities, shipping, and raw materials; an outdated tax structure; low levels of investment; and business licensing problems also make it difficult to establish and do business in American Samoa. They said that federal tsunami recovery assistance, Recovery Act funds, and Decennial Census employment provided relief and increased business, particularly in the construction industry, temporarily obscuring the full force of American Samoa’s economic downturn. As this temporary relief period ends, employers expect that American Samoa’s economic situation will worsen. They also said that, with fewer tuna exports, they expect increases in shipping wait-times and costs. 
Many employers supported the 2010 and 2011 delays in the minimum wage increases, and some said it is important to use the 2-year delay to address business challenges. For example, they said that while it is difficult to develop tourism in American Samoa, it is important to try. They also said they were concerned about the fiscal status of the local government and the possibility of harmful tax increases. An American Samoa government analysis found that the minimum wage increases would raise the government’s wage and salary costs for its employees by about 1 percent (about $9 million) over 7 years. Public officials said they supported a return to biennial reviews of minimum wage in American Samoa or other alternatives to the scheduled increases. In January 2011, the American Samoa governor signed a letter saying that the federal minimum wage increases had had devastating effects on American Samoa employment and the economy. The letter urged consideration of alternative methods of determining minimum wages in American Samoa, including the previous DOL biennial review process or some modification of it, or amending laws to specify the conditions to be considered in determining the minimum wage. Average earnings of workers who maintained employment rose from 2006 to 2009, but available data show that the increase was not sufficient to overcome the increase in prices. As shown in figure 11, based on SSA and consumer price data, from 2008 to 2009 (the most recent year available), average inflation-adjusted earnings fell by 5 percent. This decline resulted from a decrease in average earnings of 2 percent and an increase in prices of 3 percent. For the period from 2006 to 2009, average inflation-adjusted earnings fell by 11 percent, the net effect of a rise in average annual earnings of about 5 percent combined with an 18 percent increase in prices. 
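The inflation adjustment above is simple arithmetic: a nominal earnings change is deflated by the price change over the same period. A minimal sketch, using the percentages reported above:

```python
# Sketch of the real-earnings arithmetic above: a nominal change is
# deflated by the price change, so a 5 percent nominal rise against an
# 18 percent price rise is a real decline of about 11 percent.
# The percentages used below are the ones reported in this document.

def real_change(nominal_pct, price_pct):
    """Percent change in inflation-adjusted (real) earnings."""
    return ((1 + nominal_pct / 100) / (1 + price_pct / 100) - 1) * 100

print(round(real_change(5, 18)))   # 2006-2009: prints -11
print(round(real_change(-2, 3)))   # 2008-2009: prints -5
```

Note that the real change is a ratio, not a simple subtraction: 5 percent minus 18 percent would overstate the decline at 13 percent, whereas deflating gives the 11 percent figure in the text.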
While Social Security earnings data do not allow for a direct comparison of average and minimum wage annual earnings or for tracking the earnings of workers who lost their jobs or left the area, the hourly wage of minimum wage workers increased by more than inflation. The inflation-adjusted earnings of minimum wage cannery workers who retained their jobs and work hours rose by about 8 percent from 2008 to 2009 and by about 23 percent for the entire period from 2006 to 2009.

Minimum Wage Increases in 2007-2010 Increased Median Wage for Tuna Canning Industry Employees

Without a minimum wage increase in American Samoa in 2010, there was no increase in the median wage of workers in the tuna canning industry—in both 2009 and 2010, the median tuna canning worker wage was $4.76. Consistent with our last report, the median hourly wage rose from $3.30 in June 2007 to $4.76 in June 2010, a 44 percent increase, according to tuna canning questionnaire responses (see table 6). During this period, the minimum wage for canning workers increased three times from $3.26 to $4.76, an overall increase of 46 percent.

Minimum Wage Increases in 2007-2010 Slightly Narrowed the Wage Gap between Lower- and Higher-Paid Workers Employed by Questionnaire Respondents in Tuna Canning Industry

Responses to our questionnaire indicate that the minimum wage increases narrowed the gap between the wages of lower- and higher-paid workers in American Samoa’s tuna canning industry (see fig. 12). Specifically, the gap between the wages of the lowest- and highest-paid tuna canning workers was $0.28 in June 2007 and $0.25 in June 2010, a small decline of 11 percent.

Minimum Wage Increases in 2010-2018 Would Affect Wages of Almost All Workers in Tuna Canning Industry

As the minimum wage increases continue, they will affect a growing percentage of workers in American Samoa’s tuna canning industry.
Based on questionnaire responses about workers’ wages as of June 2010, 69 percent of canning industry workers were at the minimum wage in 2009 and 2010. The future minimum wage increases would affect the wages of 99 percent of current canning industry workers by the time the minimum wage reaches $7.25 in 2016. By 2016, the extra annual cost added by minimum wage increases after June 2010 (reflecting the 2009 increase) would be $4,660 per worker (see table 7). We identified the additional cost by calculating the difference between the cost per worker in June 2010 and the cost per worker through 2016, based on the scheduled minimum wage increases and averaged across all workers.

Tuna Canning Employers Reported Cutting Costs and Laying Off Workers from 2009 to 2010, with Most Actions Attributed to Minimum Wage Increases

The two employers in the tuna canning industry reported in our questionnaire that they had taken cost-cutting actions from June 2009 to June 2010. For example, the two respondents reported having taken cost-cutting actions affecting workers’ income, including laying off hourly and salaried workers, reducing overtime hours for hourly workers, freezing hiring, and temporarily closing. The employer representing the majority of workers employed by questionnaire respondents also reported having decreased hourly workers’ benefits and reduced regular work hours for hourly workers. The two employers reported additional cost-cutting actions, including reducing operating capacity or services offered and implementing other cost- and labor-saving strategies or technology. The employer representing the majority of workers employed by questionnaire respondents also reported having delayed expansions. Both employers reported that they had raised prices of goods or services. For most of these actions, employers attributed their actions largely to the minimum wage increases.
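The extra-cost-per-worker calculation above (the $4,660 figure) can be sketched as follows, assuming a 2,080-hour work year; the wage list here is hypothetical, since the questionnaire's actual wage distribution is not reproduced in this report:

```python
FULL_TIME_HOURS = 2080  # assumption: 40 hours/week, 52 weeks/year

def extra_annual_cost_per_worker(wages, new_minimum):
    """Average extra annual cost per worker when every hourly wage
    below new_minimum must be raised to it."""
    extra = [max(new_minimum - w, 0) * FULL_TIME_HOURS for w in wages]
    return sum(extra) / len(wages)

# Hypothetical June 2010 wages: most workers at the $4.76 minimum,
# a few above it (illustrative only).
wages = [4.76, 4.76, 4.76, 5.25, 6.10]
print(round(extra_annual_cost_per_worker(wages, 7.25)))  # 4418
```

Applied to the actual distribution of canning wages, a calculation of this kind yields the $4,660 average cited above; workers already above the new minimum add nothing, which is why the average is below the full $2.49-per-hour increase annualized.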
Tuna Canning Employers Reported Plans to Take Additional Actions by Early 2012, with Most Planned Actions Attributed to Minimum Wage Increases

The two questionnaire respondents in the tuna canning industry also reported plans to take the same types of cost-cutting actions in the next 18 months, by early 2012. They reported planning to take cost-cutting actions affecting workers’ income, including laying off additional hourly and salaried workers and freezing hiring. The employer representing the majority of workers employed by questionnaire respondents reported planning to decrease benefits of both hourly and salaried workers and reduce regular and overtime hours. The two employers also reported plans to take additional cost-cutting actions, including implementing other cost- and labor-saving strategies or technology. The employer representing the majority of workers employed by questionnaire respondents reported planning to delay expansions and reduce operating capacity or services offered. The employer representing fewer workers employed by questionnaire respondents reported planning to raise prices. Employers attributed most of these plans largely to the minimum wage increases. Employers in American Samoa’s tuna canning industry reported that any actions by the larger employer will affect the smaller employer.

Tuna Canning Employers Attributed Their Actions to Minimum Wage Increases More Often than to Other Factors

Employers attributed their actions largely to the minimum wage increases more often than they attributed their actions largely to other factors, such as transportation and shipping costs and changes in business taxes and fees. However, they said a decrease in the number of customers, such as wholesale customers, was another important factor affecting their past and planned actions. Employers also attributed their past and planned actions to increased utility costs to a moderate extent.
Cannery Officials Are Concerned about American Samoa’s Dwindling Competitive Advantage in Global Tuna Canning Industry

Cannery company officials we interviewed indicated that labor costs, including the minimum wage increases, continued to place American Samoa at a significant cost disadvantage compared with other canned tuna exporting countries. As we previously reported, by raising the hourly minimum wage for cannery workers in American Samoa from $3.26 in 2006 to $4.76 in May 2009 (remaining at $4.76 in 2010 and 2011)—a total increase of 46 percent—the three minimum wage increases to date have further widened the gap between American Samoa and production sites with lower labor costs, such as Thailand, which has a minimum wage of less than $1 an hour. Cannery officials continued to state that wage increases were one of many factors affecting the tuna canning industry in American Samoa. Officials from the remaining cannery said that in previous years uncertainty regarding the minimum wage increases meant they could not plan American Samoa operations far in advance, impairing their ability to make a long-term commitment to maintaining operations in American Samoa. They said that although they continue to consider relocation or closure of the American Samoa facility as one of many possible scenarios, knowing that wages would be stable through 2012 had allowed them to better stabilize operations in American Samoa. In addition to higher wages, company officials noted that the continued increases in shipping and utility rates—partly owing to increased fuel costs in recent years—add to increased operating costs. Loss of eligibility for certain U.S. tax benefits also contributed to rising costs.
Furthermore, as a result of the September 2009 cannery closure, the remaining cannery has since been responsible for all maintenance costs—such as waste disposal and water discharge—that the two canneries previously shared, as well as increased power and water costs. Opportunities for shared services between the remaining cannery and the newly acquired facility will depend on the scope of operation at the new facility, which remains unknown. Officials at the remaining cannery noted that, while duty-free access to the U.S. market for canned tuna exports from American Samoa once made production in American Samoa advantageous, trade liberalization has since significantly reduced tariff advantages. Additionally, cheaper operating costs in alternative locations expand the cost gap between canned tuna produced in American Samoa and canned tuna produced elsewhere. As a result of the factors discussed, representatives from the remaining cannery report that they have shifted a portion of production to facilities outside of American Samoa and continue to report that it is no longer cost-effective to operate a canning facility in American Samoa. As we previously reported, cannery officials stated that minimum wage increases were a significant factor in the closure of one of the two canneries in American Samoa but that other factors also contributed to the cannery’s closure. In addition to those mentioned above, cannery officials said that factors that contributed to the cannery’s closing included an attractive environment for investment in alternative locations and the high costs associated with environmental regulations. Although a new tuna facility operator acquired the facility that closed, operations planned in the short-term are more limited than those before the facility closed. 
Company officials indicated that they are considering using the plant as a logistics and storage facility for handling fresh, and potentially frozen, fish and for the company’s existing fleet in the Western and Central Pacific Ocean. These operations would require between 50 and 100 employees. The company will continue to evaluate and reconstruct the facility and has hired a small number of workers who had remained employed at the facility after its closure. As of March 2011, the company expected plant renovations to last 12 to 18 months, though some limited operations may begin before renovations are complete. However, company officials stated that all future employment and investment plans will depend on several factors, the most important of which are the scheduled minimum wage increases. Specifically, officials said the opportunity to produce canned tuna could depend on American Samoa’s labor cost relative to alternate locations.

Industry Experts Said Prices and Other Factors Are a Constraint as Tuna Canning Industry Becomes More Competitive

In addition to factors affecting American Samoa operations in particular, industry experts noted that the global tuna industry is changing in many ways. For example, various fishery management organizations and other parties have increased restrictions on fishing some tuna target species, including tuna used for canning, in the western and central Pacific Ocean. Additionally, experts and industry officials said price dynamics are a major constraint to the industry; as the industry becomes increasingly competitive, profit margins decrease. The highly competitive global market for tuna products makes it increasingly difficult to pass along higher labor and operating costs to consumers by raising prices. For example, industry officials note that it is difficult for companies to raise prices when supermarket brands offer consumers very low prices.
Growing supermarket and consumer demand for assurances of social and environmental responsibility also contributes to changing industry dynamics.

Comparison of Four Tuna Canning Business Models

Although American Samoa is located near rich fishing grounds, its labor costs are significantly higher than those in competing countries, both before and after the minimum wage increases. Cannery officials said that current operations in American Samoa were not competitive with other models. We compared the labor and tariff costs associated with alternate business models for tuna canning in order to illustrate how the costs differ under each estimated model. The following analysis provides cost estimates under four possible scenarios for cannery operations currently located in American Samoa, assuming constant total production under each model, and including two models presented in our previous report. It considers only labor costs and tariffs, in order to show the effect of variation in different countries. It excludes other associated costs, including transportation, refrigeration, opening of a new plant (if needed), and other costs associated with establishing multiple production locations. It also excludes nonwage labor costs, such as the costs of employer payroll tax contributions.

Model A (loining and canning located in American Samoa): Tuna processing currently done in American Samoa remains entirely in American Samoa. Canneries located in American Samoa hire local and foreign workers to loin (clean, cook, and cut) and can the fish. In addition, the plant processes some frozen loins imported from countries with lower wages. The canned tuna from American Samoa is exported directly to the United States and benefits from tariff-free access to the U.S. market. With an estimated workforce of 1,500 employees in American Samoa, the associated labor cost is $14.9 million in 2010 and $23.4 million in 2016, with zero tariff costs.
Model B (relocating loining to Thailand or another country with lower labor costs, and canning frozen loins in the U.S. 50 states): The loining operation—the most labor-intensive part of the operation—moves to low labor-cost countries, such as Thailand, Trinidad, Fiji, Mauritius, or Papua New Guinea, where the fish loin is frozen. The frozen fish is exported to the United States, where it is canned. The frozen fish carries a tariff of $11 per metric ton, and workers are employed in a low labor-cost country at $0.75 per hour. No workers remain in American Samoa cleaning fish, and 300 workers are employed in the U.S. 50 states at $14.00 per hour. The combined labor and tariff cost of this model is $11.4 million in both 2010 and 2016. Tuna facilities in American Samoa are currently among few in the United States that can meet the requirements of U.S. government contracts, many of which require U.S.-sourced and -processed fish. While facilities outside American Samoa may qualify for these contracts based on their location, it is unclear whether their production models meet the requirements, according to an industry expert. Facilities under this model might not meet the requirements of U.S. government contracts and could lose this business.

Model C (relocating all loining and canning to a tariff-free country): Loining is done in a country with zero tariffs on canned tuna exported to the United States. Workers are employed at $1 per hour. The basis for tariff-free access to the United States—the Generalized System of Preferences—expired at the end of 2010; however, the Office of the U.S. Trade Representative is supporting reauthorization in 2011. Under this model, the American Samoa cannery closes, and all 1,500 positions are relocated to a tariff-free country. The cost is $3.1 million for 2010 and 2016, assuming no wage increases in the tariff-free country. As with Model B, facilities under this model might not meet the requirements of U.S. government contracts and could lose this business.

Model D (hybrid, with one half of production, including for U.S. government contracts, located in American Samoa and the other half relocated to a tariff-free country): The American Samoa cannery continues to supply canned tuna for U.S. government contracts (20 percent of production from Model A), and another 30 percent of production remains in American Samoa. The remaining 50 percent of production moves to a country that exports canned tuna tariff-free to the United States. For this model, we assume that the workforce remaining in American Samoa will be 50 percent of the current total workforce, and the other 50 percent will be in a tariff-free country. The associated cost is $8.6 million in 2010 and $12.9 million in 2016, with zero tariff costs.

Considering only labor and tariff costs, figure 13 shows that a business model in which all loins are processed in American Samoa (Model A) has higher costs than the alternatives. The model that presents the highest combined labor and tariff cost savings involves moving operations to a tariff-free country and closing operations in American Samoa (Model C). This model would result in approximately 1,500 fewer jobs in American Samoa. The next most cost-saving option is to move 50 percent of production to a tariff-free country and keep 50 percent in American Samoa (Model D), while retaining eligibility for U.S. government contracts. This model would result in about 750 fewer jobs in American Samoa. Moving the loining operations to a country with lower wages (Model B) presents significant cost savings; however, under this scenario tariffs on imported frozen loins are imposed, and the canning process is done in the U.S. 50 states at higher wages than in competing tuna processing countries. Additionally, lease obligations in American Samoa and the cost of building new facilities may pose obstacles to near-term relocation.
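The labor-cost component of the model comparison reduces to one multiplication per site. A sketch assuming a 2,080-hour work year and a single wage per site (tariff costs and the report's exact wage distributions are omitted, so some figures reproduce the report's estimates only roughly):

```python
HOURS_PER_YEAR = 2080  # assumption: full-time annual hours

def labor_cost_millions(workers, hourly_wage):
    """Annual labor cost for one site, in millions of dollars."""
    return workers * hourly_wage * HOURS_PER_YEAR / 1e6

# Model A: all 1,500 workers in American Samoa
print(round(labor_cost_millions(1500, 4.76), 2))  # 14.85 -- report: $14.9 million (2010)
print(round(labor_cost_millions(1500, 7.25), 2))  # 22.62 -- near the report's $23.4 million (2016)
# Model C: all 1,500 positions in a tariff-free country at $1 per hour
print(round(labor_cost_millions(1500, 1.00), 2))  # 3.12 -- report: $3.1 million
# Model D (2016): 750 workers in American Samoa, 750 in a tariff-free country
print(round(labor_cost_millions(750, 7.25) + labor_cost_millions(750, 1.00), 2))  # 12.87 -- report: $12.9 million
```

Model B is omitted from the sketch because its total also depends on the tariff per metric ton and the tonnage of frozen loins, which the report does not break out.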
While cannery company officials and industry experts continue to report that American Samoa’s competitive advantage in the global tuna canning industry is decreasing, they have also stated that the ability to qualify for U.S. government contracts is one of the few remaining factors making American Samoa an attractive location for tuna canning. Although the comparison of labor and tariff costs under different business models shows the greatest savings by moving operations to a tariff-free country and closing operations in American Samoa (Model C), operations under this model would lose eligibility for U.S. government contracts for canned tuna. In addition, savings from moving the loining operations to a country with lower wages (Model B) also would be partially offset by the loss of U.S. government contracts. The model moving 50 percent of production to a tariff-free country and keeping 50 percent in American Samoa (Model D) would retain eligibility for these contracts. In discussion groups, most participants working in the tuna canning industry said they opposed further minimum wage increases. However, some participants supported the increases, especially to help with cost-of-living increases.

Job insecurity. More tuna canning workers expressed concern over job security than favored a minimum wage increase with the potential for subsequent layoffs. Many workers said that their current wages are enough and that they prefer to remain at the current wage and keep their jobs. In addition, participants said that they fear the remaining cannery will close with more minimum wage increases, causing more job loss.

Minimum wage increase delays. Many participants supported the delays to the 2010 and 2011 minimum wage increases. However, some said they had looked forward to the 2010 minimum wage increase and were disappointed to not receive an increase after they had expected it. A few said they were tired of the process of considering the minimum wage increases.
Some supported waiting until 2012 to make a decision about future increases.

Cost of living. Many participants said that the cost of living is increasing substantially, including the prices of bus fare, food, water, electricity, and health care. Some of these workers said that the cost of living increases as the minimum wage increases.

Cannery closure. Participants are concerned about the spillover effects of cannery closures and layoffs on the rest of the American Samoa economy. They said that the economy and other businesses rely on the tuna canning industry and will suffer without canneries. Participants noted that there is high unemployment in American Samoa and that they fear additional unemployment.

Reduced benefits and work hours. Participants reported that their benefits had been reduced, including paid holidays and vacations. In addition to reductions in benefits, participants are concerned that future wage increases will mean a reduction in hours.

Foreign workers. Discussion group participants noted that workers from independent Samoa have fewer options for jobs and benefits. They said that some who were laid off have stayed in American Samoa and others have returned home.

Discussion group participants outside the tuna canning industry shared mixed views on the minimum wage increases. Workers who had been laid off expressed more support for the minimum wage increases than did employed cannery workers, though some were concerned about job loss and availability. Like discussion group participants in the tuna canning industry, participants outside the cannery fear that the remaining cannery will close with more minimum wage increases and that other companies will not invest in American Samoa. Participants said it is hard to find jobs and that American Samoa needs new jobs. In addition to noting that the cost of living is increasing, participants also said they thought that enrollment in social services is increasing.
Participants said that people leave American Samoa in difficult times, but many return. The text box lists some of the comments by discussion group participants.

American Samoa Workers’ Views Based on Discussion Groups

“I’m scared of the wage increases because I might lose my job again.”
“It’s better to have something than nothing, better to have a job than none. What’s the point of a minimum wage increase if you lose your job?”
“What we have now is enough. Add 50 cents and we lose our jobs or the company closes. I don’t want to lose my job.”
“It’s a very good idea for this island and us people to stop minimum wage for this year and next year.”
“It’s disappointing to think you’re going to get an increase and then not get it.”
“As long as I have a job, I don’t mind the delay. I’ll wait until 2012.”
“The cost of food is sky high, and water and electricity is high also.”
“As increases in wages come, so do price increases in everything—food, power.”
“Minimum wage is a problem—it is too high, and companies are moving out.”
“I think minimum wage is the reason companies are failing.”
“The economy here depends on the cannery. Without it, the economy falls apart.”

Reduced benefits and work hours

“There has been cost-cutting. They got rid of benefits. There’s no annual leave or vacations.”
“If we have another 50 cent increase, hours are reduced—no more eight hours a day, it’ll be six hours a day. So if the rate goes up, it doesn’t matter.”
“If there’s no job for me, because I’m from Western Samoa, where can I find work and money for my family?”
“Those that lost their jobs after [the cannery] closed are staying at home, doing nothing, went back home, or they’re on social services.”

CNMI employment fell by about 13 percent from 2008 to 2009 and by about 35 percent from 2006 to 2009, largely reflecting the closure of the CNMI’s last remaining garment factories.
In addition, we estimate that less than 1,000 temporary federal jobs funded beginning in June 2009 will end when federal funding is no longer available. Inflation-adjusted average earnings of CNMI workers who maintained employment rose by 3 percent from 2008 to 2009 and remained largely unchanged, with a slight drop of 0.5 percent, from 2006 to 2009, according to CNMI government data. In addition, over both periods, the minimum wage increased by significantly more than inflation. In discussion groups, private sector employers said minimum wage increases imposed additional costs during a time in which multiple factors made it difficult to operate. According to CNMI government payroll data, about 17 percent of government workers are paid at or below $7.25 and would be affected by the minimum wage increases by 2016. In the tourism industry, close to three-quarters of hourly-wage workers in June 2010 were at the current minimum wage, and future scheduled increases through 2016 would affect 95 percent of those workers. Tourism questionnaire employers reported that they took cost-cutting actions from June 2009 to June 2010, including reducing hours and freezing hiring; employers also reported plans to take the same types of actions by early 2012, as well as laying off workers. Few employers—weighted by numbers of employees—attributed their past cost-cutting actions largely to the minimum wage increases, and one-half or less did so for each of the planned actions. Due to the decline in visitors and to competition from other destinations, hotels have generally absorbed minimum wage costs rather than raise room rates, and they have postponed other investments and renovations. Both visitor arrivals and flight seats available to the CNMI declined from 2005 to 2010. Industry data show that from 2006 to 2010 the CNMI hotel occupancy rate remained between 58 and 65 percent, and inflation-adjusted room rates declined.
If observed trends continue, payroll will represent an increasing share of total operating cost for hotels in the CNMI, due to the minimum wage increases. In discussion groups, some tourism employers and managers expressed concern about the minimum wage increases, but others said the minimum wage increases were needed and manageable and that the primary difficulty was the CNMI tourism industry’s general decline. Workers participating in our CNMI discussion groups expressed mixed views regarding the minimum wage increases and said they would like pay increases but were concerned about losing jobs and work hours. Overall CNMI employment fell substantially from 2006 to 2009, with drops in the numbers employed in every year, based on CNMI tax data. As shown in figure 14, based on CNMI tax data, from 2008 through 2009 the total number of people employed fell by about 13 percent. For the entire period from 2006 through 2009, the number employed fell 35 percent. A large part of this decline, especially early in this period, is likely attributable to the closure of the CNMI’s last remaining garment factories, which employed many foreign workers. Because CNMI tax data are not available for 2010, we are unable to report on the overall level of employment for the year. Wage data from the 12 respondents to our 2010 tourism questionnaire show that hourly-wage employment in the tourism industry (including hotels and other employers, such as tour operators) fell 8 percent from 2009 to 2010 (from 1,703 to 1,567) and fell by 14 percent over the period from 2007 to 2010 (from 1,827 to 1,567). In addition, we estimated that less than 1,000 jobs funded by the U.S. Census Bureau and Recovery Act funds were temporary and will end when federal funding is no longer available. As a result, counts of the total number employed during this period will be higher than the number of long-term positions. 
The temporary jobs were funded beginning in June 2009 and included Census enumerators and managers to assist with collection of 2010 Decennial Census data, as well as jobs in infrastructure and other areas supported by Recovery Act funds. In interviews and discussion groups, private sector employers reported declines in employment due to layoffs and hiring freezes, as well as cuts in hours and benefits. Many discussion group participants said the minimum wage increases were one of multiple factors in a “perfect storm” that made it difficult to operate businesses in the CNMI. They expressed concern about increases in crime and in poverty, including people without water and power. They said the departure of the garment industry and the inability to replace the industry had initiated a downward economic spiral that hurt businesses, including by contributing to higher shipping costs and reduced flights. The tourism industry has declined, and population loss from people leaving the CNMI also has resulted in decreased sales. In addition, businesses faced high and increasing costs of inputs, including power and other utilities, gas, and food. They said the legitimate economy was shrinking, while the underground economy—including some employers that do not pay the minimum wage—was growing. Private sector employers expressed particular concerns about changes to immigration law and incomplete regulations, which created uncertainty regarding access to needed foreign workers and to visitors. In general, they said minimum wage increases imposed additional costs at a particularly difficult time for CNMI businesses. They also expressed concerns about instability and possible tax increases from the local government, and some said that the federal government had made insufficient efforts to improve living conditions and to collect and monitor data on the CNMI. 
According to CNMI government payroll data, about 17 percent of government workers are paid at or below $7.25 and would be affected by the minimum wage increases by 2016. In addition, after a partial government shutdown in October 2010, the CNMI government made significant cuts to government employees’ work hours. Average earnings for those who maintained employment rose from 2006 to 2009, but prices increased by about the same amount. As shown in figure 15, based on CNMI tax and consumer price data, from 2008 to 2009 (the most recent year available) average inflation-adjusted earnings rose by 3 percent. This increase resulted from an increase in average earnings of 7 percent and an increase in prices of 3.5 percent. For the period from 2006 to 2009, average inflation-adjusted earnings remained largely unchanged, with a slight drop of about 0.5 percent, due to a rise in average annual earnings of about 19 percent and a 19.5 percent increase in prices. Although CNMI tax data do not allow for a direct comparison of average and minimum-wage annual earnings or for tracking the earnings of workers who lost their jobs, the hourly wage of minimum wage workers increased by more than inflation. The inflation-adjusted earnings of minimum wage workers who retained their jobs and work hours rose by about 9 percent from 2008 to 2009 and by about 25 percent for the entire period from 2006 to 2009.

Minimum Wage Increases in 2007-2010 Increased Median Wage for Tourism Industry Employees

From June 2007 to June 2010, the median hourly wage in the CNMI tourism industry rose from $3.65 to $4.60, a 26 percent increase, according to our questionnaire responses (see table 8). During this period, the minimum wage increased from $3.05 to $4.55, an increase of 49 percent. Because our questionnaire collected wage data as of June of each year, these data cover the first three minimum wage increases (in 2007, 2008, and 2009) but do not reflect the September 2010 minimum wage increase.
Minimum Wage Increases in 2007-2010 Narrowed Wage Gap between Lower- and Higher-Paid Workers Employed by Questionnaire Respondents in Tourism Industry

Responses to our questionnaire indicate that the timing of minimum wage increases corresponded to narrowing of the gap between the wages of lower- and higher-paid workers in the CNMI’s tourism industry (see fig. 16). Specifically, the gap between the wages of the lowest- and highest-paid hourly-wage workers of hotels and other tourism employers dropped from $1.35 in June 2007 to $0.65 in June 2010, a decline of 52 percent. Some hotel and other tourism employers said in interviews that the compression of wages had resulted in lower morale for more senior employees who now earned little more than new employees. Other employers told us that their voluntary efforts to provide pay increases to workers above the minimum wage had increased the total costs of the minimum wage increases.

Minimum Wage Increases in 2010-2018 Would Affect Wages of Almost All Workers in Tourism Industry

As the minimum wage increases continue, they will affect a growing percentage of workers in the CNMI’s tourism industry. Based on questionnaire responses about hotel and other tourism workers’ wages as of June 2010, 73 percent of hourly-wage workers were at the current minimum wage. The future minimum wage increases would affect the wages of 95 percent of current workers by the time the minimum wage reaches $7.25 in 2016. By 2016, the extra annual cost added by minimum wage increases after June 2010 (reflecting the 2009 increase) would be $4,707 per worker (see table 9). We identified the additional cost by calculating the difference between the cost per worker in June 2010 and the cost per worker through 2016, based on the scheduled minimum wage increases and averaged across all workers.
Tourism Employers Reported Cutting Costs and Raising Prices from 2009 to 2010, but Few Attributed Their Actions Largely to the Minimum Wage Increases

Hotel and other employers in the tourism industry reported in our questionnaire that they had taken cost-cutting actions, including those affecting workers’ income or benefits, and had raised prices from 2009 to 2010. While few—weighted by numbers of employees—attributed their actions largely to the minimum wage increases, some attributed hiring freezes to the minimum wage increases.

Cost-Cutting Actions Affecting Workers’ Income or Benefits in 2009-2010:

Reduced overtime hours. Employers representing 96 percent of all workers employed by tourism questionnaire respondents reported that they had decreased overtime work hours for hourly-wage workers. Of these, employers representing 1 percent of workers employed by these respondents attributed the action largely to the minimum wage increases.

Reduced regular hours. Employers representing 91 percent of all workers employed by tourism questionnaire respondents reported having reduced regular work hours for hourly-wage workers. Of these, employers representing 1 percent of workers employed by these respondents attributed the action largely to the minimum wage increases.

Froze hiring. Employers representing 79 percent of all workers employed by tourism questionnaire respondents reported that they had implemented a hiring freeze. Of these, employers representing 40 percent of workers employed by these respondents attributed the action largely to the minimum wage increases.

Decreased benefits. Employers representing 50 percent of all workers employed by tourism questionnaire respondents reported that they had decreased the level of hourly-wage workers’ benefits, while employers representing 56 percent reported that they had decreased the level of salaried workers’ benefits.
Of those that reported reducing benefits of hourly-wage workers, employers representing 3 percent of workers employed by these respondents attributed the action largely to the minimum wage increases. Of those that reported reducing benefits of salaried workers, employers representing 2 percent of workers employed by these respondents attributed the action largely to the minimum wage increases.

Additional Cost-Cutting Actions in 2009-2010:

Implemented other labor- and cost-saving strategies or technology. Employers representing 95 percent of all workers employed by questionnaire respondents reported that they had implemented other labor- and cost-saving strategies or technology. Of these, employers representing 4 percent of workers employed by these respondents attributed the action largely to the minimum wage increases.

Reduced capacity or services. Employers representing 63 percent of all workers employed by questionnaire respondents reported that they had reduced their operating capacity or customer services. Of these, employers representing 5 percent of workers employed by these respondents attributed the action largely to the minimum wage increases.

Price Increases in 2009-2010:

Raised prices. Employers representing 76 percent of all workers employed by tourism questionnaire respondents reported that they had raised prices of goods or services. Of these employers, none attributed the action largely to the minimum wage increases.

Tourism Employers Reported Plans to Take Cost-Cutting Actions by Early 2012, and One-Half or Less Attributed Each Action Largely to the Minimum Wage Increases

Hotel and other employers in the tourism industry reported in our questionnaire plans to take additional cost-cutting actions in the next 18 months, by early 2012. More employers—weighted by numbers of employees—attributed their future actions than their past actions to the minimum wage increases.
Specifically, one-half or less attributed each planned action largely to the minimum wage increases.

Planned Cost-Cutting Actions Affecting Workers’ Income or Benefits:

Reduce overtime hours. Employers representing 93 percent of all workers employed by questionnaire respondents reported planning to decrease overtime work hours for hourly workers. Of these, employers representing 35 percent of workers employed by these respondents attributed the planned action largely to the minimum wage increases.

Reduce regular hours. Employers representing 87 percent of all workers employed by tourism questionnaire respondents reported planning to reduce regular work hours for hourly-wage workers. Of these employers, none attributed the planned action largely to the minimum wage increases.

Freeze hiring. Employers representing 83 percent of all workers employed by tourism questionnaire respondents reported planning to freeze hiring. Of these employers, none attributed the planned action largely to the minimum wage increases.

Decrease benefits. Employers representing 63 percent of all workers employed by questionnaire respondents reported planning to decrease benefits of both hourly and salaried workers. Of those that reported planning to reduce benefits of hourly-wage workers, employers representing 1 percent of workers employed by these respondents attributed the planned action largely to the minimum wage increases. Of those that reported planning to reduce benefits of salaried workers, none attributed the planned action largely to the minimum wage increases.

Lay off workers. Employers representing 62 percent of all workers employed by tourism questionnaire respondents reported planning to lay off hourly-wage workers, and employers representing 61 percent planned to lay off salaried workers. Of these employers, none attributed the planned action largely to the minimum wage increases.

Additional Planned Cost-Cutting Actions:

Implement other cost-saving strategies.
Employers representing 88 percent of all workers employed by questionnaire respondents reported planning to implement other cost-saving strategies. Of these, employers representing 44 percent of workers employed by these respondents attributed the planned action largely to the minimum wage increases.

Implement labor-saving strategies or technology. Employers representing 81 percent of all workers employed by questionnaire respondents reported planning to implement labor-saving strategies or technology. Of these, employers representing 40 percent of workers employed by these respondents attributed the planned action largely to the minimum wage increases.

Reduce capacity or services. Employers representing 62 percent of all workers employed by tourism questionnaire respondents reported planning to reduce operating capacity or customer services. Of these, employers representing 51 percent of workers employed by these respondents attributed the planned action largely to the minimum wage increases.

Raise prices. Employers representing 80 percent of all workers employed by tourism questionnaire respondents reported planning to raise prices of goods or services. Of these employers, none attributed the planned action largely to the minimum wage increases.

Tourism Industry Employers Attributed Past and Planned Actions Largely to Factors Other than Minimum Wage Increases

Hotel and other tourism industry questionnaire respondents reported that factors other than the minimum wage increases largely contributed to their past and planned actions. For example, employers representing 32 percent of workers employed by questionnaire respondents cited changes to U.S. immigration laws, and employers representing 55 percent of workers cited the decrease in numbers of customers. Employers noted that these factors contributed to their future plans as well. Specifically, employers representing 57 percent of workers employed by questionnaire respondents cited changes to U.S.
immigration laws, and employers representing 25 percent of workers cited the decrease in numbers of customers. Due to the decline in visitors and to competition from other destinations, hotels have generally absorbed minimum wage costs rather than raising room rates, and they have postponed other investments and renovations that could make their properties more attractive to potential visitors. Both visitor arrivals and flight seats available to the CNMI declined from 2005 to 2010, particularly those from Japan. Industry data show that from 2006 to 2010 the CNMI hotel occupancy rate remained between 58 and 65 percent, and inflation-adjusted room rates declined. If observed trends in room and occupancy rates continue, payroll will represent an increasing share of total operating costs for hotels in the CNMI, due to the minimum wage increases. Payroll costs as a percentage of total operating costs will increase from approximately 29 percent in 2010 (with minimum wage increases representing about 1 percent of total operating costs) to 34 percent in 2016 (with minimum wage increases representing 8 percent), assuming other costs remain constant. In discussion groups, some tourism employers and managers expressed concern about the minimum wage increases, but others said the minimum wage increases were needed and manageable and that the primary difficulty was the CNMI tourism industry’s general decline.

Both CNMI Visitor Arrivals and Flight Seat Availability Have Declined

Visitor arrivals to the CNMI have decreased 31 percent—from 529,557 in 2005 to 368,186 in 2010. Seats available on flights to the CNMI have decreased 27 percent—from 740,673 in 2005 to 541,399 in 2010, as shown in figure 17. Arrivals accounted, on average, for 71 percent of overall flight seat capacity during this period. Airline service to the CNMI has fluctuated in recent years and remains unpredictable. For example, in September 2005, Japan Airlines discontinued service to the CNMI.
Other flights have been added and subsequently removed; for example, Northwest Airlines added routes from Narita and Osaka, Japan, to the CNMI in 2005, but the Osaka flight was suspended the next year. Flights from these cities are now available only seasonally, and the local government passed a bill providing financial incentives to travel agents in an effort to stabilize this service. A new airline, Fly Guam, established flights between the CNMI and Hong Kong in March 2011. Because of the lack of regularly scheduled flights to and from China, Chinese visitors arrive largely on charter flights. The CNMI’s greatest declines in both visitors and flight seats by country were from Japan, which represents the largest share of visitors of any country. The Japanese market share dropped from 71 percent of the tourist arrivals in 2005 to 50 percent in 2010. In particular, Japanese arrivals decreased 51 percent from 2005 to 2010 (from 376,263 to 182,820). Korean arrivals increased from 65,049 in 2005 to 108,079 in 2010, and the Korean market share increased from 12 percent to 29 percent in the same period. Some visitors may arrive on airlines to or from countries other than their own. For example, Korean visitors may arrive on flights from Japan. In addition, there are no flights from Russia to the CNMI; Russian travelers arrive on flights through other countries. China and Russia still have a combined share of less than 10 percent of the total tourist arrivals, but they are emerging markets, and Russia accounts for a disproportionate percentage of tourism expenditures.
Due Partly to Stagnant Occupancy Rates and Declines in Inflation-Adjusted Room Rates, CNMI Hotels Have Absorbed Costs of Minimum Wage Increases

Due to competition from other vacation destinations, such as Guam, declining visitor arrivals, and occupancy rates that have remained between 58 and 65 percent, economic reasoning suggests that hotels in the CNMI have limited ability to raise prices, as reflected in the recent stagnation of nominal hotel room rates and the decline in inflation-adjusted room rates. If CNMI hotels had more flexibility in pricing, some of the costs of minimum wage increases could be passed on to consumers. However, due to the decline in visitors, hotels have generally absorbed these costs, and hotel managers said they have postponed other investments and renovations that could make their properties more attractive to potential visitors.

Occupancy. Data from the Hotel Association of the Northern Mariana Islands, which covers 12 CNMI hotels, show that from 2009 to 2010, the overall occupancy rate increased by 7.5 percent, as shown in table 10. For the overall period from 2006 through 2010, the occupancy rate showed no significant change, decreasing slightly by 1.5 percent, and remained between 58 and 65 percent.

Room rates. Room rates decreased by 8 percent from 2009 to 2010, as shown in table 10. For the overall period from 2006 to 2010, room rates decreased slightly, by 2 percent. When adjusted for inflation in the CNMI, real room rates declined by almost 12 percent from 2006 to 2009.

Number of workers. Our questionnaire responses show that for the period from 2007 to 2010, the number of hourly hotel workers declined by 13 percent.

Scheduled Minimum Wage Increases and Payroll Will Represent an Increasing Percentage of Total Operating Costs

If observed trends in room rates and occupancy rates continue, payroll will represent an increasing share of total operating costs for hotels in the CNMI, due to the minimum wage increases.
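The payroll-share arithmetic behind this projection can be sketched as follows. The 28 percent baseline and the 1 percent and 8 percent added-cost shares come from the estimates in this section; the function, the 0.5 normalization to $1.00 of 2009 operating costs, and the algebra for backing out the added amounts are ours:

```python
def payroll_share(base_payroll, base_other_costs, added_wage_cost):
    """Payroll as a share of total operating costs after minimum wage
    increases are added to payroll, with other costs held constant."""
    payroll = base_payroll + added_wage_cost
    return payroll / (payroll + base_other_costs)

# 2009 baseline, normalized to $1.00 of operating costs: payroll is 28%.
payroll, other = 0.28, 0.72

# If added wage costs equal about 1% of the new cost total in 2010 and
# about 8% in 2016, the added amounts on this baseline solve to
# 0.01/0.99 and 0.08/0.92 of the 2009 total, respectively.
for year, added in [(2010, 0.01 / 0.99), (2016, 0.08 / 0.92)]:
    print(year, round(payroll_share(payroll, other, added), 2))
# prints: 2010 0.29, then 2016 0.34
```

Because other costs are held constant, the share rises both from the larger payroll numerator and from the larger cost denominator, matching the approximately 29 percent (2010) and 34 percent (2016) figures reported for the responding hotels.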
We estimate that for the hotels that responded to our questionnaire, the minimum wage increases will raise average annual payroll costs above their 2009 levels by approximately $160,528 in 2010 and $983,076 in 2016. As a result, payroll costs as a percentage of total operating costs will increase from approximately 28 percent in 2009, to 29 percent in 2010 (with minimum wage increases representing about 1 percent of total operating costs), to 34 percent in 2016 (with minimum wage increases representing almost 8 percent). Figure 18 shows the estimated average impact of the minimum wage increases on these hotels’ payroll costs in 2010 and 2016 (assuming that the number of employees and other operating costs remain constant).

Hotel and Other Tourism Employers Said Multiple Factors Made It Difficult to Attract Increased Numbers of Visitors

In discussion groups, some hotel and other tourism employers and managers expressed concern about the minimum wage increases, saying that the CNMI competed with similar tourism destinations with lower wages and was very different from the U.S. economy. Others said the minimum wage increases were needed and manageable and that the primary difficulty was the general decline in the CNMI tourism industry. Some said they had taken steps to reduce regular and overtime hours—including cutting operating hours—and to reduce the cost of benefits. They also described other cost-saving measures, including consolidating office space and cutting utility costs by reducing phone lines. Employers said CNMI tourism business had decreased, with fewer visitor arrivals and expenditures, including substantial loss of the Japanese market. They said that too few flights from key tourism countries and frequent changes in flight availability deterred visitors and led travel agents to send clients to other destinations.
In addition, employers expressed concern about whether the CNMI tourism industry would retain access to foreign workers, including those with needed language and other skills, and access to visitors from China and Russia under U.S. immigration law. They expressed concern that the quality of the destination had declined and that the CNMI needed investment in new or updated attractions and hotel renovations. However, they said uncertainty about immigration rules, flight availability, and visitor arrivals had discouraged new investment. Employers said the CNMI needed more tourism promotion, possibly including incentives for airlines and assistance from the federal government.

Workers participating in our CNMI discussion groups expressed mixed views regarding the minimum wage increases and said they would like pay increases but were concerned about losing jobs and work hours. Workers in the tourism industry generally expressed greater concern about the minimum wage increases than other workers and unemployed workers.

Price increases. Participants said they wanted to receive minimum wage increases to help them meet rising prices, including for utilities such as power and water and for food and other consumer goods. However, they said the minimum wage increases had not kept pace with changes in the price of goods, and some said the minimum wage increases had not made a difference.

Job insecurity. Workers were concerned about the impact of the wage increases on their ability to find and retain jobs, which was already difficult. They said they had observed that while some workers received pay increases, others lost their jobs or work hours. Several said they would rather keep their jobs and work hours and stay at the current wage. They also said that many people were leaving the CNMI to find work.

Poverty and crime. Some said that with or without the minimum wage, people in the CNMI were suffering from poverty.
People who have lost jobs or had their hours reduced rely on food stamps and other benefits, though some said they would like to find jobs rather than relying on benefits. One said he planned to find and sell cans from the street to generate income. Participants also expressed concern about rising crime rates resulting from decreased employment, and several said they had been victims of theft.

Immigration. Participants said that both workers and employers were worried about the transition to U.S. immigration law, including increased immigration fees and the status of foreign workers.

The text box lists some of the comments by discussion group participants.

CNMI Workers’ Views Based on Discussion Groups

“Groceries here are pretty expensive. Prices keep going higher and higher.”
“It’s very hard to pay for everything just with our salary. Power is expensive.”
“The minimum wage that was raised is good for people working. We want to try that minimum wage ourselves.”
“Every time the minimum wage goes up, I notice stores raise the price of commodities.”
“Minimum wage going up to $7.25 is great for workers, but at the same time is a big burden to employers.”
“Minimum wage increases are useless. They cut hours so, in the end, our paychecks are the same.”
“When minimum wage increased I was laid off and up to now have not been able to find a job.”
“With minimum wage some are getting a benefit of higher wages, but others are losing their jobs.”
“I’d rather wait for my increase than be laid off.”
“Crime is skyrocketing—I’m not ok with that, but it’s because of the cost of living going up.”
“Nothing changes, even with the delay in the minimum wage. People are suffering.”
“Federal immigration is hurting foreign workers now that we have to pay fees to go back.”
“Employers and employees are scared of the transition in the next few years. They’re all just waiting.”

1. The Department of Commerce provided technical comments in addition to the signed letter.
In discussions with the Department of Commerce, we agreed to include only the signed letter and not the technical comments.

1. The American Samoa government developed its own estimates of employment loss based on the information included in our report. It concluded that American Samoa employment fell by 3,737 in 2009 and by 7,993 in 2010-2011. Our report does not include an estimate of total employment losses in 2009 because the data come from multiple sources that cannot be combined. Specifically, it is unclear to what extent the SSA data reflect some losses of cannery jobs in addition to other job losses, so these cannot be added to cannery job losses from our industry questionnaire. In addition, the SSA data count the number of employed people, while the questionnaires count the number of jobs held at each firm. It is possible that the same person could hold positions at multiple firms. Moreover, the SSA data include workers who had earnings in American Samoa at any point in the year, while the questionnaire reflects the number of jobs in the tuna canning industry as of June of each year. Furthermore, because many of the temporary federal jobs began after our SSA counts of employment in American Samoa, and because workers can hold multiple jobs, it is unclear how the temporary federal jobs will affect employment counts based on SSA data.

2. The American Samoa government recommended in its written comments and in a January 2011 letter that GAO explore alternative methods for setting minimum wage levels in American Samoa. The government provided several alternative methods for consideration. While we considered these suggestions and summarized them in the report, our research objectives and methodology were developed in response to the legislative mandate and in discussions with Congressional requesters.
These objectives and methodology were designed to provide sufficient information and analysis to support congressional deliberation on minimum wage in American Samoa and the CNMI.

3. The American Samoa government provided statements comparing the economy and minimum wage increases in American Samoa to those in the U.S. states. We agree that the minimum wage applies to a much larger proportion of American Samoa (and CNMI) workers than of workers in the U.S. states. Our report states, “In our April 2010 report, we found that before the first minimum wage increase in July 2007, 37 percent of all workers and about three-quarters of private sector workers employed by American Samoa questionnaire respondents earned wages close enough to the minimum wage to be directly affected by the first increase. In the CNMI, 18 percent of all workers and about a third of private sector workers were directly affected by the first increase. For both areas, we found that most private sector workers would be directly affected by the increases once the minimum wage reached $7.25. In contrast, according to Bureau of Labor Statistics estimates, in 2006 approximately 2.2 percent of all hourly workers in the U.S. states earned the federal minimum wage of $5.15 or less.” The report also states, “Current federal data on income and poverty levels in American Samoa do not exist; however, the most recent available data show that American Samoa had lower income and higher poverty rates than the mainland United States.”

1. The CNMI government provided information on decreases in CNMI visitor arrivals from Japan following the earthquake and tsunami in Japan. We have added this information to the existing statements on this topic in our report.

2. The CNMI government questioned the finding that, for some key past and future actions, such as reducing regular work hours and freezing hiring, no CNMI employers attributed the actions to the minimum wage increases.
We note that, as stated in the report, we present the weighted percentage of employers who attributed each action to the minimum wage increases “to a large extent” (not those who attributed the action to the minimum wage increases “to a small extent” or “to a moderate extent”).

3. The CNMI government cited several limitations of the tourism industry questionnaire, as we described in this report, and recommended that the reporting method be improved to gather a clearer picture regarding minimum wage increases and to improve data integrity. However, for any questionnaire based on self-reported data, we cannot eliminate the possibility that some employers’ views of the minimum wage increases may have influenced their responses.

4. The CNMI government stated that the analyses of CNMI residents’ living standards should be strengthened, as required by congressional mandate. Although the original mandate (Pub. L. No. 111-5, § 802, 123 Stat. 115, 186, Feb. 17, 2009) specifically required us to study minimum wage effects on living standards, the current mandate (Pub. L. No. 111-244, 124 Stat. 2618, Sep. 30, 2010) does not. However, the report includes qualitative findings related to living standards based on discussion groups with employers and with workers, as well as quantitative findings on the inflation-adjusted earnings of average and minimum wage workers.

In addition to the contacts named above, Emil Friberg, Assistant Director; Mark Speight, Assistant General Counsel; Marissa Jones, analyst-in-charge; Ashley Alley; Pedro Almoguera; Benjamin Bolitzer; David Dayton; Etana Finkler; Jill Lacey; Luann Moy; Nalylee Padilla; and Vanessa Taylor made key contributions to this report. Technical assistance was provided by Holly Dye, Patrick Dudley, Kay Halpern, Dave Hancock, Michael Hoffman, Rhonda Horried, Michael Kendix, Courtney LaFountain, John Mingus, Jena Sinkfield, and Wayne Turowski.
| In 2007, the United States enacted a law incrementally raising the minimum wages in American Samoa and the Commonwealth of the Northern Mariana Islands (CNMI) until they equal the U.S. minimum wage. American Samoa's minimum wage increased by $.50 three times, and the CNMI's four times before legislation delayed the increases, providing for no increase in American Samoa in 2010 or 2011 and none in the CNMI in 2011. As scheduled, American Samoa's minimum wage will equal the current U.S. minimum wage of $7.25 in 2018, and the CNMI's will reach it in 2016. Recent economic declines in both areas reflect the closure of one of two tuna canneries in American Samoa and the departure of the garment industry in the CNMI. GAO is required to report in 2010, 2011, 2013, and biennially thereafter on the impact of the minimum wage increases. This report updates GAO's 2010 report and describes, since the increases began, (1) employment and earnings, and (2) the status of key industries. GAO reviewed federal and local information; collected data from employers through a questionnaire and from employers and workers through discussion groups; and conducted interviews during visits to each area. In American Samoa, employment fell 19 percent from 2008 to 2009 and 14 percent from 2006 to 2009. Data for 2010 total employment are not available. GAO questionnaire responses show that tuna canning employment fell 55 percent from 2009 to 2010, reflecting the closure of one cannery and layoffs in the remaining cannery. Average inflation-adjusted earnings fell by 5 percent from 2008 to 2009 and by 11 percent from 2006 to 2009; however, the hourly wage of minimum wage workers who remained employed increased by significantly more than inflation. Private sector officials said the minimum wage was one of a number of factors making business difficult. 
In the tuna canning industry, future minimum wage increases would affect the wages of 99 percent of hourly-wage workers employed by the two employers included in GAO's questionnaire. The employers reported taking cost-cutting actions from June 2009 to June 2010, including laying off workers and freezing hiring. The employers attributed most of these actions largely to the minimum wage increases. Cannery officials expressed concern in interviews about American Samoa's dwindling global competitive advantage. Available data suggest that relocating tuna canning operations to a tariff-free country with lower labor costs would significantly reduce operating costs but reduce American Samoa jobs; however, maintaining some operations in American Samoa would allow continued competition for U.S. government contracts. Some workers said they were disappointed by the 2010 minimum wage increase delay; however, more workers expressed concern over job security than favored a wage increase with potential for layoffs. In the CNMI, employment fell 13 percent from 2008 to 2009 and 35 percent from 2006 to 2009. Average inflation-adjusted earnings rose by 3 percent from 2008 to 2009 and remained largely unchanged from 2006 to 2009. Over the same periods, the hourly wage of minimum wage workers who remained employed increased by significantly more than inflation. In discussion groups, private sector employers said minimum wage increases imposed additional costs during a time in which multiple factors made it difficult to operate. In the tourism industry, scheduled minimum wage increases through 2016 would affect 95 percent of workers employed by questionnaire respondents. Tourism employers reported that they took cost-cutting actions from June 2009 to June 2010 and planned to take additional actions, including laying off workers. Few of these tourism employers attributed past actions largely to the minimum wage increases, and one-half or less did so for each of the planned actions.
Available data suggest that hotels generally absorbed minimum wage costs rather than raise room rates. Hotel payroll will represent an increasing share of total operating costs due to the minimum wage increases. In discussion groups, some tourism employers expressed concern about the minimum wage increases, but others said the increases were needed and manageable and that the primary difficulty was the CNMI tourism industry's decline. Workers participating in GAO's CNMI discussion groups expressed mixed views regarding the minimum wage increases and said they would like pay increases but were concerned about losing jobs and work hours. GAO shared the report with relevant federal agencies and the governments of American Samoa and the CNMI. While generally agreeing with the findings, they raised a number of technical concerns that have been incorporated as appropriate. |
The DI and SSI programs are the two largest federal programs providing assistance to people with disabilities. DI is the nation’s primary source of income replacement for workers with disabilities who have paid Social Security taxes and are entitled to benefits. The DI program also pays benefits to disabled dependents of disabled, retired, or deceased workers—disabled adult children and disabled widows and widowers. SSI provides assistance to disabled people who have a limited or no work history and whose income and resources are below specified amounts. State disability determination service (DDS) agencies, which are funded by SSA, decide whether individuals applying for DI or SSI benefits are disabled. Federal laws specify those who must receive CDRs. The 1980 amendments to the Social Security Act require that SSA review at least every 3 years the status of DI beneficiaries whose disabilities are not permanent to determine their continuing eligibility for benefits. The law does not specify the frequency of the required reviews for beneficiaries with permanent disabilities. The Social Security Independence and Program Improvements Act of 1994 requires that SSA conduct CDRs on one-third of the SSI beneficiaries who reach age 18 and a minimum of 100,000 additional SSI beneficiaries annually in fiscal years 1996 through 1998. The 1996 amendments to the Social Security Act require that SSA conduct CDRs (1) at least every 3 years for children under age 18 who are likely to improve or, at the option of the Commissioner, who are unlikely to improve and (2) on low-birth-weight babies within their first year of life. The 1996 legislation also requires disability eligibility redeterminations, instead of CDRs, for all 18-year-olds beginning on their 18th birthdays, using adult criteria for disability.
State DDS agencies set the frequency of CDRs for each beneficiary according to his or her outlook for medical improvement, which is determined on the basis of impairment and age. Beneficiaries expected to improve medically, classified as “medical improvement expected” (MIE), are scheduled for review at 6- to 18-month intervals; beneficiaries classified as “medical improvement possible” (MIP) are scheduled for review at least once every 3 years; and those classified as “medical improvement not expected” (MINE) are scheduled for review once every 5 to 7 years. For almost a decade, because of budget and staffing reductions and competing priorities, SSA has been unable to conduct all the DI CDRs required by the Social Security Act. Moreover, the agency has conducted relatively few elective SSI CDRs. (See tables III.1 and III.2 for numbers of previous CDRs conducted and CDR funding.) In 1996, the Congress authorized about $3 billion for CDRs for fiscal years 1996 through 2002. In addition, SSA plans to earmark over $1 billion in its administrative budget for CDRs during that same time period. The DI and SSI programs have about 4.3 million beneficiaries due or overdue for a CDR in fiscal year 1996. About 2.5 million of these reviews are required by law, including about 2.4 million DI CDRs and 118,000 SSI CDRs. SSA is authorized, but not required by law, to conduct the remaining CDRs. As shown in table 1, about half of all beneficiaries are awaiting CDRs, the largest category of which is disabled workers receiving DI benefits. SSA calculated a smaller number of CDRs due or overdue: about 1.4 million for DI beneficiaries and 1.6 million for SSI beneficiaries. It excluded from its calculation DI worker beneficiaries aged 59 and older, disabled widows and widowers and disabled adult children of DI worker beneficiaries, and SSI beneficiaries aged 59 and older.
SSA officials acknowledged that CDRs are required for all of the DI beneficiaries it has excluded, but stated that, because of the backlog, the agency is focusing its attention on the portions of the CDR population that it estimates are more cost-effective to review. In general, DI worker beneficiaries and adult SSI beneficiaries in the backlog have similar characteristics, and SSA estimates a low likelihood of benefit termination as a result of medical improvement. On average, workers receiving DI and adult SSI beneficiaries have been receiving benefits for over 9 years and their predominant disability is mental disorders. While both are middle-aged, the average SSI adult beneficiary is about 9 years younger than the average DI worker beneficiary. In addition, the average estimated likelihood of benefit termination for DI and SSI MIE and MIP beneficiaries under age 60 is less than 5 percent. More data on DI and SSI characteristics are provided in tables IV.1 through IV.12. SSA uses two types of CDRs, a full medical CDR and a mailer CDR, to review beneficiaries’ status. The full medical CDR process is labor-intensive and generally involves (1) one of 1,300 SSA field offices to determine whether the beneficiary is engaged in any substantial gainful activity (SGA) and (2) one of 54 state DDS agencies to determine whether the beneficiary continues to be disabled, a step that frequently involves examination of the beneficiary by at least one medical doctor. Beginning in 1993, questionnaires—called mailer CDRs—replaced full medical CDRs for some beneficiaries to increase the cost-effectiveness of the CDR process. SSA also developed statistical formulas for estimating the likelihood of medical improvement and subsequent benefit termination based on computerized beneficiary information such as age, impairment, length of time on the disability rolls, and date of last CDR. 
For beneficiaries for whom application of the formulas indicates a relatively low likelihood of benefit termination, SSA uses a mailer CDR; when the formula application indicates a relatively high likelihood of benefit termination, SSA uses a full medical CDR. For those who receive mailer CDRs, SSA takes an additional step to determine whether responses to a mailer CDR, when combined with data used in the formulas, indicate that medical improvement may have occurred; in this small number of cases, the beneficiary is also given a full medical CDR. Individuals who have responded to a mailer CDR and are found to be still disabled are not referred for full medical CDRs, and SSA sets a future CDR date. Currently, SSA estimates that the average cost of a full medical CDR is about $1,000, while the average cost of a mailer CDR is between about $25 and $50. (See app. II for more details on the steps in the CDR process.) SSA does not include in its selection process all DI and SSI beneficiaries. SSA limits its selection process to those beneficiary categories it considers cost-effective to review on the basis of their potential for medical improvement. Approximately one-half of the DI and SSI beneficiaries currently due for CDRs are included in SSA’s process for estimating the likelihood of benefit termination through the use of statistical formulas; these estimates are the basis of selection for CDRs. Adult beneficiaries that SSA includes in its selection process are DI worker and SSI beneficiaries under age 59 who have been classified as MIEs or MIPs. SSA currently excludes MINE beneficiaries, beneficiaries aged 59 and older, and disabled adult children and disabled widows and widowers of DI worker beneficiaries from its estimation process because it considers these categories not cost-effective to review. 
While SSA considers some SSI child beneficiaries cost-effective to review, children are currently selected for CDRs without the use of formulas to estimate the likelihood of benefit termination. (See fig. 1 and table III.4.) The development and use of formulas reflect SSA’s effort to make the CDR process more cost-effective by using the estimates to identify beneficiaries who should receive a mailer CDR and those who should receive a full medical CDR. However, SSA acknowledges that the formulas are not useful for estimating the likelihood of benefit termination for most beneficiaries in this process. The formulas are primarily useful for identifying beneficiaries who SSA estimates are most or least likely to have their benefits terminated from a CDR. For individuals who fall in the middle category—which constitutes the majority of beneficiaries included in the estimation process—the formulas provide less accurate estimates, according to SSA. At this time, SSA does not select for CDRs any beneficiaries from this middle group because it is unable to determine whether a mailer or a full medical CDR is most appropriate for these beneficiaries. According to SSA, if it conducted mailer CDRs on the middle group, this would likely result in more beneficiaries being subsequently referred for full medical CDRs than the agency can accommodate in its budget. Similarly, if it conducted full medical CDRs on the middle group, it would be using a higher-cost process than SSA believes is necessary for some in this group. (See fig. 2 and table III.5.) Consequently, SSA selects a portion of the beneficiaries with the highest and lowest estimated likelihood of benefit termination for full medical and mailer CDRs, respectively. SSA has not developed statistical formulas to use in selecting SSI child and 18-year-old beneficiaries for CDRs. 
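The triage described above reduces to a three-way rule: beneficiaries with a low formula-estimated likelihood of benefit termination receive a mailer CDR, those with a high estimate receive a full medical CDR, and the middle group is not currently selected at all. The sketch below illustrates that rule; the likelihood cutoffs are hypothetical values chosen for illustration (this report does not state SSA’s actual thresholds), and only the approximate per-review costs come from the report.

```python
# Illustrative sketch of the CDR triage described above. The likelihood
# cutoffs are hypothetical -- the report does not state SSA's actual
# thresholds. The approximate per-review costs are from the report: about
# $1,000 for a full medical CDR and $25 to $50 for a mailer CDR.

LOW_CUTOFF = 0.02    # assumed: at or below this estimate, use a mailer CDR
HIGH_CUTOFF = 0.10   # assumed: at or above this estimate, use a full medical CDR

APPROX_COST_DOLLARS = {"mailer": 50, "full medical": 1000, "not selected": 0}

def select_cdr_type(termination_likelihood):
    """Map a formula-estimated likelihood of benefit termination to a CDR type."""
    if termination_likelihood <= LOW_CUTOFF:
        return "mailer"
    if termination_likelihood >= HIGH_CUTOFF:
        return "full medical"
    # Middle group: the formulas are less accurate here, so under the
    # current process these beneficiaries are not selected for any CDR.
    return "not selected"
```

A beneficiary whose mailer CDR responses, combined with the formula data, suggest possible medical improvement would then be referred for a full medical CDR, as the report describes.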
According to SSA, it selected low-birth-weight babies for CDRs of children for fiscal year 1996 because historically about 40 percent of this category have benefits terminated as a result of a CDR. Selecting low-birth-weight babies for CDRs is also consistent with CDR requirements that take effect in fiscal year 1997. For 18-year-old SSI beneficiaries in fiscal year 1996, SSA selected a judgmental sample classified as either MIE or MIP who had characteristics associated with a high likelihood of benefit termination. For fiscal year 1996, all reviews of child and 18-year-old SSI beneficiaries are to be full medical CDRs. Recognizing the need to improve the current process, SSA plans to expand and enhance its procedures for selecting beneficiaries for CDRs and conducting the reviews. Furthermore, SSA told us that these planned process improvements will limit the extent to which SSA can conduct the planned number of CDRs and reduce the CDR backlog. SSA plans to include more beneficiary categories in its selection process by expanding the use of the statistical formulas for certain MINE-classified beneficiaries and children and enhancing the formulas. Beginning in fiscal year 1997, according to SSA, formulas will be used for those beneficiaries who are classified as MINEs because they are older rather than because of their impairment. SSA also plans to develop formulas to use for children receiving SSI beginning in about fiscal year 1998. According to SSA, postponing the development of formulas for SSI child beneficiaries will allow the agency to integrate this process improvement with the knowledge it will gain about impairments that afflict children as a result of the new requirement to conduct CDRs for children in the SSI program beginning in fiscal year 1997. SSA also plans to pursue two approaches for the collection of medical treatment information about beneficiaries. 
First, SSA plans to obtain Medicare and Medicaid data and integrate the data into the statistical formulas to increase the validity of the estimated likelihood of benefit termination. SSA expects that the additional information will allow it to better determine the appropriateness of either mailer or full medical CDR for beneficiaries with estimates of benefit termination in the middle range. Second, SSA plans to develop a new type of CDR that would be conducted by mail to obtain current information about a beneficiary’s disability and treatment. Unlike the current mailer CDR, the new type of CDR would collect information directly from beneficiaries’ physicians and other medical treating sources. This information will be combined with computerized beneficiary data to help identify the beneficiaries in the middle range who are most likely to be no longer disabled and therefore warrant full medical CDRs. In the past year, new legislation has increased authorized funding for CDRs to about $3 billion by 2002, but has also required CDRs for some SSI beneficiaries for whom the reviews were previously elective. Because SSA has not finished incorporating the new CDR requirements into its plans, it is too early to determine whether the authorized funding will be adequate for all required CDRs. However, exclusions from the estimates SSA used regarding the size of the backlog in fiscal year 1996, SSA’s need to complete process improvements in order to conduct a greater number of CDRs, and other challenges all make it uncertain whether SSA will be able to be current with required CDRs within 7 years. Funding for CDRs from all sources could exceed $4 billion by 2002. The bulk of the funding for CDRs is authorized by the Contract With America Advancement Act of 1996, which authorized about $2.7 billion between 1996 and 2002. While the funding is primarily for DI CDRs, a portion can be used for SSI CDRs. 
Most recently, the 1996 amendments to the Social Security Act authorized a total of about $250 million for SSI CDRs and medical eligibility redeterminations in fiscal years 1997 and 1998. For the first time in 1996, SSA designated $200 million of its administrative budget to be used solely to conduct CDRs. By comparison, SSA spent almost $69 million to conduct CDRs in fiscal year 1995. SSA expects to continue to earmark moneys in future budgets at the same level as fiscal year 1996. (See table III.2 for SSA’s CDR spending in past years.) SSA’s plan to conduct CDRs on 8,182,300 beneficiaries between 1996 and 2002 is ambitious. The plan, as of August 1, 1996, called for SSA to conduct nearly twice as many CDRs as it has conducted over the past 20 years combined. If the plan is fully implemented, SSA will conduct the CDRs for DI worker beneficiaries under age 59, the beneficiary category the plan defines as constituting the DI backlog. In addition, it will conduct about 350,000 SSI CDRs required under the Social Security Independence and Program Improvements Act of 1994 and about 2 million additional elective SSI CDRs. (See table III.6 for the number of full medical and mailer CDRs SSA plans to conduct.) SSA’s plan reflects increased authorizations from the Contract With America Advancement Act but does not yet account for the increased authorizations or increased CDRs and related work required by the 1996 amendments to the Social Security Act. SSA’s estimate of the size of the DI CDR backlog in fiscal year 1996 excludes about 848,000 beneficiaries, composed of disabled widows and widowers, disabled adult children, and workers aged 59 and older. SSA officials acknowledge that CDRs are required for these beneficiaries, but SSA has excluded them from the plan because it focuses on those categories SSA considers more cost-effective to review. 
In addition, an SSA official said that a large number of beneficiaries in the excluded categories are expected to leave the program because they will either die or convert to retirement benefits before SSA can conduct their CDRs. However, SSA has not estimated the proportion of excluded categories who may leave the program, nor does it include in its plan beneficiaries in these categories who will come due for CDRs in fiscal years 1997 through 2002. Process improvements are critical to SSA’s ability to implement the portion of the plan that relies on the mailer CDR, a component whose use is planned to triple in fiscal year 1998. SSA’s success with the mailer CDR will rely on yet-to-be-tried improvements. Although plans to expand the formulas to more beneficiary categories and collect medical treatment information appear promising, some improvements are in the earliest stages of development with only about 1 year available for completion. Thus, SSA will need to develop these initiatives more quickly than it did previous improvements. The integration of Medicare and Medicaid data into the formulas used to estimate the likelihood of benefit termination, and the development of a new type of CDR that collects information from physicians and other medical treating sources, are expected to allow SSA to begin conducting CDRs on beneficiaries with an estimated benefit termination in the middle range. SSA said that it currently is unable to determine whether the beneficiaries with estimates in the middle range should have a full medical CDR or a mailer CDR. Without that ability, SSA cannot determine the most cost-effective type of CDR to use, and its planned expansion of the use of the mailer CDR will be in jeopardy. 
SSA faces a variety of other challenges to the implementation of its plan and the elimination of the backlog of required CDRs: First, SSA must incorporate into its workload SSI CDRs and disability eligibility redeterminations required by the 1996 amendments to the Social Security Act. These requirements include performing CDRs once every 3 years for children under 18 years old who are likely to medically improve and for all low-birth-weight babies by their first birthday. This law also requires SSA to conduct disability eligibility redeterminations on all child beneficiaries who turn 18 years old, within 1 year of their birthday, and for between 300,000 and 400,000 children who qualified for SSI under individualized functional assessments (IFA). These reviews of children would take precedence over required CDRs and may shift resources away from other CDRs. The law also changes SSI eligibility for legal aliens who have not resided in this country for 5 years before receiving benefits, necessitating CDRs of the beneficiaries to determine continuing eligibility. Second, other recent legislation poses a competing priority. The law eliminates drug and alcohol abuse as a basis for receiving disability benefits; as a result, benefits will terminate for many of an estimated 196,000 DI and SSI beneficiaries whose primary impairments are drug abuse and/or alcoholism. SSA expects many of those terminated to reapply on the basis of other impairments, thus increasing SSA’s workload of initial claims for benefits. Previous increases in initial claims adversely affected the number of CDRs conducted as resources were shifted away from that activity to process initial applications. Third, SSA’s plan includes doing CDRs for many of the estimated 3.7 million SSI beneficiaries whose CDRs may be conducted at SSA’s discretion. 
While conducting these discretionary SSI reviews may be warranted largely because relatively few SSI CDRs have been conducted in the past, it shifts resources away from conducting required DI reviews. Fourth, the daunting effort to gear up for the unprecedented CDR workload will include negotiations between SSA and 50 state DDS agencies to increase CDR workloads and DDS efforts to hire, train, and supervise additional staff. In the Contract With America Advancement Act, the Congress emphasized maximizing the combined savings from CDRs under the DI and SSI programs. SSA has been working to improve its ability to identify beneficiaries for whom conducting CDRs would be most cost-effective. Other alternatives exist, however, that would likely make CDRs more cost-effective and improve program integrity. The current system of periodic CDRs for all beneficiaries, including those with virtually no potential for medical improvement, is a costly approach for identifying the approximately 5 percent of beneficiaries who medically improve to the point of being found ineligible for benefits. Furthermore, the frequency of CDRs is currently based on medical improvement classifications that do not clearly differentiate between those most and least likely to have their benefits terminated as a result of a CDR. Our analysis found that the estimated likelihood of benefit termination, as determined by SSA’s formulas, was very similar for beneficiaries classified as MIEs and MIPs. Although millions of dollars are spent annually to conduct periodic CDRs, some beneficiaries, especially those in the DI program, have received benefits for years without having any contact with SSA regarding their disability or their ability to return to work despite continuing disability. An alternate approach could build on SSA’s efforts to identify those beneficiaries whose CDRs are likely to be cost-effective and also increase contact with beneficiaries who remain in the program. 
Such an approach involves requiring (1) CDRs of beneficiaries with the greatest potential for medical improvement, (2) CDRs of a random sample from all other beneficiaries, and (3) regular contact with the remainder of the beneficiaries to increase program integrity. Less rigid requirements regarding the frequency of CDRs are necessary if reviews are to be conducted primarily on those beneficiaries whose cases are cost-effective to review—that is, those beneficiaries with the greatest potential for medical improvement—and for SSA to still be in compliance with laws governing CDRs. According to SSA, one of the best indicators of whether beneficiaries will remain on disability rolls is whether they have previously undergone a CDR. If an initial CDR finds that the beneficiary continues to be medically eligible for disability benefits, subsequent CDRs may not be cost-effective or appropriate. Because few CDRs actually result in benefit terminations, periodic reviews, even at the maximum 3- and 7-year intervals currently used, may not be appropriate for certain beneficiaries if further reviews are not warranted after the initial CDR and at least several years on the disability rolls. Conducting CDRs on a random sample of beneficiaries from among those whose cases are believed by SSA to be less cost-effective to review is consistent with a more cost-effective and flexible approach to scheduling CDRs. It also addresses a weakness in SSA’s current process by ensuring overall program integrity. SSA’s current process excludes some categories of beneficiaries from portions of the selection process. As a result, about one-half of all beneficiaries due for a CDR will go without oversight unless SSA changes its selection process. If periodic CDRs are not conducted for all beneficiaries, it is increasingly important to develop a strategy to ensure overall program integrity. 
Contact with beneficiaries, in addition to the contact that occurs in the CDR process, can improve program integrity by reminding beneficiaries that their medical conditions are being monitored and serving as a deterrent to abuse by those no longer medically eligible for benefits. It could also support SSA’s process improvement efforts, particularly within the next year. We believe that a new type of brief mailed contact would, at a minimum, in the year it is implemented, allow SSA to contact a majority of beneficiaries with overdue CDRs to remind them of their responsibility to report medical improvements and to inquire about their interest in returning to work. By collecting CDR-related information as part of this new contact, it could also speed the development of SSA’s planned improvements to the CDR process. For example, SSA could gather information on physicians and other treating sources seen by beneficiaries since their last CDR. Such information is needed to implement SSA’s new medical treating source CDR. SSA has not evaluated this three-pronged proposal for improving the CDR process, but in our discussions with agency officials, some provided comments on one aspect of it. In discussing additional, more frequent contact with beneficiaries in addition to that which occurs during a CDR, several officials raised the issue of the cost of such an initiative. Although some administrative funds would be used for this contact, it should result in significant savings because a considerable number of beneficiaries, on the basis of SSA’s experience, can be expected to refuse repeatedly to provide requested information and, as a result, will have their benefits terminated after a prescribed due-process procedure is followed. According to SSA, those who fail to cooperate generally do so because they believe that they are no longer eligible for benefits. 
On the basis of SSA’s experience with CDRs and financial eligibility redeterminations, we assumed that 0.71 percent of the DI beneficiaries and 1 percent of the SSI beneficiaries who were contacted would have their benefits terminated for noncooperation after all due-process procedures were followed. These termination rates represent an estimated one-time net federal savings of over $1.4 billion from contacting beneficiaries in the CDR backlog, with DI savings accounting for about $1.2 billion and SSI savings accounting for about $230 million. If extended to all beneficiaries not receiving CDRs or financial eligibility redeterminations, the costs and subsequent savings from such a contact would likely be higher. See appendix I for a further discussion of our estimated savings. Time-limiting disability benefits has been proposed as a way to reduce beneficiaries’ dependence on cash benefits by removing them from the rolls after set periods of time. Time limits are intended to encourage beneficiaries to obtain treatment and pursue rehabilitation to overcome their disabling conditions and obtain productive employment. Proposals for time-limited benefits generally establish criteria for deciding which categories of beneficiaries would be subject to time limits and no longer subject to required CDRs. Some believe that such broad application of time limits could significantly reduce the number of people who would continue on the rolls indefinitely and eliminate the CDR backlog. However, others believe that it could create a large backlog of disability claims when those who are terminated because of the time limit reapply for benefits. Time limits are also thought to increase the number of people on the rolls because SSA and DDS staff may, in certain cases, be more likely to award benefits because of the limited payment period. 
Instead of subjecting all beneficiaries with nonpermanent impairments to time limits, some believe that time limits should be applied to certain subsets or categories of beneficiaries—those with impairments that are likely to improve with treatment or surgery. Such impairments include affective disorders, tuberculosis, certain fractures, and orthopedic impairments for which surgery can restore or improve function. However, our analysis of the characteristics of those in the CDR backlog suggests that implementing time-limited benefits on the basis of either medical improvement classifications or specific impairments is not currently feasible. As explained earlier, on the basis of our analysis of available CDR population characteristics, there is little correlation between the MIE and MIP classifications and the estimated likelihood of benefit termination. Moreover, our analysis did not associate any specific impairment or other characteristic with a greater likelihood of benefit termination. Furthermore, SSA and the NASI disability policy panel concluded that the MIE, MIP, and MINE classifications do not accurately reflect the likelihood of medical improvement and subsequent benefit termination. The CDR process has the potential to be used to further SSA’s return-to-work initiatives, strengthening that effort and offering greater opportunity for beneficiaries to become self-sufficient despite their continuing disabilities. While the Social Security Act states that as many individuals as possible applying for benefits under the DI program should be rehabilitated into productive activity, only about 8 percent of DI and SSI beneficiaries are referred for vocational rehabilitation (VR) services. SSA generally does little during the CDR process to determine beneficiaries’ VR needs and provide assistance to help beneficiaries become self-sufficient. 
Although in conducting full medical CDRs SSA obtains information from the beneficiary on VR services received since the initial application or last CDR, SSA and DDS staff are neither required nor instructed to assess beneficiaries’ work potential, make beneficiaries aware of rehabilitation opportunities, or encourage them to seek VR services. When conducting mailer CDRs, SSA provides beneficiaries the opportunity to indicate an interest in VR services. In our April 1996 report, we noted that medical advances and new technologies are creating more opportunities than ever for disabled people to work, and some beneficiaries who do not medically improve may nonetheless be able to engage in substantial gainful activity. Yet, weaknesses in the design and implementation of DI and SSI program components have limited SSA’s capacity to identify and assist in expanding beneficiaries’ productive capacities. Beneficiaries receive little encouragement to use rehabilitation services. We recommended in that report that the Commissioner of Social Security take immediate action to place greater priority on return to work, including designing a more effective means to identify and expand beneficiaries’ work capacities and better implementing existing return-to-work mechanisms. Our analysis of the characteristics of beneficiaries awaiting DI and SSI CDRs supports SSA’s conclusion that there is little likelihood a large proportion of beneficiaries will show sufficient medical improvement to no longer be disabled. Therefore, if SSA is to decrease long-term reliance on these programs as the primary source of income for the severely impaired, it will need to shift its emphasis. It must rely less on assessing medical improvement and more on return-to-work programs to better gauge the potential for self-sufficiency despite the lack of medical improvement. 
SSA’s plan to conduct repeated CDRs at regularly scheduled intervals may not be warranted for some beneficiaries, given the large number of beneficiaries with little likelihood of benefit termination and the emphasis on cost-effectiveness in the Contract With America Advancement Act. A more cost-effective approach might incorporate (1) a focus on conducting CDRs for beneficiaries with the greatest likelihood of benefit termination due to medical improvement, (2) conducting CDRs on a random sample of all other beneficiaries to correct a weakness in SSA’s process, and (3) contact with beneficiaries not selected for a CDR or a financial eligibility redetermination to strengthen program integrity. However, for this cost-effective approach to work, SSA needs to be able to accurately estimate the likelihood of benefit termination for all beneficiaries. Currently, our analysis shows that about one-half of all beneficiaries due or overdue for a CDR have been excluded from SSA’s process that utilizes formulas to estimate the likelihood of benefit termination. Furthermore, for many beneficiaries, the formulas result in less accurate estimates. If SSA is to be current with CDRs by 2002, it will need to meet many challenges, including expanding the use of its mailer CDR. Because such an expansion is dependent upon SSA’s ability to implement at least two of its planned process improvements, this raises further questions about SSA’s ability to implement its plan. 
We recommend that, to the extent SSA is authorized to act, the Commissioner of SSA replace the routine scheduling for CDRs of all who receive DI and SSI program benefits with a more cost-effective process that would (1) select for review beneficiaries with the greatest potential for medical improvement and subsequent benefit termination, (2) correct a weakness in SSA’s CDR process by conducting CDRs on a random sample from all other beneficiaries, and (3) help ensure program integrity by instituting contact with beneficiaries not selected for CDRs. As part of this effort, the Commissioner should develop a legislative package to obtain the authority the agency needs to enact the new process for those portions of the DI and SSI populations that are subject to required CDRs. To enable as many disabled individuals as possible to become self-sufficient, SSA should test the use of CDR contacts with beneficiaries to determine individuals’ rehabilitation service needs and help them obtain the services and employment assistance they need to enter or reenter the workforce. In commenting on a draft of this report, SSA agreed to test the use of CDR contacts with beneficiaries to determine individuals’ rehabilitation service needs and help them obtain the services and employment assistance they need to enter or reenter the workforce. SSA also agreed to begin to consider changing the current statutory requirements for CDRs as part of its effort to continually seek ways to maintain stewardship of the disability program in the most cost-effective manner. However, it disagreed with our recommendation on specific changes it should make to the CDR process. In particular, it disagreed with conducting CDRs on random samples of beneficiaries who are less cost-effective to review and with making more frequent contact with all beneficiaries. We continue to believe that ensuring program integrity requires that all beneficiaries have an opportunity to be selected for a CDR. 
In addition, we believe that efforts to monitor disability status will serve as a deterrent to abuse by those no longer medically eligible for benefits, and that maintaining periodic contacts with all beneficiaries is a sound management practice. SSA also made technical comments on our report, which we incorporated as appropriate. The full text of SSA’s comments and our responses are contained in appendix V. As arranged with your office, unless you announce its contents earlier, we plan no further distribution of the report until 7 days after the date of this letter. At that time, we will send copies to the Commissioner of Social Security. We will make copies available to others on request. Please contact me at (202) 512-7215 if you or your staff have any questions about this report. Other GAO contacts and staff acknowledgments are listed in appendix VI. This appendix provides additional details concerning our methodology. Information is included about databases used in estimating for the DI and SSI programs the number of beneficiaries due or overdue for a CDR in fiscal year 1996 and analyzing their characteristics. We also include information on our calculations of the potential one-time savings from our proposed mailed contact to collect CDR-related information from beneficiaries. We analyzed the electronic databases as provided to us by SSA officials but did not evaluate the validity of the databases or the SSA formulas used to estimate the likelihood of benefit termination. We did our review from September 1995 to August 1996 in accordance with generally accepted government auditing standards. To determine the number of DI worker beneficiaries currently due or overdue for a CDR, we used SSA’s Office of Disability’s (OD) CDR database and the Master Beneficiary Record (MBR). OD’s database contains information on all beneficiaries SSA has determined were due or overdue for a CDR in fiscal year 1996. 
We eliminated records for DI beneficiaries who were included in OD’s database but whose MBR could not be found or who did not meet the definition of being due or overdue for a CDR in fiscal year 1996. The eliminated records primarily involved cases that were not due for a CDR until the next century and were incorrectly included in the backlog population. Table I.1 contains initial and final population sizes after adjustments. OD provided the number of disabled widows and widowers and disabled adult children in the backlog but did not supply other information about them. To determine the number of SSI beneficiaries currently due or overdue for a CDR, we used OD’s database that contains information on all SSI beneficiaries SSA has determined were due or overdue for a CDR in fiscal year 1996. We drew a random sample of 15 percent of these beneficiaries stratified by whether the (1) beneficiary was an adult or a child and (2) state disability determination services (DDS) had classified the likelihood of medical improvement as expected (MIE), possible (MIP), or not expected (MINE). We eliminated from our sample beneficiaries whose CDR due dates were after fiscal year 1996 or who were over 65. On the basis of our sample data, we estimated the size of the population with these exclusions. Table I.2 contains initial population and sample sizes and final sizes after adjustments. For the population of DI workers, we obtained information on characteristics from the MBR and OD’s CDR database. From the MBR, we obtained information on age, gender, race, impairment, time receiving benefits, and time overdue for a CDR. Because information obtained from OD did not differentiate between MIE and MIP beneficiaries, we used MBR data to classify beneficiaries in the two categories. 
From OD’s CDR database, we obtained information on (1) records for all those classified as MINE and (2) estimates of the likelihood of benefit termination for MIE and MIP beneficiaries, the only categories for which likelihood data were available. We did not analyze the characteristics of DI beneficiaries who are disabled widows and widowers and disabled adult children because we did not have sufficient information to identify them in the MBR. For the sample of SSI beneficiaries, we obtained information on characteristics from SSA’s Supplemental Security Income Record Description (SSIRD) and OD’s CDR database. From the SSIRD, we obtained information on age, gender, race, impairment, time receiving benefits, and time overdue for a CDR. We also used SSIRD data to classify adults into MIE and MIP categories. From OD’s CDR database, we obtained information on (1) medical improvement classifications for all children and MINE adults; (2) records for all adults classified as MINE; and (3) estimates of the likelihood of benefit termination for adult MIE and MIP beneficiaries, the only categories for whom likelihood data were available. Because we used a sample to estimate characteristics of the universe of SSI beneficiaries due or overdue for CDRs in fiscal year 1996, the reported estimates in tables IV.7 through IV.12 have sampling errors associated with them. Sampling error is variation that occurs by chance because a sample was used rather than the entire population. The size of the sampling error reflects the precision of the estimate—the smaller the sampling error, the more precise the estimate. The tables in appendix IV contain sampling errors for reported estimates calculated at the 95-percent confidence level. This means that the chances are about 95 out of 100 that the range defined by the estimate, plus or minus the sample error, contains the true percentage. With few exceptions, the sampling errors were less than 1 percentage point. 
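The margin-of-error arithmetic behind these figures can be sketched briefly. The sketch below assumes simple random sampling (the stratified design actually used would generally produce smaller errors), and the proportion and sample size shown are hypothetical, not drawn from the report:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95-percent margin of error (in proportion units)
    for a proportion p estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1.0 - p) / n)

# Hypothetical illustration: a characteristic estimated at 30 percent
# from a sample of 60,000 beneficiaries.
print(f"+/- {margin_of_error(0.30, 60_000) * 100:.2f} percentage points")
```

With samples in the tens of thousands, margins computed this way stay well under 1 percentage point, which is consistent with the sampling errors reported for the appendix IV tables.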
This means that for most percentages, there is a 95-percent chance that the actual percentage falls within plus or minus 1 percentage point of the estimated percentage. Our estimate of a one-time savings associated with our recommendation to begin a process for more frequent contact with beneficiaries who are not selected for either a CDR or a financial eligibility redetermination during the year is based on the following SSA costs and savings estimates and assumptions. The number of DI beneficiaries who would be contacted by this initiative was estimated by subtracting the number of DI CDRs planned for fiscal year 1996 from the DI population due or overdue for CDRs as of fiscal year 1996. For the SSI program, the number of beneficiaries who would be contacted by this initiative was estimated by subtracting the estimated number of SSI beneficiaries who would receive either a financial eligibility redetermination or a CDR from the SSI population currently due or overdue for CDRs as of fiscal year 1996. We assumed that the percentage of beneficiaries who would fail to cooperate with this initiative would be the same as the most recent SSA estimates for DI CDRs and SSI financial eligibility redeterminations. We used savings estimates resulting from DI benefit terminations as provided by the Office of the Actuary. To estimate federal savings from SSI benefit terminations, we used estimates provided by SSA’s Office of the Actuary and the Department of Health and Human Services’ Health Care Financing Administration for adult beneficiaries, and offsetting cost estimates to account for the resultant increase in food stamps. Because these SSI beneficiaries would be contacted for financial eligibility redeterminations within the next 5 years, the SSI estimates we used reflect only 5 years of savings and offsetting food stamps. 
Because many DI beneficiaries who have been receiving benefits for years may never have been contacted for CDRs, the DI estimates we used reflect a lifetime of savings. As a proxy for the cost of the mailer, we used an SSA estimate of the cost of the current nonscannable mailer. Because this figure, which includes some administrative and developmental costs, overestimates the cost of a scannable mail contact, it provides a conservative estimate.

Calculation of number of beneficiaries expected to be dropped from the programs:
Beneficiaries due or overdue for CDRs in fiscal year 1996
Less: planned financial eligibility redeterminations for those who are not receiving a CDR
Beneficiaries not contacted during the year
Multiplied by: percentage of beneficiaries who fail to cooperate
Total beneficiaries expected to be dropped from the program

Per-beneficiary savings and offsetting costs:
Gross savings to DI trust fund/SSI program
Gross savings to Medicare/federal portion of Medicaid
Less: offsetting costs of additional food stamps
Net savings per beneficiary dropped from the program

Total estimated savings to the federal government:
Net program savings (number of beneficiaries dropped multiplied by net savings per beneficiary)
Less: cost of sending scannable mailer (number of beneficiaries contacted at $25)
Total estimated net savings from proposed initiative (combined total = $1,477,236,040)

This appendix provides details on SSA’s procedures for conducting CDRs. More specifically, we (1) outline the process for conducting full medical CDRs and (2) discuss SSA’s use of mailer CDRs. Generally, a full medical CDR is used to determine with certainty whether a beneficiary has medically improved to the point that the person is no longer disabled and should be removed from the disability rolls.
The full medical CDR process is labor-intensive and generally involves (1) one of 1,300 SSA field offices to determine whether the beneficiary is engaged in any substantial gainful activity (SGA), and (2) one of 54 state DDS agencies to determine whether the beneficiary continues to be disabled, a step that frequently involves examination of the beneficiary by at least one medical doctor. A full medical CDR generally follows an eight-step evaluation process (see fig. II.1).

Figure II.1: Eight-Step Evaluation Process for a Full Medical CDR
Step 1: Is beneficiary engaged in substantial gainful activity?
Step 2: Does impairment meet or equal severity as defined in medical listing?
Step 3: Has medical improvement (MI) occurred?
Step 4: Is MI related to ability to work?
Step 5: Does an exception to MI apply?
Step 6: Is impairment severe?
Step 7: Is beneficiary able to perform work done in past?
Step 8: Based on SSA guidelines, is beneficiary able to perform other work?
The process ends in one of two determinations: the beneficiary remains disabled and benefits continue, or the beneficiary is no longer disabled and benefits terminate. If an exception to MI applies in which the initial determination was fraudulently obtained or the beneficiary does not cooperate with SSA, benefits are terminated.

At step one, the SSA field office determines whether the beneficiary is engaged in SGA. Field office staff contact the beneficiary, often through a face-to-face meeting, and obtain information on the person’s condition, medical treating sources, and the effect of the impairment on the beneficiary’s ability to perform SGA. This information describes any changes that have occurred since the initial application or most recent CDR and includes types of treatment received, medicines received, specialized tests or examinations, vocational rehabilitation services received, and any schools or training classes attended since the last medical determination.
The SSA field office also obtains information on any work activities since the person became disabled, whether the condition continues to interfere with the ability to work, and whether the beneficiary has been released for work by the treating physician. Benefits are terminated for beneficiaries engaged in SGA, regardless of medical condition. A beneficiary found to be not working or working but earning less than SGA has his or her case forwarded to the state DDS office. At step two, the DDS compares the beneficiary’s condition with the Listing of Impairments developed by SSA. The listings contain over 150 categories of medical conditions that, according to SSA, are severe enough ordinarily to prevent a person from engaging in SGA. The DDS obtains medical evidence from the sources who treated the beneficiary during the 12 months prior to the CDR. If the medical evidence provided is insufficient for a disability decision, the DDS will arrange for a consultative examination by an independent doctor. A beneficiary whose impairment is cited in the listings or whose impairment is at least as severe as those impairments in the listings, and who is not engaged in SGA, is found to be still disabled. At step three, a beneficiary whose impairment is not cited in the listings or whose impairment is less severe than those cited in the listings is evaluated further to determine whether there has been medical improvement (MI). MI is defined as any decrease in medical severity of the impairment(s) present at the time of the most recent medical determination. In deciding whether MI has occurred, the DDS considers changes in symptoms, signs, and/or laboratory findings and determines whether these changes reflect decreased medical severity of the impairment(s). If MI has not occurred, the DDS skips step four and proceeds to step five to consider whether any exceptions to MI apply. 
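The branching among the steps described above and below can be sketched as a small decision routine. This is a rough illustration only: the boolean field names are invented for the example, and the rule that the second exception can apply at any point in the process is simplified to a single check.

```python
def full_medical_cdr(case: dict) -> str:
    """Sketch of the eight-step full medical CDR flow shown in figure II.1.
    The dictionary keys are hypothetical, not actual SSA data fields."""
    if case["engaged_in_sga"]:                # step 1: SGA means termination
        return "benefits terminate"
    if case["meets_medical_listing"]:         # step 2: still disabled
        return "benefits continue"
    # Steps 3 and 4 route to step 5 when MI is absent or unrelated to work.
    if not case["medical_improvement"] or not case["mi_related_to_work"]:
        if case["second_exception"]:          # step 5: fraud or noncooperation
            return "benefits terminate"
        if not case["first_exception"]:       # step 5: no exception applies
            return "benefits continue"
    if not case["impairment_severe"]:         # step 6
        return "benefits terminate"
    if case["can_do_past_work"]:              # step 7
        return "benefits terminate"
    if case["can_do_other_work"]:             # step 8
        return "benefits terminate"
    return "benefits continue"
```

For instance, a case with work-related medical improvement that still leaves a severe impairment falls through to steps six through eight before any determination is made.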
At step four, for beneficiaries for whom MI has occurred, the DDS determines whether MI is related to the ability to work. MI relates to the ability to work when there is an increase in a person’s residual functional capacity (RFC) to do basic work activities compared with the person’s RFC at the last medical determination. When MI does not relate to the ability to work, the DDS proceeds to step five. If MI relates to the ability to work, the DDS goes to step six. At step five, the DDS determines whether exceptions to MI apply. Exceptions provide a way for SSA to find a beneficiary no longer disabled in certain limited situations even though there is no decrease in the severity of the impairment. There are two exceptions to MI. The first exception applies to certain situations in which the person can engage in SGA—for example, when substantial evidence shows that advances in medical or vocational therapy or technology have favorably affected the severity of a beneficiary’s impairment or RFC to do basic work activities. The second exception can apply without regard to the person’s ability to engage in SGA—for example, in situations in which the prior determination was fraudulently obtained or in which the beneficiary fails to cooperate with SSA in providing information or in having an examination. At any point in the eight-step evaluation process, if the second exception applies, benefits are terminated. If no exceptions apply, disability benefits are continued. At step six, when either the first exception applies or MI is determined to be related to the ability to work, the DDS determines whether the beneficiary’s current impairment is severe. According to SSA standards, a severe impairment is one that significantly limits a person’s ability to do basic work activities, such as standing, walking, speaking, understanding and carrying out simple instructions, using judgment, responding appropriately to supervision, and dealing with change.
If the DDS determines that the impairment is not severe, benefits are terminated. At step seven, for beneficiaries with severe impairments, the DDS determines whether the beneficiary can still perform work he or she has done in the past. This determination is based on an assessment of the beneficiary’s current RFC. If the person is found to be able to do past work, benefits are terminated. At step eight, for beneficiaries found unable to perform work done in the past, the DDS determines whether the beneficiary can do other work that exists in the national economy. Using SSA guidelines, the DDS considers the person’s age, education, vocational skills, and RFC to determine what other work, if any, the beneficiary can perform. Unless the DDS concludes that the person can perform work that exists in the national economy, benefits are continued. Mailer CDRs enable SSA to conduct more CDRs without performing labor-intensive full medical reviews. The mailer CDR is a questionnaire through which a beneficiary provides information about health, medical care, work history, and training (see fig. II.2 for the questionnaire currently used). Currently, SSA sends mailer CDRs to a portion of beneficiaries with the lowest estimated likelihood of benefit termination. In conjunction with data on the beneficiaries’ impairment, age, and other characteristics, SSA uses responses to mailer CDRs to help identify those beneficiaries most likely to have medically improved who thus should receive full medical reviews. For example, if the beneficiary indicates that his or her health is better, SSA will generally conduct a full medical CDR. In mental impairment cases, SSA may decide that a full medical CDR is unwarranted even if the beneficiary reports MI. 
If, however, the beneficiary indicates that his or her health is the same or worse, SSA then reviews the beneficiary’s response to the next question on whether, within the last 2 years, a doctor has indicated that the person can return to work. On the basis of the beneficiary’s responses to the CDR mailer and characteristics, SSA assesses the potential effects of any hospitalizations or surgeries on the beneficiary’s health status and the importance of ongoing medical treatment or its absence to the beneficiary’s health condition. If necessary, SSA will contact the beneficiary for additional information or clarification. If SSA’s analysis indicates possible MI, the beneficiary is referred for a full medical CDR. Otherwise, the beneficiary is rescheduled for a future CDR.

[The characteristics tables from appendix IV are omitted here. They report, for DI workers and for SSI adults and children due or overdue for CDRs, distributions of age, gender, impairment category (for example, endocrine, nutritional, and metabolic diseases; disorders of blood and blood-forming organs; mental disorders, excluding mental retardation; and skin and subcutaneous tissue disorders), estimated likelihood of benefit termination, years receiving benefits, and years overdue for a CDR. Table notes: SSA does not estimate the likelihood of benefit termination for MIE and MIP beneficiaries aged 60 and over, for MINE beneficiaries, or for children; therefore, the total number with an estimated likelihood of benefit termination is less than the total for the column. For the SSI tables, the largest sampling error in each column is reported at the 95-percent confidence level.]

The following are GAO’s comments on the Social Security Administration’s letter dated September 23, 1996.

1. When SSA considers legislative changes that would make the CDR process more cost-effective, we believe that it must reassess the requirements of the existing schedule for conducting CDRs. According to SSA officials, if an initial CDR finds that a beneficiary is still disabled, subsequent CDRs are likely to result in the same conclusion. We question whether additional CDRs for that beneficiary are appropriate or cost-effective. Similarly, predictive formulas for DI worker beneficiaries allow SSA to determine those workers most likely to medically improve. Other groups not now included in the selection process may yield additional groups that are cost-effective to review.

2. While we recognize that the use of the formulas established the cases that fall into the “middle group,” SSA officials told us that SSA does not know which type of CDR—full medical or mailer—is more appropriate for those beneficiaries. SSA has at least two efforts under way to improve its ability to determine which type of CDR would be the more cost-effective.

3. We agree that SSA is currently testing the feasibility of expanding the use of formulas to the MINEs, and the report states that such an effort is under way.

4.
While cost-effectiveness is an important aspect of the CDR process, we also believe that to ensure program integrity, all beneficiaries should have some likelihood of selection for a CDR. Such a program weakness is particularly troubling given that SSA has been unable to conduct all required CDRs for almost a decade and it estimates that the backlog will not be eliminated for another 7 years.

5. Our recommendation provides a comprehensive approach to program management that focuses on cost-effectiveness, program integrity, and increased contact with beneficiaries. Increased beneficiary contact is valuable to remind beneficiaries that their disability status is being monitored and that they are responsible for reporting medical improvement. We believe that such a contact also offers an additional opportunity for SSA to further its program improvement efforts. For example, it could be used to identify medical treating sources that should receive the medical treating source mailer currently under development.

6. We believe that ongoing periodic contact with beneficiaries is essential to a well-managed program and should be done even if such an activity is considered a program operating cost. However, in estimating the costs of increased contact with beneficiaries, we considered a number of factors, including administrative and other costs. Because SSA could not provide us with estimates for these costs, we used the cost of the CDR mailer process to approximate the costs. The cost of the mailer reflects a more expensive manual process; thus we believe that it overstates the true cost of a scannable mail contact. In addition, because of the significant cost savings likely to result from the termination of benefits for individuals who do not respond—a net federal savings of over $1.4 billion—we believe that there is sufficient latitude to cover the cost of such an initiative.

7.
Given the challenges that SSA faces, we continue to believe that its ability to eliminate the backlog of all required CDRs is uncertain. It may be possible for SSA to conduct the number of CDRs in its plan. However, the plan excludes about 848,000 required CDRs that are currently due or overdue. In addition, it does not include new CDRs and disability eligibility redeterminations required by the 1996 amendments to the Social Security Act, which take precedence over other required CDRs. Additional challenges are cited in our report.

8. We are pleased that SSA agrees with our recommendation to integrate return-to-work initiatives and the CDR process and that SSA has efforts under way to elicit the assistance of federal and private sector partners in the development of a return-to-work strategy. In our report, we acknowledge that field office employees play a limited role in providing information on VR opportunities to beneficiaries when they apply, but we also note that these staff take VR-related actions during a full medical CDR, and that state VR agencies have a role in limiting candidates for rehabilitation.

In addition to those named above, the following persons made important contributions to this report: Susan E. Arnold, Senior Evaluator; Christopher C. Crissman, Assistant Director; Julian M. Fogle, Senior Evaluator; Elizabeth A. Olivarez, Evaluator; Susan K. Riggio, Evaluator; Vanessa R. Taylor, Senior Evaluator (Computer Science); and Ann T. Walker, Evaluator (Database Manager).

Supplemental Security Income: Some Recipients Transfer Valuable Resources to Qualify for Benefits (GAO/HEHS-96-79, Apr. 30, 1996).
SSA Disability: Program Redesign Necessary to Encourage Return to Work (GAO/HEHS-96-62, Apr. 24, 1996).
PASS Program: SSA Work Incentives for Disabled Beneficiaries Poorly Managed (GAO/HEHS-96-51, Feb. 28, 1996).
SSA’s Rehabilitation Programs (GAO/HEHS-95-253R, Sept. 7, 1995).
Supplemental Security Income: Disability Program Vulnerable to Applicant Fraud When Middlemen Are Used (GAO/HEHS-95-116, Aug. 31, 1995).
Social Security Disability: Management Action and Program Redesign Needed to Address Long-Standing Problems (GAO/T-HEHS-95-233, Aug. 3, 1995).
Supplemental Security Income: Growth and Changes in Recipient Population Call for Reexamining Program (GAO/HEHS-95-137, July 7, 1995).
Disability Insurance: Broader Management Focus Needed to Better Control Caseload (GAO/T-HEHS-95-164, May 23, 1995).
Supplemental Security Income: Recipient Population Has Changed as Caseloads Have Burgeoned (GAO/T-HEHS-95-120, Mar. 27, 1995).
Social Security: Federal Disability Programs Face Major Issues (GAO/T-HEHS-95-97, Mar. 2, 1995).
Supplemental Security Income: Recent Growth in the Rolls Raises Fundamental Program Concerns (GAO/T-HEHS-95-67, Jan. 27, 1995).
Social Security: Rapid Rise in Children on SSI Disability Rolls Follows New Regulations (GAO/HEHS-94-225, Sept. 9, 1994).
Social Security: New Continuing Disability Review Process Could Be Enhanced (GAO/HEHS-94-118, June 27, 1994).
Disability Benefits for Addicts (GAO/HEHS-94-178R, June 8, 1994).

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony.
To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.

Pursuant to a congressional request, GAO provided information on how to improve the Social Security Administration’s (SSA) continuing disability reviews (CDR) process for Disability Insurance (DI) and Supplemental Security Income (SSI) beneficiaries, focusing on: (1) the number and characteristics of individuals who are due for CDR; (2) how SSA selects individuals for and conducts CDR; (3) whether available resources are adequate for conducting required CDR; and (4) potential options for improving the CDR process. GAO found that: (1) about 4.3 million DI and SSI beneficiaries are due or overdue for CDR in fiscal year 1996; (2) SSA selects beneficiaries for CDR on the basis of the likelihood that their benefits will be terminated; (3) SSA plans to improve its CDR selection process by obtaining Medicare and Medicaid data and mailing questionnaires to beneficiaries’ physicians; (4) funding for CDR could exceed $4 billion by 2002; (5) SSA must incorporate additional CDR required by legislation into the agency’s workload and conduct CDR for beneficiaries whose CDR were previously done at the agency’s discretion; (6) SSA should conduct CDR on a random sample of beneficiaries normally excluded from the selection process to improve program integrity; (7) SSA’s proposal for time-limited benefits may increase the agency’s workload when beneficiaries who are terminated from the program reapply for benefits; (8) the formula used by SSA to select beneficiaries for CDR excludes approximately half of those who are due or overdue for CDR; and (9) SSA could utilize CDR to strengthen its return-to-work initiatives.
APHIS is the lead federal agency for preventing infestations of harmful foreign pests and diseases, protecting U.S. agriculture, and preserving the marketability of agricultural products in the United States and abroad. The agency’s Plant Protection and Quarantine unit (PPQ) exercises regulatory authority to inspect agricultural imports, as well as nonagricultural products that may carry pests, largely through its Agricultural Quarantine Inspection (AQI) activities. In fiscal year 1996, APHIS allocated an estimated $151.9 million for AQI activities and had about 2,600 inspectors located at 172 land, sea, and air ports of entry. APHIS has other inspection duties, such as inspections of imported and exported live animals, that are not the subject of this report. APHIS is one of the three primary Federal Inspection Service (FIS) agencies responsible for monitoring the entry of cargo and passengers into the United States. The two other FIS agencies are the U.S. Customs Service in the Department of the Treasury and the Immigration and Naturalization Service (INS) in the Department of Justice. The U.S. Customs Service is primarily concerned with collecting duties on imports, enforcing antismuggling laws, and interdicting narcotics and drugs. INS inspects foreign visitors to determine their admissibility into the United States and guards against illegal entry. Recent multilateral trade agreements—the North American Free Trade Agreement (NAFTA) and the results of the General Agreement on Tariffs and Trade’s Uruguay Round of Multilateral Trade Negotiations (Uruguay Round)—have provisions that affect APHIS’ inspection activities. Both agreements contain provisions on signatories’ use of sanitary and phytosanitary standards that limit the introduction of foreign pests and diseases. 
To prevent the standards from impeding agricultural trade, they must be based on scientific principles and risk assessment, provide a level of protection appropriate to the risk faced, and not restrict trade more than necessary. APHIS’ inspection workload has increased dramatically since 1990 because of growth in imports and exports, increased travel, and increased smuggling. Furthermore, policy changes have exacerbated workload demands by increasing pressure to expedite the processing of passengers and cargo into the United States. The workload has been directly affected by the increase in international trade and travel between fiscal years 1990 and 1995. Overall, the volume of exports and imports rose 45 percent and 52 percent, respectively, while agricultural exports and imports increased 35 percent and 31 percent, respectively. Moreover, the number of international passengers traveling to the United States increased almost 50 percent, reaching 55 million passengers in fiscal year 1995. Furthermore, increases in the number of ports of entry, as well as increased risk at existing ports, have expanded APHIS’ workload. Along the Mexican border alone, six new border stations were approved between 1988 and 1993, while several other major facilities are scheduled for expansion. According to APHIS officials, each new port of entry requires at least five inspectors. Along the U.S.-Canadian border, changes in risks associated with passengers and cargo have created the need for increased inspections. APHIS staff at the Blaine, Washington, port told us that increased risks were responsible for an increase from 4 inspectors in 1990 to 18 in 1996. In addition to conducting inspections, inspectors are responsible for reviewing and issuing certificates for agricultural exports, working on temporary assignments away from their normal work location, and performing other duties, such as preventing smuggling and fumigating cargo. 
As exports increase, inspectors have had to issue and review a growing number of certificates for U.S. exports. Temporary duty assignments range from domestic emergency eradication of pests and diseases and foreign preclearance activities to meetings and training. Studies in California and Florida have found that the smuggling of agricultural products into the United States has grown and presents a serious pest risk. As a result of increased smuggling activity across the Canadian and Mexican borders, APHIS inspectors are performing antismuggling activities, such as working on investigations and surveillance of markets and border areas. Along with the greater inspection workload, inspectors face increasing pressure to expedite the flow of goods and people across U.S. borders. Responding to the growing importance of trade to the national economy and to recent trade agreements, APHIS has taken an active role in facilitating trade. Towards this end, APHIS and its FIS partners have adopted new customer service standards to move the increasing import and passenger volume through ports of entry within specific periods. For passengers, these standards call upon the agencies to clear international airline passengers within 45 minutes of arrival. Similarly, APHIS has adopted standards to schedule inspections of perishable cargo within 3 hours of being notified of its arrival. APHIS acknowledges the conflict between enforcement responsibilities and trade facilitation and is seeking an appropriate balance as guidance for the inspection program. APHIS made a number of changes to its inspection program to respond to the demands of its growing workload. It shifted funds and staff away from other programs to the inspection program, broadened the range of inspection techniques, and stepped up efforts to coordinate with the other FIS agencies. 
In addition, to help measure the effectiveness of its inspections and to form a basis for making further improvements, APHIS recently initiated an effort to compare the rate at which restricted items are entering the United States, and the risks associated with those items, with the inspection rates at individual ports of entry. This effort is designed to determine if the current inspection program is adequately addressing the risks of harmful pests and disease entering the country and to identify which of the country’s ports of entry are most vulnerable to such risks. APHIS has been shifting more funds into inspection activities since fiscal year 1990. Through fiscal year 1996, the budget for AQI activities rose 78 percent to $151.9 million, while APHIS’ overall funding rose 20 percent. To provide this increased funding, APHIS reduced its spending for several other programs, such as the brucellosis eradication program, which fell from $59 million in 1990 to $23 million in 1996. The 1990 and 1996 farm bills also authorized the collection of and expanded access to user fees for inspections. User fees have become the principal revenue source for the AQI program, accounting for about $127 million of program revenues in fiscal year 1996. (See app. II for more detail on funding and staffing for fiscal years 1990-96.) Since 1990, APHIS has raised AQI staffing levels about 44 percent—from 1,785 to 2,570 positions. The agency shifted positions from other programs to meet the increased workload. In addition, as a result of the 1996 farm bill’s provisions allowing greater access to user fee revenues and removing a staff ceiling, APHIS is in the process of hiring about 200 new inspectors. APHIS has taken several steps to make better use of its inspection resources. 
To supplement the normal practice of performing visual inspections of selected cargo and baggage, APHIS has significantly expanded the use of alternative inspection practices, such as detector dogs and x-ray equipment. APHIS increased the number of detector dog teams from 12 in 1989 to 48 in 1996. Inspectors are also periodically using inspection blitzes—highly intensive inspections of baggage or cargo—to augment their visual inspection of selected items. To improve its ability to select passengers for inspection, APHIS is refining the list of risk characteristics that inspectors use in selecting passenger bags for inspection. Roving inspectors currently use these selection characteristics in airports to make referrals for agricultural inspection. The agency is also studying opportunities to use roving inspectors at land border ports. Finally, APHIS is funding research on new x-ray technology that will identify air passengers’ baggage containing restricted items. APHIS has also attempted to reduce the workload at entry ports by (1) inspecting passengers and products in the country of origin or (2) allowing lower-risk products to enter with less intensive scrutiny. Under the first effort, APHIS has staff oversee or conduct inspections to preclear products and passenger baggage in the country of origin so that inspectors at receiving U.S. ports primarily monitor these products or baggage. APHIS’ International Services unit now operates cargo preclearance inspections in 29 countries and limited passenger preclearance programs in 2 countries. In addition, APHIS initiated a cargo release program along the Mexican border to reduce inspections of high-volume, low-risk commodities and allow the products to enter with less intensive scrutiny. For example, according to APHIS, the port of entry with the highest volume of agricultural imports from Mexico—Nogales, Arizona—had about 75 percent of its shipments in 1995 in the cargo release program. 
In addition to taking steps aimed at improving the use of its own resources, APHIS is working with the other FIS agencies—Customs and INS—to improve coordination. For example, several work units are working with the FIS agencies, through Port Quality Improvement Committees, to improve port operations and are cross-training FIS staff to educate them on APHIS’ inspection needs. In 1996, the FIS agencies and the Department of State issued a report with recommendations for improving screening of passengers as they arrive at U.S. borders. In 1996, APHIS began providing computer equipment to 33 maritime ports and 26 airports to enable them to link up with information in Customs’ databases on cargo and prior violations. APHIS is trying to improve the linkage with the cargo manifest database to overcome early problems in obtaining and reviewing cargo information. For example, APHIS is developing its Automated Targeting System, which will automatically scan Customs’ cargo manifest database to identify shipments for inspection. In October 1996, APHIS began implementing the AQI Results Monitoring Program, which is intended to measure the effectiveness of its inspections nationwide and provide information on which ports of entry pose the highest risk of having harmful pests and diseases enter the country. At each port, the program will also identify risks of harmful pest and disease entry associated with various commodities, their country of origin, and their means of entry. APHIS expects the program to be in place at most ports of entry by September 1997. The results monitoring program uses random surveys of cargo and passengers entering the United States to estimate the rates at which restricted items are entering the country and the risks of harmful pests and diseases associated with those items. The program allows APHIS to determine whether the number of inspections performed at a given location for a given commodity adequately addresses the risk posed. 
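The kind of comparison the results monitoring program makes can be sketched in a few lines. This is an illustration only, not APHIS' actual method: the port names, survey counts, and inspection rates below are invented for the example.

```python
# Sketch of the results-monitoring idea described above: a random survey
# estimates how often restricted items arrive (an "approach rate"), which
# is then set against how intensively a port actually inspects.
# All ports and figures below are hypothetical.

def approach_rate(survey_hits, survey_size):
    """Estimated share of randomly surveyed units carrying restricted items."""
    return survey_hits / survey_size

ports = {
    "Port A": {"survey_hits": 18, "survey_size": 400, "inspection_rate": 0.02},
    "Port B": {"survey_hits": 3, "survey_size": 400, "inspection_rate": 0.05},
}

# Flag ports where the estimated risk exceeds the current inspection rate.
flagged = [
    name
    for name, p in ports.items()
    if approach_rate(p["survey_hits"], p["survey_size"]) > p["inspection_rate"]
]
print(flagged)  # Port A's estimated 4.5% approach rate exceeds its 2% coverage
```

In this invented example, Port A would be a candidate for more inspection resources even though Port B conducts proportionally more inspections.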
The program replaces the traditional measure of inspection performance, the quantity of material intercepted, with new performance indicators related to risks associated with commodities entering the country. This approach will enable APHIS to modify its inspection program to reduce the threat of harmful pests while not unduly restricting trade. Despite the changes in resources and activities, APHIS’ inspection program at most of the ports we visited has not kept pace with the increasing pressure from its growing workload and mission. Heavy workloads have often led APHIS inspectors to shortcut cargo inspection procedures, thereby jeopardizing the quality of the inspections conducted. Furthermore, APHIS has little assurance that it is deploying its limited inspection resources efficiently and effectively because of weaknesses in the staffing models it uses for making such decisions. APHIS’ inspectors are to follow certain procedures when examining goods and passengers entering the United States in order to minimize the possibility of pest infestation and disease. However, at 11 of the 12 ports that we examined, inspectors were not always implementing these procedures for the (1) number of inspections that should be conducted, (2) number of samples of a shipment that should be examined, or (3) manner in which a sample should be selected. According to regional APHIS officials and internal studies, these types of problems may not be limited to the sites we visited. At 11 ports of entry we visited, including the 3 busiest ports in the United States, inspectors said that they are unable to examine enough vehicles or cargo containers to consider their inspection to be representative of the movement of goods, to control the flow of restricted goods, and to minimize risk of pests and disease. Several of these inspectors said that they were not confident that the frequency of inspections was adequate to manage the risks. 
For example:
- At the Mexican border crossing with the heaviest passenger vehicle volume in the country, a supervisory inspector said the staff were inspecting less than 0.1 percent of the passenger vehicular traffic because of the high volume of traffic and the low number of referrals from FIS officials who initially screen the vehicles. APHIS officials have set a target of inspecting about 2 percent of all passenger vehicles.
- Because of staffing shortages, one work unit along the U.S.-Mexican border can provide inspector coverage of a busy pedestrian crossing for only 8 of the 18 hours of port operations.
- As a result of a low staffing level and the numerous other duties that must be carried out at a busy U.S.-Canadian border location, an APHIS manager told us that inspectors cannot maintain a regular presence at any of the four border crossings at the port. The inspectors are available to inspect only when the other FIS agencies make referrals to APHIS.

Problems in conducting a sufficient number of inspections were not limited to the locations we visited. An APHIS headquarters official told us the agency does not conduct any inspections at 46 northern and 6 southern ports of entry. Instead, the agency relies on the other FIS agencies to perform agricultural inspections, when needed, at these low-volume ports, although the risks are unknown. In addition, even for the inspections that they conduct, inspectors do not always examine the number of samples suggested by the guidance. For example, inspectors at two ports of entry told us that they were unable to inspect a large enough sample in a given cargo shipment to meet APHIS’ inspection guidance. More specifically, during peak season at one high-volume port along the southern border, inspectors usually inspected one box from each shipment selected for inspection, or less than 0.5 percent of the shipment. This is far less than the 2-percent sample recommended in APHIS’ guidance. 
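As a rough illustration of the 2-percent sampling guideline mentioned above, the arithmetic might look like the following. The helper name and the minimum-of-one-box behavior are assumptions made for the example, not APHIS policy.

```python
import math

def boxes_to_sample(total_boxes, sample_fraction=0.02):
    """Boxes to open from a shipment under a 2-percent guideline.

    Hypothetical helper: rounds up, and opens at least one box even
    for very small shipments.
    """
    return max(1, math.ceil(total_boxes * sample_fraction))

print(boxes_to_sample(800))  # 16 boxes from an 800-box shipment
print(boxes_to_sample(30))   # a small shipment still gets one box
```

Under this reading, opening a single box from an 800-box shipment (as described above) would cover about 0.1 percent of the load, well short of the 16 boxes a 2-percent sample implies.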
At another port—the second largest in the country—inspectors curtailed their inspections of cut flowers, which are considered a high-risk cargo. The APHIS port director said that inspectors are able to conduct only cursory inspections during high-volume periods because the flowers are perishable and the cut flower industry has continually pressured both political representatives and APHIS to have inspections performed more quickly. Finally, in contrast to recommended inspection procedures, APHIS inspectors do not always select samples in a manner that ensures that the samples are representative of the shipment being inspected. APHIS’ guidance emphasizes the importance of selecting representative samples and specifically cautions against “tailgate inspections”—inspections of goods that are stored near openings and that may not be representative. A random survey of refrigerated cargo containers in Miami, conducted by APHIS and the state of Florida, documented the pitfalls of such inspections. The survey found that less than 40 percent of the pests discovered in the survey were located near the container opening. Despite the limitations associated with tailgate inspections, inspectors at five ports said they routinely use them in inspecting cargo containers. This practice extends beyond the ports we visited: A 1996 APHIS report on cargo inspection monitoring noted that many ports have resorted to tailgate inspections because of heavy trade volume. In addition to tailgate inspections, we found one port using another sampling practice that also reduced assurance that the samples examined represented the entire shipment. In Miami, the second busiest port in the country, we observed inspectors allowing import brokers of cut flowers to select samples for inspection. With this practice, brokers could select samples that are likely to pass inspection, which reduces the credibility of the inspection. 
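The contrast between a tailgate sample and a representative one can be sketched briefly; the box positions and counts here are illustrative, and the function names are invented.

```python
import random

# A "tailgate" sample draws only from boxes at the container opening,
# while a representative sample draws uniformly from the whole container.
def tailgate_sample(n_boxes, k):
    return list(range(k))  # the first k positions, nearest the door

def representative_sample(n_boxes, k, seed=None):
    rng = random.Random(seed)
    return sorted(rng.sample(range(n_boxes), k))

# A 500-box container with 10 boxes opened:
print(tailgate_sample(500, 10))                  # positions 0-9 only
print(representative_sample(500, 10, seed=42))   # spread across the load
```

The Miami survey finding cited above, that less than 40 percent of pests were near the container opening, is exactly why a door-only draw can miss most of what a uniform draw would catch.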
The staffing models that APHIS uses to allocate its inspection resources have several weaknesses that undermine the agency’s ability to ensure that inspectors are deployed to areas that pose the highest risk of entry of pests or disease. The weaknesses fall in three areas. First, the staffing models rely on inaccurate inspection workload data, which could skew the models’ analyses. Second, the models do not contain risk assessment information similar to that produced by the results monitoring program because APHIS has not determined how to include risk data in the model’s design. This limitation restricts APHIS’ ability to place inspection resources at the ports of entry with the highest risks of pest and disease introduction. Finally, the models are not used to allocate inspection resources on a national basis. Rather, they are used only to allocate resources within APHIS regions. APHIS’ staffing models are intended to help determine the number of inspectors that should be stationed at various locations across the country. There are four separate models for calculating staffing needs at airports, land border crossings, maritime ports, and plant inspection stations. Each of the models calculates staffing needs by, in essence, multiplying various measures of workload activity (such as number of inspections, number of vehicle arrivals, and number of pest interceptions) by the time it takes to complete these activities and converting that product into an estimate of the number of inspectors needed. The accuracy of the workload data used in the models is key to ensuring that projected staffing needs are also accurate. However, APHIS has little assurance that the data are accurate. The inspection workload data used in the model generally comes from APHIS’ Workload Accomplishment Data System (WADS). 
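The staffing-model arithmetic described above amounts to a workload-times-time calculation. A minimal sketch follows; the activity names, unit times, workload counts, and the productive-minutes assumption are all invented for illustration and are not drawn from APHIS' models.

```python
# Staffing need = sum(workload count x minutes per activity), converted
# into inspector positions. All inputs below are hypothetical.
PRODUCTIVE_MINUTES_PER_YEAR = 2080 * 60 * 0.8  # assumed 80% productive work year

def inspectors_needed(workload, minutes_per_unit):
    """Estimate inspector positions implied by annual workload counts."""
    total = sum(count * minutes_per_unit[activity]
                for activity, count in workload.items())
    return total / PRODUCTIVE_MINUTES_PER_YEAR

workload = {"cargo_inspections": 40_000, "vehicle_arrivals": 250_000,
            "pest_interceptions": 1_200}
minutes = {"cargo_inspections": 15, "vehicle_arrivals": 1,
           "pest_interceptions": 30}
print(round(inspectors_needed(workload, minutes), 1))  # ~8.9 positions
```

The sketch also shows why the report's data concerns matter: if the workload counts fed into such a formula are inaccurate, the staffing estimate is skewed in direct proportion.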
APHIS officials at all levels of the inspection program questioned the accuracy of the data in this system because of inconsistencies in the way the data were compiled at ports and reported through regional offices to APHIS headquarters. APHIS inspectors told us that some data they submitted, such as information on endangered species, were inaccurately reported or did not appear in the national WADS summaries. Officials in one region said some data were omitted because they were not useful at the national level, while inaccurate data may be due to data entry error. Furthermore, workload statistics were often estimates of activity rather than real-time information. Finally, we found that another source of inaccurate data in WADS can be traced to the poor quality of inspections. If, for example, inspectors are reporting the results of tailgate inspections rather than inspections of representative samples of cargo, WADS data on the number of interceptions could be misleading. A second weakness with the current staffing models is that they do not take into consideration variations in the risks of harmful pests and disease entering the country. These risks can vary by several factors, such as the commodity, country of origin, port of entry, and means of entry. The results monitoring program may be able to provide this type of analysis. However, APHIS officials have not yet determined how to incorporate this information into the models. Furthermore, there are some concerns about the accuracy of the results monitoring program because it too is based, in part, upon information from the WADS. Finally, the potential benefits of using the staffing models are limited because they are not used to allocate inspection resources on a national level. APHIS has instructed its regions and ports to use the staffing models to help allocate staff at the regional and port levels. 
However, regional officials at two of the four regions told us that they use the staffing models primarily for budget development, not for allocating staff among the ports within their regions. APHIS faces a difficult mission—to ensure that tons of cargo and millions of passengers entering the United States do not bring in harmful pests or diseases. Its mission will only become more difficult as the volume of trade increases and the pressure to facilitate trade through expedited inspections becomes greater. In the ports we visited—which included the country’s three busiest ports of entry—APHIS inspectors are struggling to meet these challenging work demands. Unfortunately, these demands have sometimes resulted in shortcutting inspection procedures, such as performing tailgate inspections and allowing brokers to choose the samples for inspection. In turn, these shortcuts have diminished the quality of inspections and reduced assurance that an APHIS-inspected shipment entering the United States contains no harmful pests or diseases. In view of APHIS’ increasing workload, it is critical that the agency be able to allocate its limited inspection resources to the ports of entry with the highest risks of pest and disease introduction. APHIS currently does not have the management tools to do so. Specifically, the workload information in the WADS is key to staffing allocation decisions. However, APHIS officials question the accuracy of the WADS information, noting, among other things, that the system does not include all needed workload information and that some of the information it does include consists of estimates that may be inaccurate. Beyond problems with the workload information, APHIS’ current staffing models do not take into consideration variations in the risk of pest or disease introduction by commodity, country of origin, and other factors. APHIS’ results monitoring program will provide important information on risk. 
However, APHIS officials have not yet determined how this information will be integrated into their staffing models or staffing decisions. Finally, APHIS has not made a commitment to using its staffing models to allocate inspection resources from a national perspective. Rather, it plans to examine resource allocations only within regions. As a result, APHIS may lack the flexibility for effectively shifting its resources to target them to the highest risks. To better ensure that APHIS identifies harmful pests and diseases through the inspections that it conducts, the Secretary of Agriculture should direct the Administrator of APHIS to issue guidance that emphasizes the need for APHIS inspectors to adhere to minimum inspection standards in terms of the methods used to select samples from shipments chosen for inspection. We recognize that meeting these minimum standards may result in fewer inspections, but we believe that a smaller number of reliable inspections is preferable to a larger number of inspections that do not comply with inspection guidelines. To strengthen APHIS’ ability to allocate its inspection resources more effectively and efficiently, we recommend that the Secretary of Agriculture direct the Administrator of APHIS to develop and implement plans that will improve the reliability of data in the WADS; integrate a risk assessment factor, developed on the basis of the results monitoring program, into its staffing allocation process; and position APHIS to evaluate inspection resources in terms of national rather than regional needs. We provided a draft of this report to APHIS for its review and comment. Appendix IV contains APHIS’ written response to our draft report. APHIS agreed that the issues identified in each of our four recommendations needed to be addressed and indicated actions under way to address them. 
For example, to ensure that APHIS inspectors adhere to minimum inspection standards, APHIS said that it will provide guidance to reinforce the importance of using the best possible procedures for preventing pests from becoming established and will ensure that the inspection standards are consistent with the risk determinations conducted through the results monitoring activity. To improve the data in the WADS, APHIS plans to ensure that inspection program policies are consistently applied nationwide and that the data used in decisionmaking are accurate and reliable. To integrate a risk assessment factor into its staffing process, APHIS is developing a prototype model of staffing guidelines to integrate data from its results monitoring and risk assessments. To evaluate inspection resources in terms of national needs, APHIS is consolidating its four PPQ regions into two and believes that this will contribute significantly to achieving national consistency in all APHIS programs. To assess APHIS’ inspection program, we reviewed various studies of pest exclusion efforts and interviewed officials at APHIS headquarters, two regional offices, and work units at 12 ports of entry around the country. At work units, we observed actual inspections; obtained data on workload, operating procedures, and mission; and discussed recent developments and changes to the inspection program. Ports we visited were on the northern and southern borders of the United States and included international airports, seaports, rail yards, and mail stations. We performed our review from May 1996 through March 1997 in accordance with generally accepted government auditing standards. Appendix I provides details on our objectives, scope, and methodology. This report is being sent to congressional committees responsible for U.S. agriculture; the Secretaries of Agriculture and the Treasury; the U.S. Attorney General; the Administrator, APHIS; the Commissioners, U.S. 
Customs Service and Immigration and Naturalization Service; and other interested parties. We will also make copies available to others on request. Please contact me at (202) 512-5138 if you or your staff have any questions. Major contributors to this report are listed in appendix V. The objective of our review was to assess the effectiveness of the U.S. Department of Agriculture’s Animal and Plant Health Inspection Service (APHIS) in minimizing the risks to agriculture from pests and diseases entering the United States. Specifically, we (1) identified recent developments that challenge the Agricultural Quarantine Inspection (AQI) program’s resources and ability to carry out its mission, (2) reviewed APHIS’ efforts to cope with these developments, and (3) reviewed the effectiveness of the inspection program in keeping pace with workload changes. We conducted our review at APHIS headquarters, two regional offices, and work units at 12 ports of entry located in the four APHIS regions responsible for plant inspection programs. APHIS management officials guided our selection of the ports we visited in order to ensure that these locations were representative of the challenges and problems faced by APHIS inspectors at all 172 staffed ports of entry. Ports we visited were on the northern and southern borders of the United States and included international airports, seaports, rail yards, and mail stations. Table I.1 lists the work units that we visited. To identify recent developments affecting the inspection program’s workload and mission, we reviewed statistical reports on agricultural imports and exports and international air passenger arrivals from 1990 through 1995. We also reviewed reports prepared by APHIS and state agriculture agencies on trends in workload volume and changes in pest risk. APHIS provided data on the cost of foreign pest and disease infestations to U.S. agriculture, but we did not verify the accuracy of the data or the methodology used. 
At the ports of entry we visited, we discussed changes in the volume and complexity of the port’s workload and analyzed data on the number of phytosanitary export certificates issued by the inspection staff. We also contacted APHIS’ regulatory enforcement officials who analyze trends in smuggling agricultural goods into the United States. We identified increases in ports of entry by reviewing reports from the General Services Administration (GSA) and discussing these increases with GSA headquarters officials. To assess changes in APHIS’ mission, we reviewed APHIS’ mission statements, internal reports, and organizational initiatives. At all locations, we discussed with officials the impact of recent trade agreements or other developments on APHIS’ workload and mission. To review the changes APHIS has made to cope with recent developments, we identified changes in resource allocations to the AQI program by reviewing APHIS’ budget and staffing documents for 1990 through 1996 and reports on user fees. We discussed with APHIS officials (1) shifts in staffing and funding, (2) programs used to reduce the inspection workload at U.S. ports of entry, (3) program priorities, (4) the implementation and use of the results monitoring program and staffing models, and (5) inspection coordination with the other Federal Inspection Service (FIS) agencies. We analyzed data on inspection techniques and technologies and discussed the use of various techniques with APHIS officials at all the locations we visited. At several ports of entry, we observed the use of x-ray equipment and detector dogs in inspections. We discussed border cargo release programs with APHIS field staff at U.S.-Mexican border ports we visited and preclearance programs with officials from the APHIS International Services unit. To evaluate the overall effectiveness of the inspection program, we reviewed inspection manuals and discussed policies, procedures, and requirements with APHIS headquarters officials. 
At the ports of entry we visited, we discussed with port directors, supervisors, and inspectors how inspections are conducted and how they could be improved. We also reviewed studies and documents on various APHIS and FIS initiatives aimed at improving inspections and discussed these initiatives with officials at the locations we visited. Additionally, we observed inspections for various modes of entry into the United States—airport cargo and arriving international air passengers; pedestrians, vehicle and bus passengers, and truck cargo at land border crossings; maritime cargo and ships at seaports; rail cars and rail passengers; and international mail stations. We performed our review from May 1996 through March 1997 in accordance with generally accepted government auditing standards. APHIS significantly increased its funding and staffing for the AQI program in the 1990s in an effort to keep pace with growing workload demands. APHIS’ funding for the program rose by 78 percent from fiscal year 1990 through 1996. Figure II.1 lists the funding allocations APHIS made for the inspection program for fiscal years 1990-96. Inspection staffing levels rose about 44 percent from fiscal year 1990 through 1996. Figure II.2 lists the authorized staffing levels for inspection activities. The AQI program is APHIS’ first line of defense in protecting U.S. agriculture from harmful pests and diseases. To implement the inspection program, APHIS has prepared manuals to guide inspections of commercial shipments and passengers and developed an array of inspection techniques. These manuals show that a reliable and credible cargo inspection program requires an adequate number of inspections and the selection of individual inspection samples that are representative of whole shipments. Procedures for inspecting commercial shipments vary according to such factors as the type of product, risk levels associated with the product, and country of origin. 
Detecting the presence of plant pests or contaminants in a commercial shipment is predicated on inspecting a sample of the shipment. The procedures include guidance for ensuring that the sample is representative of the whole shipment. Inspection procedures for pedestrians, passengers, and passenger vehicles follow a two-stage process, primary and secondary inspection. Primary inspection involves screening passengers, their baggage, and vehicles by questioning the passengers, reviewing their written declaration, and visually observing for referral for further examination. APHIS is refining the characteristics used in the screening process to select passengers and baggage for secondary inspection. Secondary inspection involves a more detailed questioning of the passenger and a visual examination of baggage contents, if necessary. To detect pests and contraband, AQI staff use a range of strategies, such as screening, detector dogs, and x-rays. For airline flights, APHIS has also developed a list of low-, medium-, and high-risk countries of origin to help guide the selection process in the primary inspection area.

Major contributors to this report: Ron E. Wood, Assistant Director; Dennis Richards; Mary K. Colgrove-Stone; Michael J. Rahl; and Jonathan M. Silverman.

GAO reviewed the Animal and Plant Health Inspection Service's (APHIS) efforts to minimize the risks to agriculture from pests and diseases entering the United States, focusing on: (1) recent developments that could challenge the ability of APHIS' Agricultural Quarantine and Inspection program to carry out its mission; (2) APHIS' efforts to cope with these developments; and (3) the effectiveness of the inspection program in keeping pace with workload changes. GAO noted that: (1) several developments are challenging APHIS' ability to effectively manage its inspection program; (2) key among these is the rapid growth in international trade and travel since 1990, which has dramatically increased the amount of cargo and the number of passengers that inspectors are to examine; (3) in addition, policy changes that emphasize facilitating trade and customer service have put pressure on APHIS to carry out its increased inspection responsibilities more quickly in order to speed the flow of passengers and trade; (4) APHIS has taken several steps to cope with these developments; (5) it increased funding and staffing for inspections by about 78 percent and 44 percent, respectively, from fiscal year (FY) 1990 to 1996; (6) the agency has attempted to improve the efficiency and effectiveness of its inspections by: (a) using other inspection techniques in addition to visual inspections, such as x-ray technology and detector dogs, to pinpoint prohibited agricultural products, such as untreated fruits, vegetables, and meats from countries that present a higher risk for pests and diseases; and (b) coordinating with other Federal Inspection Service agencies to maximize inspection activities; (7) APHIS began implementing its results monitoring program in FY 1997 to better understand 
which ports of entry and commodities pose the highest risks of entry for harmful pests and disease; (8) despite these changes, inspectors at the ports GAO visited are struggling to keep pace with increased workloads; (9) heavy workloads have led to inspection shortcuts, which raise questions about the efficiency and overall effectiveness of these inspections; (10) on a broader scale, APHIS' efforts to address its workload problems are hampered by inadequate information for determining how to best deploy its inspectors; (11) in particular, its current staffing models, mathematical formulas used to help determine inspection staffing needs, are not based on reliable information and do not incorporate risk assessment factors similar to those being developed in its results monitoring program; and (12) consequently, APHIS has little assurance that it is deploying its limited inspection resources at the nation's ports of entry that are most vulnerable to the introduction of pests and diseases. |
WIOA required federal and state officials to build a new framework for the workforce system over the two years following its enactment. This framework included the federal regulations implementing the law and state plans outlining each state's overall strategy for workforce development and how that strategy will meet identified skill needs for job seekers and employers. All states submitted their first plans under WIOA to DOL and Education by April 1, 2016, and the federal agencies issued final regulations in June 2016. Although final regulations were not issued by the time states submitted their plans, the federal agencies had previously issued proposed regulations and guidance, and had provided technical assistance to states. According to DOL and Education officials, they approved all of the state plans with conditions that each state needed to address to meet requirements. In July 2016, these plans and the common performance measures for the six core programs took effect. (See fig. 1.) WIOA requires that states submit unified state plans to DOL every four years and revisit these plans every two years, submitting their planned modifications to the relevant federal agencies for approval. Among other changes, WIOA requires that state workforce development boards, consisting of the governor, representatives from business and labor communities, and various state officials, assist in the development of "unified" state plans covering six core programs. The six programs are administered by DOL and Education (see table 1). States were required to submit either unified or combined plans for implementing WIOA. Unified plans include planning for the six core programs. Such plans were optional under the prior authorizing law, the Workforce Investment Act of 1998 (WIA). Combined plans include planning for the six core programs, as well as planning for one or more additional programs and activities. 
In these plans, states are required to include career pathways strategies, which help job seekers obtain employment or education, and sector partnership strategies, which engage employers in the workforce system. States are also required to identify regions that will implement regional planning strategies to coordinate local services, among other activities. Career pathways strategies align and integrate education, job training, counseling, and support services to help individuals obtain postsecondary education credentials and employment in in-demand occupations. WIOA defines in-demand occupations as those that currently have or are projected to have enough positions in an industry sector to have a significant impact on the state, regional, or local economy. Industry or sector partnership strategies organize multiple employers and key stakeholders, such as education and training programs, in a particular industry into a working group that focuses on the shared goals and human resources needs of that industry. Regional planning strategies allow for strategic planning across local workforce area boundaries for the purposes of coordinating services and administrative cost-sharing, among other activities. Career pathways and sector partnership strategies are often interconnected. In order to design career pathways programs that train workers for jobs that employers need to fill, these programs need input from employers, which can be gained through sector partnerships. DOL has noted that one of the key elements of developing a comprehensive career pathways system is to identify industry sectors and engage employers. With respect to the requirement that states establish regions, DOL regulations state that the purpose of these regions is to align workforce development activities and resources with larger regional economic development areas and available resources to provide coordinated and efficient services to both job seekers and employers. 
This is a change from WIA, which permitted but did not require regional planning. Under WIOA, states must designate local workforce areas, draw regional boundaries, and assign local areas to these regions. In practice, states need to designate local areas before they can assign them to regions. If local areas dispute their local area designations, they may appeal them to the state board, and if the state board denies the appeal, they may appeal to DOL. In assigning local areas to regions, states are required to consult local officials and consider labor market and economic development areas. After states establish regions, local boards are to work together to develop regional plans that incorporate local plans for each local area in the region, collect and analyze regional labor market data, develop regional sector partnerships, and coordinate the sharing of administrative costs, among other responsibilities. In our previous work on leading strategic planning practices, we have noted the importance of stakeholder involvement, assessing the environment, and aligning activities, processes, and resources to meet goals and objectives during the early phases of strategic planning. Under WIOA, states are required to address all of these aspects of strategic planning: Involving stakeholders. WIOA requires that state boards have diverse membership, which helps involve relevant stakeholders in the planning process. WIOA also features a “sunshine provision” to disclose board proceedings and information regarding the state plan. In addition, as previously noted, states must consult local officials when establishing regions. Assessing the environment. The state plan must assess the current environment in which employment and training programs operate by including analyses of state economic conditions and labor market information, among others. Aligning activities and resources. 
The state plan must include a strategy for aligning the core programs, other state or local partner programs, and other resources available to the state to achieve its strategic vision and goals. The five states we selected as case studies reported using three main approaches to develop plans for implementing WIOA provisions related to career pathways, sector partnerships, and regional planning. Specifically, states 1) built on their prior experience with career pathways and sector partnerships, which officials viewed as interconnected, 2) increased the involvement of stakeholders in the planning process compared to past efforts, and 3) used multiple sources of labor market information (LMI) to identify employer needs and draw the boundaries of the new regions required under WIOA. Officials in each of the five states told us that they intended their state plans to provide a high-level vision that allows regions and local areas the flexibility to develop workforce strategies that respond to specific regional and local needs. Each of the five selected states had experience with career pathways, sector partnerships, or both prior to the enactment of WIOA, and officials in these states told us that WIOA provided an opportunity to enhance the strategies they already had in place. In contrast, each of the states had less prior experience with regional planning. Colorado officials said they began implementing sector partnerships in 2005 and have incorporated lessons learned over time, including ensuring that these partnerships are led by the business community. Under WIOA, Colorado officials plan to expand and enhance their network of business-led sector partnerships. They also plan to use these partnerships to help design career pathways programs by identifying the knowledge, skills, and abilities that employers are seeking for in-demand jobs. 
Similarly, Pennsylvania's state plan proposes building on over 10 years of experience with sector partnerships by increasing technical assistance and seeking additional funding for these partnerships, and exploring the development of a certification program for such partnerships. In Ohio, officials also said they plan to further develop their existing sector partnership efforts, including identifying and disseminating best practices from six pilot partnerships that they funded with grants in 2014. They also plan to implement two new career pathways initiatives: an initiative to encourage greater enrollment of Temporary Assistance for Needy Families (TANF) participants in employment and training services, and a new wage pathway model that provides an alternative to the traditional career pathway model. According to Ohio officials, this wage pathway model provides low-income, low-skill youth who are not yet ready to enter a career pathway program with on-the-job skills training and supportive services to help them make a successful transition to a career pathway program and obtain employment in an in-demand occupation. Officials in four of the five states said their previous state plans under WIA served as the foundation for their WIOA plans because the new law supported their existing strategies. For example, California officials said that WIOA's shift in emphasis from matching job seekers with open positions to helping job seekers increase their skills and education aligns with the state board's efforts to promote these goals in California over the last five years. Similarly, Ohio officials told us that some of their efforts to reform their workforce system over the last several years have emphasized the development and expansion of career pathways and sector partnerships, which aligns with WIOA's emphasis on these strategies. 
In contrast to career pathways and sector partnerships, officials in only three of the five states reported they had some prior experience with regional planning. In California, Colorado, and Ohio, officials said they had some type of regional structure in place prior to the enactment of WIOA, while officials in Kentucky and Pennsylvania said they were relatively new to regional planning. Overall, officials in each of the five states said they had less prior experience with regional planning than with career pathways or sector partnership strategies, and told us that they have made or will make major changes to implement new regional requirements under WIOA. In each of the five selected states, officials reported involving a wide range of stakeholders in enhancing or developing career pathways, sector partnerships, or regional planning strategies, and sought their input in various ways. For example, officials said they included stakeholders such as state agencies overseeing core programs, state agencies overseeing other partner programs, local workforce boards, community colleges, business and industry representatives, labor organizations, and community-based organizations. According to officials, state workforce boards sought input from stakeholders in various ways, such as organizing cross-program or issue area planning committees or discussion groups, holding statewide or regional meetings, and conducting focus groups or surveys. Compared to past planning efforts, each of the five states reported that they have increased stakeholder involvement in WIOA planning by engaging existing stakeholders to a greater extent, and four of the five states also reported involving new stakeholders. Each of the five states included the six core programs in their plans, as required by WIOA, and four states developed combined plans that also included other partner programs (see table 2). 
According to state officials, some of these core or other partner programs were involved in previous workforce planning efforts but had greater involvement in the WIOA planning process, while others were new stakeholders under WIOA. For example, officials in two states said they involved the state agency overseeing the Vocational Rehabilitation program to a greater extent under WIOA, and officials in the other three states said that the WIOA planning process was the first time they involved this agency in workforce planning efforts. Beyond the core and other partner programs included in the state plan, officials in four of the five states reported involving other new stakeholders in WIOA planning. While they varied by state, these new stakeholders included state agencies such as Kentucky Adult Education and the Department of Corrections and Rehabilitation in California, as well as organizations representing particular interests, such as a local chapter of the National Skills Coalition in Colorado and an initiative supporting better jobs for individuals with disabilities in Ohio. In addition, these new stakeholders included new members of the state workforce development board. For example, in Colorado, officials said they sought new board members with expertise in a variety of areas, including youth and disability issues. In Kentucky, officials told us they focused on identifying new members who could represent the interests of the business community, and that these new members have helped to design career pathways by sharing their knowledge about the skills and certifications needed for in-demand jobs. Officials in two of the five states reported that this increased involvement from stakeholders enhanced awareness about the roles and services of other related programs. 
For example, California officials said that through the WIOA planning process, state agencies have exchanged detailed information about their programs, including eligibility requirements, the types of services they provide, and how they deliver these services. In Colorado, officials similarly reported that greater stakeholder involvement led to increased awareness across programs and cross-training of staff. Specifically, officials said they cross-trained more than 1,000 staff from a variety of programs on the eligibility requirements for and services provided by other programs to which they refer job seekers, including the core and other partner programs in the state plan, as well as the unemployment insurance program and career and technical education programs, among others. In addition, officials in four of the five states—California, Colorado, Kentucky, and Ohio—reported that involving new stakeholders in the planning process uncovered opportunities to better serve individuals with barriers to employment. In Ohio, officials said they involved the state agency that oversees the Vocational Rehabilitation program and advocates for individuals with disabilities in planning efforts for the first time, which helped identify opportunities to better serve these individuals through the workforce system. For example, these officials said that including these stakeholders led them to train staff at all of their one-stop centers on disability awareness. In addition, a Colorado official told us that involving new stakeholders in the planning process led workforce programs to assess their efforts to conduct outreach and provide services to homeless individuals. Overall, officials in each of the five states reported that the WIOA planning process reinforced important lessons from past planning efforts about when and how to obtain stakeholder input. 
For example, Kentucky officials noted the importance of having all stakeholders—including the business community—provide input into the planning process from the very beginning. Similarly, Colorado officials said they used a transparent planning process that involved county commissioners, local workforce boards, and the business community to ensure buy-in and support for the state plan. Each of the five selected states consulted multiple sources of LMI, which helped officials in some states better align career pathways strategies with employer needs, and helped officials in all five states draw regional boundaries and assign local workforce areas to those proposed regions. Specifically, each of the five states reported they supplemented the traditional LMI provided by the state LMI office with real-time LMI, and three of them—Colorado, Kentucky, and Ohio—reported validating this information with employers to confirm its accuracy as they developed their state plans. In these three states, officials said that validating this information with employers helped better align their career pathways strategies with employers' needs. In Colorado, for example, officials said that when they asked employers to review projections that the state would experience a shortage of workers in the medical industry over the next 10 years, employers said they would actually need more workers over a shorter time period. Officials then used this information to focus their career pathways strategies on preparing individuals to fill the jobs projected to have worker shortages by identifying entry-level jobs that could eventually lead to these higher-level jobs. In Ohio, officials said they surveyed employers to identify their most urgent workforce needs and incorporated this information into a searchable online database of in-demand jobs that guided their planning. 
According to officials, this information from employers helped better align career pathways strategies with employers’ needs for workers with certain credentials or certificates. In each of the five selected states, officials told us they also used multiple sources of LMI to help draw the proposed boundaries of the new regions required under WIOA. Specifically, officials in each of the states said they considered labor market and economic development areas, as required by WIOA, in addition to a variety of other factors. These factors included the locations of key industries, commuting patterns, local workforce areas’ existing partnerships across local area boundaries, and where population centers were located. Officials in each of the five states told us that they intend their state plans to provide a high-level vision that allows regions and local areas the flexibility to develop workforce strategies that respond to specific regional and local needs. For example, Colorado officials said they intend their state plan to provide a framework for regional and local entities as they develop career pathways strategies that are driven by the needs of regional industries. The state plan references Colorado’s step-by-step guide for building regional career pathways systems, including goals, outcomes, and the roles of different partners. Similarly, California officials described their plan as a broad conceptual framework for building regional and local partnerships rather than an operations manual. The plan requires regions to develop customized “regional sector pathway” programs, which are career pathways programs that help job seekers attain postsecondary credentials that are valued by employers and aligned with regional workforce needs. 
To implement their state plans, officials in each of the five states said they have developed, are developing, or plan to develop guidance for regional and local planning, including guidance on implementing career pathways and sector partnerships. For example, in September 2016, California issued guidance for developing regional and local plans which sets the state’s policy direction for these plans and clarifies the purpose of the regional plan as compared to the local plan. According to the guidance, the primary purpose of regional plans is to develop regional sector pathway programs, while the primary purpose of local plans is to facilitate job seekers’ access to local services that lead to regional sector pathway programs. The state plans to provide additional guidance and technical assistance materials about model regional and local partnerships, including best practices, as local officials develop and implement these plans. In two states, officials told us that regional and local plans were due by June 2016, and in the other three states, officials said they expect regions and local areas to submit these plans by spring 2017. Furthermore, officials in all five states said they expect regional or local plans to address how funding could be shared across programs to implement career pathways and sector partnership strategies, although they acknowledged that these decisions will involve further discussions between state and local partners. Officials in three states said they did not address cost-sharing issues in their state plans because they believed that the state plan was not the right vehicle for addressing these issues or because they did not have sufficient time to make these decisions prior to the submission deadline. 
Officials in the selected states reported facing challenges establishing regions, which they addressed by revising their regional boundaries or increasing the number of regions and by providing incentives for future regional collaboration or innovation. As previously noted, a change under WIOA is that states are required to establish regions, which may be made up of multiple local workforce areas that are to work together to develop regional workforce plans. According to DOL, the purpose of these regions is to align workforce development activities and resources with larger regional economic development areas and available resources in order to provide coordinated and efficient services to both job seekers and employers. In addition, DOL stated that regional partnerships better support the implementation of sector partnership and career pathways strategies, and may also lower costs and increase the effectiveness of service delivery through greater coordination. As discussed above, officials in each of the five selected states reported that they have made or will make major changes to implement these new regional requirements under WIOA, even though three of the states had some prior experience with regional planning. To draw proposed regional boundaries and assign local workforce areas to these regions, officials in the five selected states said they reviewed multiple sources of LMI and also consulted with local officials. For example, California officials told us that they developed a draft regional map in consultation with local workforce boards and state workforce associations. Officials in Colorado and Ohio said they considered dividing some local workforce areas into multiple regions in order to reflect regional labor markets or existing economic development regions. Colorado proposed this approach in its initial state plan. 
Specifically, Colorado officials told us they proposed dividing a large local area that covers most of the state into four different regions to align with regional economies. However, state officials said that when DOL conducted an early review of their state plan during the public comment period before they officially submitted it, DOL officials advised them that they could not divide a single local area into multiple regions. (For Colorado’s initial and final regional maps, see appendix II.) DOL’s final regulations also reflect this interpretation of WIOA in that they require regions to consist of one or more local areas. DOL officials explained that if local areas were divided into multiple regions, they would be required to participate in multiple regional planning efforts, which could be burdensome. Given that local areas cannot be split between regions, officials noted the importance of drawing local area boundaries in order to align them with regional economies, although they acknowledged that this can be challenging due to historical and political factors. For example, some local officials may be reluctant to change local area boundaries that have been in place for many years. Despite their efforts to consult local officials when developing proposed regions, officials in four states said they faced challenges due to local workforce areas’ concerns about becoming part of a region, including questions about funding and the possibility that regional priorities could differ from local priorities. As a result, in three states officials told us they revised regional boundaries or increased the number of regions in response to feedback or appeals from local areas. For example, in California, after the state board received and granted appeals from two local areas, officials told us they made minor changes to regional boundaries. 
Officials said that one local area requested it be assigned to a different region due to commuting patterns and its existing relationships with community colleges. Similarly, Colorado officials said that successful state-level appeals from two local areas led them to revise regional boundaries and create two new regions. In addition, Ohio officials told us their initial draft plan included five regions but they ultimately designated 10 regions after a local area successfully appealed its local area designation to DOL, and the federal agencies issued final regulations, both of which affected the overall regional configuration. In three of the five states, the next step in their regional planning process is for regions and local areas to develop workforce plans. DOL and Education officials said they are aware that states have faced challenges designating regions, and plan to provide states with technical assistance on implementing other WIOA regional planning requirements. In addition, the agencies have developed the Innovation and Opportunity Network, an online community of practice through which they can facilitate technical assistance, information sharing, and training to help states, regions, and local areas implement WIOA. To help ensure that regional planning efforts continue to progress, officials in four of the five states reported that their states are providing incentives for collaboration or innovation at the regional level. For example, California officials said the state is using a portion of the governor’s discretionary WIOA funding to support the SlingShot initiative, which aims to build the regional coordination and leadership necessary for effective regional planning. This initiative supports collaborative efforts by a broad range of stakeholders within a region to identify and then work to solve employment challenges. 
California is also using discretionary funding to support the Workforce Accelerator Fund initiative, which aims to better align existing programs to accelerate employment outcomes for populations with barriers to employment. In Kentucky, officials said the state is supporting regional collaboration and innovation by using the governor’s discretionary funding to encourage the development of regional sector partnerships. In addition, a Kentucky official said the state is also using a private grant to fund regional efforts to develop career pathways and sector partnership strategies and ensure that they help job seekers obtain credentials that are valued by employers. We provided a draft of this report to the Department of Education and the Department of Labor for review and comment. Both departments provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Labor, the Secretary of Education, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or sherrilla@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. We examined 1) the approaches selected states have taken to develop plans for implementing Workforce Innovation and Opportunity Act (WIOA) provisions related to career pathways, sector partnerships, and regional planning, and 2) the related planning challenges, if any, these states have encountered and how they have addressed them. 
To address both objectives, we reviewed relevant federal laws, regulations, and guidance; conducted case studies in five states; and interviewed federal Department of Labor (DOL) and Department of Education (Education) officials. To conduct case studies, we selected five states (California, Colorado, Kentucky, Ohio, and Pennsylvania), which were among those identified by national associations as 1) having substantial experience developing and implementing career pathways, sector partnerships, or regional planning prior to WIOA, and/or 2) making significant changes to implement any of these strategies under WIOA. Specifically, we asked four national associations that represent or have researched the state entities involved in WIOA strategic planning—the National Association of Workforce Boards, National Association of State Workforce Agencies, National Governors Association, and National Skills Coalition—to identify states that met one or both of our selection criteria. Associations could recommend states in any of the following six categories: experienced with career pathways strategies; experienced with sector partnership strategies; experienced with regional planning strategies; making significant changes to implement career pathways strategies; making significant changes to implement sector partnership strategies; and making significant changes to implement regional planning strategies. To select states that had various levels of experience with these three strategies, we considered input from national associations and reviewed relevant reports. Specifically, we considered the number of recommendations that individual states received, and maximized representation across the categories in which the states were recommended. We also considered the number of associations that recommended an individual state. In addition, we reviewed information from two National Skills Coalition reports on sector partnerships and state plans. 
All five selected states were mentioned in at least one of these reports. In the selected states, we interviewed officials including the executive leadership of the state workforce development board, state board members, and state agency officials—such as officials from the state workforce, human services, or education agency—who were involved in developing the state plan. The information we obtained from state officials provides in-depth examples but is not generalizable to all states. In order to gather further information on their planned strategies and corroborate information from interviews, we reviewed the plans submitted by these five states to DOL and Education and other relevant documentation, such as documents summarizing the state planning process, state board meeting minutes and agendas, state guidance for local workforce areas, and other reports or materials produced by the state board. After reviewing and analyzing information from the states, we asked the state officials we interviewed to review relevant portions of the draft report for accuracy. We conducted this performance audit from September 2015 to November 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. According to Colorado state workforce board officials, they proposed dividing a large local workforce area that covers most of the state into four different regions to align with regional economies. 
However, officials said that when the Department of Labor (DOL) conducted an early review of their state plan during the public comment period, before they officially submitted it, DOL officials advised them that they could not divide a single local area into multiple regions (see fig. 2). DOL’s final regulations also reflect this interpretation of WIOA in that they require regions to consist of one or more local areas. In addition to the contact named above, Danielle Giese (Assistant Director), Caitlin Croake (Analyst-in-Charge), Jennifer Cook, Adam Gomez, Drew Nelson, and Paul Wright, along with Susan Aschoff, Holly Dye, Suellen Foth, Alex Galuten, Farrah Graham, Thomas James, Bill MacBlane, Dan Meyer, Chris Morehouse, Mimi Nguyen, Michelle Sager, and Almeta Spencer made significant contributions to this report. Workforce Innovation and Opportunity Act: Information on Planned Changes to State Reporting and Related Challenges. GAO-16-287. Washington, D.C.: March 7, 2016. Workforce Innovation and Opportunity Act: Performance Reporting and Related Challenges. GAO-15-764R. Washington, D.C.: Sept. 23, 2015. Workforce Investment Act: Local Areas Face Challenges Helping Employers Fill Some Types of Skilled Jobs. GAO-14-19. Washington, D.C.: Dec. 2, 2013. Workforce Investment Act: Innovative Collaborations between Workforce Boards and Employers Helped Meet Local Needs. GAO-12-97. Washington, D.C.: Jan. 19, 2012. | Enacted in 2014, WIOA aims, in part, to increase coordination among federal workforce development programs, which are administered primarily by DOL and Education. GAO was asked to review selected states' approaches for addressing certain WIOA provisions in their state workforce plans. GAO examined 1) approaches selected states have taken to develop plans for implementing career pathways, sector partnerships, and regional planning strategies, and 2) related planning challenges these states have encountered and how they have addressed them. 
GAO reviewed relevant federal laws, regulations, and guidance. GAO also conducted case studies in five states (California, Colorado, Kentucky, Ohio, and Pennsylvania). GAO selected these states based on input from national associations about their level of experience with career pathways, sector partnerships, and regional planning strategies, and GAO's review of relevant reports. In these states, GAO interviewed state workforce board officials and agency officials who were involved in developing the state plan. The information GAO obtained provides in-depth examples but is not generalizable to all states. GAO also reviewed state plans and other relevant documentation. In addition, GAO interviewed DOL and Education officials. Officials in five selected states reported using three main approaches to develop plans for implementing career pathways, sector partnerships, and regional planning strategies under the Workforce Innovation and Opportunity Act (WIOA). GAO selected states that had various levels of experience with these strategies. Specifically, as a condition of receiving funding, federal agencies require state plans under WIOA to include career pathways strategies, which align education, job training, and support services to help job seekers obtain employment. These plans must also include sector partnership strategies, which help employers in an industry address shared goals and hiring needs. In addition, states are required to establish regions, which may be made up of multiple local workforce areas. According to the Department of Labor (DOL), these regions are intended in part to align workforce activities with regional economies. To address these requirements, officials in each of the five states reported: Building on prior experience to enhance career pathways strategies, sector partnership strategies, or both. 
For example, Pennsylvania's state plan proposes building on over 10 years of experience with sector partnerships by increasing technical assistance for them and exploring the development of a certification program for these partnerships. Increasing the involvement of stakeholders, which uncovered ways to enhance services in most selected states. For example, Ohio officials said they involved the state agency that oversees employment services for individuals with disabilities and advocates for these individuals in planning efforts for the first time, which led them to provide training on disability awareness for staff at all local workforce centers. Using multiple sources of labor market information, which helped better align career pathways strategies with employer needs in some selected states. For example, Colorado officials said they asked employers to review projected worker shortages in the medical industry over the next 10 years, and employers said they would need more workers over a shorter period of time. Officials used this information to focus their career pathways strategies on preparing individuals to eventually fill the jobs that were projected to have worker shortages. Officials in four of the states GAO selected reported facing challenges establishing regions due to local areas' concerns, which they addressed by revising regional boundaries or increasing the number of regions and by providing incentives for regional collaboration or innovation. In three states, officials said they revised their regions in response to local concerns. For example, California officials said they redrew regional boundaries after a local area requested that it be assigned to a different region based on commuting patterns, among other factors. In addition, to encourage regional collaboration or innovation, officials in four states reported providing financial incentives. 
For example, a Kentucky official said the state is using a private grant to fund regional efforts to develop career pathways and sector partnership strategies. Additionally, DOL and Department of Education (Education) officials told us that they plan to support states with related technical assistance. GAO is not making recommendations in this report. DOL and Education provided technical comments on a draft of this report, which GAO incorporated as appropriate. |
The U.S. rail transit system consists of the following three primary modes: commuter rail, heavy rail transit, and light rail transit (see figs. 1 to 3). The numbers in figures 1 through 3 are based on National Transit Database information, current for 2008, adjusted by GAO for two systems and for a reporting error. Current FTA data include streetcars as part of light rail, but streetcars can be distinguished from other light rail cars because they are usually smaller and designed for shorter routes, more frequent stops, and lower travel speeds. Transit agencies in six large cities—New York; Chicago; Washington, D.C.; Boston; Philadelphia; and San Francisco—own the majority of passenger transit rail cars in the United States. (See fig. 4.) Agencies in these six cities manage over 16,000 rail cars, or more than 80 percent of all the active U.S. transit rail cars. The number of transit systems is increasing beyond the large metropolitan areas that currently dominate the market, although new systems tend to be small. For example, the Utah Transit Authority began operating a 45-car commuter rail system for Salt Lake City in April 2008. The Puerto Rico Highway and Transportation Authority began operating a 74-car heavy rail system in San Juan in December 2004. The Valley Metro in Phoenix began operating a 50-car light rail system in December 2008. In addition, several new streetcar systems have opened within the last decade in cities such as Portland, Oregon; Tampa, Florida; and Little Rock, Arkansas. Furthermore, additional cities—such as Oklahoma City, Oklahoma; Boise, Idaho; and Cincinnati, Ohio—have plans for transit-oriented development that include new streetcar lines. Rail car procurements generally take years to complete and can involve many technical experts, including consultants. A time frame of 3 to 4 years is considered quick for a complete procurement, and many take much longer. 
For example, according to officials at one transit agency, it can take about 8 years from design to final acceptance for heavy rail cars. The procurement process is lengthy because it involves four phases: the transit agency’s initial design; advertising, communication, and contract award; the manufacturer’s detailed car design, prototype development, and testing; and production. (See table 1.) Foreign-based companies supply most of the U.S. market for passenger rail transit cars. Over the last decade, foreign-based companies, with U.S. plants, have produced almost all of the more than 8,000 new rail cars purchased by U.S. transit agencies. For example, Bombardier (Canada) has been a major builder of commuter cars for U.S. transit agencies. Alstom (France) and Kawasaki (Japan) have been major suppliers of heavy rail cars, and Siemens (Germany) and Kinki Sharyo (Japan) have been major suppliers of light rail cars. U.S.-based rail car manufacturers serve niche markets for streetcars or unconventional commuter rail cars, with typically sporadic and small orders—fewer than 20 cars. The U.S. rail car market is a small percentage of the world market for rail cars. In particular, the U.S. transit rail car fleet constitutes about 5 percent of the worldwide total. Other countries, such as Japan and Germany, which have smaller populations than the United States, account for a larger percentage of the world transit rail car fleet. Officials from rail car manufacturers said that these countries have relatively more resources invested in public transit infrastructure compared with the United States. The small size of the U.S. fleet also corresponds to a small share of the annual world demand for newly manufactured cars, reflecting the limited extent of U.S. transit rail systems relative to those of other countries. See figure 5 for percentages of the worldwide transit rail car fleet in different locations. 
In addition to the relatively small overall demand for rail cars in the United States, individual rail car orders are often small. Almost half of the transit agencies we interviewed procure rail cars in relatively small quantities. For example, United Streetcar, a streetcar producer, told us that it expects orders of just three to six cars at a time. The level of U.S. rail car purchases is also uneven over time. See figure 6 for the number of rail cars produced per year for U.S. transit agencies from 1970 through 2008. The erratic nature of the U.S. market is primarily attributable to the following: Large transit agencies, such as the Metropolitan Transportation Authority (MTA) New York City Transit, procuring cars in large orders that cause spikes in the market. For example, over half of the cars built in 2001 and 2002 were for an MTA New York City Transit procurement of over 1,600 cars. Replacements of existing fleets, which are dependent upon the life cycle of the fleets. Transit agencies generally do not procure rail cars on an annual basis, and transit rail cars typically last 25 years or more with a midlife overhaul, depending upon the materials used and the car design. Some individual transit agencies may replace their fleets of rail cars at the same time. For example, the Bay Area Rapid Transit is in the process of purchasing 775 cars to replace and expand its entire fleet of 669 cars. U.S. rail car designs have a great deal of customization that differs among transit agencies due to legacy infrastructure design, interoperability concerns with existing fleets, and local preferences. In particular, many heavy rail transit agencies have systems that require rail cars with customized designs, rather than standard designs. Most of these systems were built long ago, and their designs have unique characteristics, such as tunnel size, curve radii, and the ability to support rail car weight. 
The unique features of many systems can limit the ability of a car manufacturer to produce cars for more than one agency from one car design. For example, the Washington Metropolitan Area Transit Authority (WMATA) could not purchase the Chicago Transit Authority’s (CTA) cars because they were too tall for WMATA’s tunnels, and CTA could not purchase WMATA’s cars because they were too long to make the sharp turns of the Chicago system, according to officials from both agencies. Furthermore, when procuring rail cars for existing transit systems, agency officials generally must include specifications to ensure that the new cars will be interoperable with their existing fleets. In addition to design features based on infrastructure requirements, transit agencies may also request other local preferences, such as rail car compatibility with platform heights, door requirements, and certain safety features. This level of specificity in transit rail car design is more common in the U.S. market than it is in other countries’ markets. Transit agency, transit association, and manufacturing officials said that rail car designs tend to be more similar among transit agencies in Western Europe and within some Asian countries, in part, because some countries have established standard performance specifications or designs that manufacturers must follow prior to building systems. While rail car manufacturer and transit agency officials said that unique infrastructure requirements are most prevalent in heavy rail systems, light rail and streetcar systems can also have unique requirements. For example, transit agency officials said that differences, such as the length of the city blocks where the cars travel, may influence the car length and overall design for light rail and streetcars. 
Furthermore, although officials from almost half of the commuter rail agencies we interviewed said they either had borrowed rail cars from another commuter rail agency or had jointly purchased cars with another commuter agency, other officials described unique infrastructure requirements, such as tunnel size, that necessitate customized designs. According to rail car manufacturer and transit agency officials, certain characteristics of U.S. transit agencies’ rail car orders have implications for the per-car prices that transit agencies pay for their cars. Manufacturers said that certain fixed costs related to manufacturing start-up and rail car design are key factors in the per-car price of an order. Rail car manufacturer officials said that the order size necessary to capitalize on economies of scale varies from a few cars to over 100 cars, depending upon the transit mode, the degree of customization of the car design, and certain production costs. In general, as manufacturers and component suppliers—such as door manufacturers—produce more cars using the same design and production line, the cost per car is reduced due to the manufacturers’ ability to spread the design and other fixed production costs over a larger number of cars. Additionally, if more units are purchased, component costs are usually lower on a per-unit basis because suppliers are also able to capture greater economies of scale in component production. As a result, transit agencies with large orders, such as the MTA New York City Transit, have been able to get relatively low per-car prices. However, certain characteristics of other U.S. transit agencies’ demand for rail cars may prevent manufacturers and suppliers from capitalizing on these benefits. These characteristics include the following: Small orders and customized designs: U.S. rail car orders tend to be small and customized, which results in higher per-car costs. 
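The economies-of-scale arithmetic described in this paragraph, in which fixed design and start-up costs are spread over the number of cars ordered, can be illustrated with a brief sketch. The dollar figures below are hypothetical and are not drawn from any actual procurement.

```python
# Hypothetical per-car price model: fixed design and start-up costs are
# spread across an order, so larger orders yield lower per-car prices.

def per_car_price(design_cost, startup_cost, unit_production_cost, order_size):
    """Return the per-car price needed to recover all costs of an order."""
    fixed_costs = design_cost + startup_cost
    return fixed_costs / order_size + unit_production_cost

# Illustrative figures only: a $30 million custom design, $5 million in
# production start-up costs, and $1.5 million in per-car production cost.
small_order = per_car_price(30e6, 5e6, 1.5e6, 10)   # 10-car order
large_order = per_car_price(30e6, 5e6, 1.5e6, 700)  # 700-car order

print(f"10-car order:  ${small_order / 1e6:.2f} million per car")
print(f"700-car order: ${large_order / 1e6:.2f} million per car")
```

Under these assumed figures, the 10-car order prices out at $5.00 million per car while the 700-car order prices out at $1.55 million per car, consistent with the pattern officials described in which large orders obtain relatively low per-car prices.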
First, officials from all four of the major manufacturers we interviewed said that the cost to design cars for a particular procurement is a significant up-front cost, and that it is important to be able to spread design costs over a number of cars to obtain economies of scale. Officials from several manufacturers said that designing a rail car on the basis of unique specifications can add from $20 million to $100 million to the cost of the order. However, the degree of design specificity is also a factor. If the design is fairly standard, the design costs will be lower, enabling the efficient production of a smaller order of cars. However, if the design is highly customized, design costs are greater, and it may be very expensive to produce a small order. Second, to build customized cars, manufacturers may have to retool their production lines for each procurement. This retooling results in start-up costs that are embedded in the price per car. Manufacturers said that once there is a break in production, expenses are incurred because manufacturers and component suppliers may need to reconfigure or retool their production line before they can begin producing rail cars and their component parts. For example, officials from the Virginia Railway Express (VRE), a commuter rail system, said that when they purchased one set of cars in 2006, they were able to obtain a price of approximately $1.6 million per car, which was considered to be a favorable price, because the manufacturer had recently finished production of similar cars for Chicago’s Metra. However, in a later procurement in 2010, VRE paid approximately $2.2 million per car, which was significantly higher, in part because the manufacturer had to restart the production line for this car design. The impact of design and start-up costs is exacerbated by the small order sizes that are typical in the United States. Erratic nature of transit agencies’ procurements: The uneven nature of the U.S. 
transit agencies’ procurements also impacts the price of rail cars. Officials from five of the six manufacturers we interviewed said that the erratic nature of U.S. rail car demand reduced their ability to maintain continuous production, which likely results in higher production costs per rail car. Because transit agencies in the United States may procure many cars in some years but few in others, manufacturers and component suppliers may have to close production facilities or produce other goods during the years that fewer cars are procured. Therefore, as we have previously mentioned, they may have additional start-up costs when demand recovers, and they increase their rail car production capacity to meet this demand. In contrast, rail car manufacturers said that in other markets (e.g., Europe), there may be a more stable demand, which allows them to maintain more consistent operations and avoid the costs of increasing and decreasing capacity. For example, according to a transit association official who we interviewed, there is a large tramway system in Düsseldorf, Germany, that orders about 15 to 20 cars per year. Although this is not a large order, it is continuous and helps the manufacturer maintain consistent operations. Because of the lengthy rail car procurement and manufacturing process, manufacturers also face financial risks from the volatile nature of the prices of the commodities used to manufacture rail cars; and rail car prices reflect these risks. Officials from one transit agency said that it can take a transit agency up to 8 years from when the decision is made to purchase new cars to when the cars are delivered. However, in the United States, contracts for rail cars are usually negotiated as a fixed price— meaning that manufacturers bid on a price for a set of cars that remains the same, even if certain costs of producing the cars change during the lengthy production. 
Rail car manufacturers estimate future prices of key commodities, such as copper and steel, when submitting a proposal to build rail cars, but they bear the risk that these commodity prices could change in ways they did not expect. Officials from all six of the manufacturers and almost half of the transit agencies we interviewed said that manufacturers face significant risk related to variable prices for commodities such as steel. For example, on the MTA New York City Transit’s current heavy rail car procurement, the recent fluctuation in commodity prices for copper and steel surprised one manufacturer. The manufacturer had locked in a price for the cars in the base contract, so the price fluctuations caused the order to be less profitable. While manufacturers said that they may engage in hedging strategies—such as buying futures contracts on commodities—to mitigate these risks, they also said that adequately hedging these risks can be difficult. Most of the transit agencies we contacted used some type of federal funding, such as the New Starts program, to purchase rail cars for their systems. FTA’s New Starts program provides federal funding for the initial rail car purchases needed to support service on a newly constructed line or extension. Transit agencies can use FTA’s Fixed Guideway Modernization Funds or Section 5307 (Urbanized Area) formula funds to purchase additional or replacement rail cars. More recently, transit agencies and municipalities have used funding from the American Recovery and Reinvestment Act of 2009 (Recovery Act) specifically made available for transit projects or Recovery Act Transportation Investment Generating Economic Recovery (also known as TIGER) grants. For example, the city of Houston received a Recovery Act grant for $87 million to expand its system and purchase additional vehicles. 
Some of the transit agencies we contacted used state or local funds for subsequent rail car purchases, either to replace aging rail cars or to provide additional capacity to their systems. State or local governments fund rail car purchases with local revenues, state grants, or bonds—such as those that are repaid from transit agency revenues or taxes levied on real estate located in special tax districts. For example, according to transit agency officials, over a 10-year period, the MTA Long Island Rail Road and the Metro North Railroad purchased 1,172 rail cars without federal funds. When transit agencies use federal funds, federal procurement requirements apply. Transit agency and FTA officials identified some of the procurement requirements that apply to transit rail car procurements. These requirements center on compliance with “Buy America” legislation and on whether contracts are awarded in a manner that promotes free and open competition. FTA relies primarily on self-certification, but also conducts triennial and periodic procurement reviews to help ensure compliance with these requirements. The “Buy America” requirement specifies that the cost of rail car components manufactured in the United States must be more than 60 percent of the cost of all component parts, and that the rail cars themselves must be assembled in the United States. Under certain circumstances, FTA has the authority to grant waivers to transit agencies, allowing them to purchase rail cars that may not fully meet “Buy America” requirements. Specifically, a waiver can be granted if (1) a product manufactured in the United States was not available, (2) the cost of U.S.-made rail car component parts was prohibitive, or (3) FTA deems a purchase from a foreign manufacturer to be in the best interest of the public. 
For example, FTA granted a waiver for a transit agency to purchase diesel-powered transit cars manufactured in another country because that type of vehicle was not manufactured in the United States. In another case, FTA approved a waiver for a transit agency’s purchase of a prototype rail car made in another country because it would be used for testing the vehicle’s performance. As part of this agreement, the remainder of the cars in the order was assembled in the United States and complied with the 60 percent domestic content requirement, which is computed on the cost of components and subcomponents. According to DOT officials, these waivers are considered on a case-by-case basis. Transit agencies also must comply with other requirements—described in FTA guidance—when they use federal funds to purchase rail cars. For example, Congress has placed a 5-year limit on transit agencies exercising options to purchase additional rail cars under an existing contract. According to FTA officials, this limitation promotes free and open competition because it presents other manufacturers with the opportunity to bid on rail car purchases that would otherwise go to a single company year after year. In addition, to assist transit agencies, FTA has issued a manual—the Best Practices Procurement Manual—that describes various procurement requirements and how they can be met. For example, FTA’s guidance encourages transit agencies to jointly procure rail cars with other transit agencies in order to save money, if possible. However, there are certain limitations and procedures that must be followed. FTA’s manual provides a roadmap on how to conduct joint procurements as well as how options to purchase additional vehicles under one contract can be assigned to another agency. 
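The domestic-content portion of the “Buy America” test described above reduces to a cost comparison: U.S.-made components must account for more than 60 percent of total component cost, and final assembly must occur in the United States. The sketch below illustrates that check; the component costs are hypothetical.

```python
# Hypothetical check of the "Buy America" domestic-content requirement:
# U.S.-made components must exceed 60 percent of the cost of all component
# parts, and the cars must be assembled in the United States.

DOMESTIC_CONTENT_THRESHOLD = 0.60

def meets_buy_america(components, assembled_in_us):
    """components: list of (cost, is_domestic) pairs covering all component parts."""
    total_cost = sum(cost for cost, _ in components)
    domestic_cost = sum(cost for cost, is_domestic in components if is_domestic)
    return assembled_in_us and domestic_cost / total_cost > DOMESTIC_CONTENT_THRESHOLD

# Illustrative order: $7 million of domestic components, $3 million foreign.
components = [(4e6, True), (3e6, True), (3e6, False)]
print(meets_buy_america(components, assembled_in_us=True))  # 70% domestic -> True
```

Note that this sketch omits the treatment of subcomponents, which, as described above, also enter the actual cost computation.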
Finally, for FTA New Starts and major capital projects costing over $100 million, FTA monitors the project’s progress to determine whether a project is on time, within budget, in conformance with design criteria, constructed to approved plans and specifications, and efficiently and effectively implemented. FTA’s review of these projects includes a review of fleet management plans to ensure that transit agencies will be capable of operating and maintaining their rail cars, and that the number of rail cars to be purchased is justified by the anticipated ridership. FTA also reviews the transit agencies’ design specifications for rail cars to help ensure that the specifications are not so narrowly defined that competition would be limited to a single bidder. According to FTA officials, their input into transit rail car design is limited to ensuring that Americans with Disabilities Act of 1990 (ADA) requirements are met and does not include applying uniform design standards for commuter, heavy, or light transit rail cars. The ADA requires that transit agencies make transit systems accessible to persons with disabilities, so transit rail cars must be designed so that a disabled person can board the car without assistance. FRA sets regulations for commuter rail safety and also enforces ADA requirements. The federal government has had a more active role in setting safety standards for some rail cars. Specifically: FRA has established safety standards for commuter passenger rail cars that travel on tracks that also carry freight rail traffic. FRA enlisted APTA’s assistance to help develop safety standards for commuter rail cars and then expanded the effort to establish industry standards and recommended practices for commuter rail car safety. According to FRA and APTA officials, this has led to greater uniformity in the design and production of commuter rail cars. 
This greater uniformity could alleviate some of the market difficulties that we previously discussed resulting from customized designs. These standards and recommended practices do not apply to heavy or light rail transit systems because these transit systems do not share tracks with freight train traffic, which is generally a prerequisite for FRA oversight. While FRA sets safety standards for commuter rail cars, FTA has adopted APTA “industry standards” and recommendations for transit rail car safety into its safety requirements. Consequently, transit agencies are required to follow the industry standards and recommended practices for safety, but, unlike FRA, FTA does not have direct oversight of compliance or enforcement authority. Instead, FTA requires states to set up safety oversight organizations to ensure compliance. However, in December 2009, the Secretary of Transportation proposed legislation that would give FTA the authority to establish and enforce minimum federal safety standards for rail transit systems that received federal transit funding. This authority would provide similar safety oversight for transit rail cars that FRA has for commuter rail cars sharing tracks with freight trains. In February 2010, legislation was introduced that would provide that authority. Many of the transit agency officials we interviewed said that securing funding is one of the main challenges that they face when procuring transit rail cars. As we have previously described, transit agencies may use federal, state, and local funds to purchase or replace cars but must weigh these purchases against other capital and operating needs. When a new line is built, the New Starts program can provide specific funding for the purchase of rail cars. For example, in 2005, FTA awarded the city of Phoenix, Arizona, $57 million to purchase 36 light rail vehicles as part of its full funding grant agreement for the Valley Metro light rail project. 
However, once a line is built, there is no federal program that provides specific funding solely for rehabilitating or replacing transit rail cars. Transit agencies may use other sources of federal funding, such as Fixed Guideway Modernization funds or Section 5307 (Urbanized Area) formula funds, to rehabilitate rail cars but often have many other needs competing for these same funds, including the purchase of new rail cars. Transit agency officials we interviewed also cited several challenges specifically related to using federal funds to purchase transit rail cars. Transit agencies often replace entire fleets or generations of rail cars at one time as the rail cars approach their replacement age—typically, 25 years or more. Transit agencies receive federal funding at a relatively steady level over time, and, therefore, it can be difficult to obtain the amount of funding needed at one time to replace a fleet or generation of rail cars. Transit rail car procurement can take several years. Some transit agency officials told us that they cannot rely on federal funding for these purchases because they do not know how much money they will receive that far into the future. In addition, some transit agency officials told us that a federal requirement intended to encourage competition among manufacturers creates challenges. Specifically, the requirement limits agencies’ ability to exercise options to a 5-year period once a contract is signed if federal funds are used. Although this requirement is in place to ensure that the rail car market is fair and open, transit agency officials report that it is burdensome because, if they decide to procure new cars after the 5th year, they must initiate the procurement process—which is both lengthy and costly—all over again. Some of the transit agencies and manufacturers we interviewed identified specific legal and regulatory factors that pose challenges in the procurement process. 
Transit agency officials identified some federal requirements that impact their rail car procurements. For example, some officials told us that while they support the need for ADA requirements, these requirements can be costly to implement. Officials from one agency said that when replacing a fleet, the agency needs to buy extra rail cars to compensate for the number of seats reduced to meet ADA requirements. Nonetheless, the officials indicated they have recognized the importance of the accessible service they provide and have successfully incorporated ADA requirements into their rail car designs when they have purchased new cars or rehabilitated existing fleets. Likewise, rail car manufacturers have had to adjust operations to meet federal requirements. For example, to meet “Buy America” requirements, which require final assembly in the United States, some manufacturers have decided to build permanent facilities in the United States; others have built temporary facilities in the location where the order is filled. A manufacturer’s decision to build a temporary facility can impact transit agencies if, once the cars are built, manufacturers close the facilities and transit agencies have to buy certain spare parts from overseas or order them from specialty manufacturers. Because of the unique designs of rail cars, the parts may have to be specially made for the individual car design when replacement parts are needed. Some transit agency officials and manufacturers told us that they can also face difficulties when following state or local requirements. For example, a transit agency and a manufacturer said that a state law that requires full disclosure of all information, including potentially proprietary information, in the negotiation process can make it difficult to conduct negotiations and may limit the number of proposals received when purchasing new rail cars. 
Officials from another transit agency said that a state law requiring more than 9 percent sales tax on rail car purchases results in significant costs that other transit agencies do not have to pay. Another factor that affects some transit agencies—particularly new or small agencies—is a lack of experience with the procurement process. Given the 25-year expected lifespan of most rail cars, some transit agency officials may participate in only one or two procurements in an entire career and, therefore, have limited experience and must rely on design consultants. For example, the Port Authority Trans Hudson’s consultant is heavily involved in developing specifications for the current procurement to replace its entire fleet. The last time the agency procured cars was in 1967, and the staff that worked on the procurement are no longer with the agency. Transit agency officials with limited procurement experience may not recognize opportunities for cost savings when specifying their design requirements, and it may not be in the design consultant’s best interest to identify and encourage the use of standard designs. In addition, since many transit agencies procure rail cars in relatively small quantities, these agencies may not be in a position to negotiate for rail car prices in line with those of the larger agencies. Although rail car procurement can be challenging for both manufacturers and transit agencies, industry stakeholders, manufacturers, and transit agencies have identified opportunities to reduce costs through standardization and joint purchases. To a certain extent, increasing the standardization of transit rail cars could benefit transit agencies. First, it would enable manufacturers to produce rail cars for numerous agencies without incurring start-up costs resulting from breaks in the manufacturing process. 
Once there is a break in production, the manufacturer must arrange for rail car components to be delivered from suppliers, and some of these components have long lead times before they can be delivered. Furthermore, time is lost and expenses incurred because manufacturers need to reconfigure or retool their production line before they can begin producing a rail car. Second, standardization can benefit manufacturers and transit agencies by decreasing design costs and may enable manufacturers to take advantage of economies of scale in the manufacturing process by producing more vehicles with similar parts. However, there are arguments against standardization. Specifically, one rail expert stated that adopting a standard design can discourage innovation and inhibit research and development. Also, he reported that a standard design may include features that are unnecessary for all systems and could add to the price of each car. In addition, standardization is not possible for all systems. As we have previously described, many heavy rail systems have unique infrastructure designs. Transit agencies would need to make major infrastructure changes in order to use rail cars that are compatible with other agencies’ cars. According to FTA officials, the cost savings associated with the use of a standard design would not offset the cost of making these system changes. There may be more opportunities to standardize light rail or streetcar systems, particularly in new systems where the infrastructure has not yet been constructed. Although current U.S. transit rail car designs differ substantially among systems, past efforts have attempted to standardize transit car designs. One successful effort was the Presidents’ Conference Committee (PCC) streetcar, which was first built in 1934. 
The committee, which consisted of industry representatives, produced a standardized design that permitted the use of assembly line techniques by multiple manufacturers and allowed for wide variation to meet the needs of various transit agencies. The design was widely accepted, but U.S. manufacturers stopped producing PCC cars in the 1950s. However, a few PCC streetcars are still operating in the United States. For example, the Massachusetts Bay Transportation Authority (MBTA) and the Southeastern Pennsylvania Transportation Authority both have active PCC cars on certain lines. A later effort to create standardized transit rail car designs was less successful. In the 1970s, the Standard Light Rail Vehicle was promoted by the Urban Mass Transportation Administration, which created a committee to develop the car design. A company called Boeing Vertol started building cars of this design in 1973 for MBTA and the San Francisco Municipal Railway, but the cars were prone to problems that led to their early retirement. Industry associations—including APTA and the Institute of Electrical and Electronics Engineers—continue to promote standardization to make transit rail car procurement more cost-efficient. As part of its standards development program, APTA convened two working groups in 2009 to develop (1) technical standards and (2) a set of standard terms and conditions for transit agencies to use when procuring light rail vehicles. These efforts are funded through membership dues and grants from FTA, the Transit Cooperative Research Program, and the Joint Program Office of the Department of Transportation. 
The goal of the first working group is to produce a set of technical standards that transit agencies can use when procuring new light rail vehicles and that FTA could apply to light rail cars, rather than establishing federal requirements. These standards may result in reduced design costs for transit agencies and allow manufacturers to take advantage of economies of scale in the manufacturing process. The goal of the second working group is to develop a set of standard terms and conditions that agencies can use when procuring light rail vehicles. One of the biggest challenges for transit agencies—particularly for agencies with limited procurement experience—is writing a contract that makes it easy to identify its terms, including each party’s financial risks. Currently, each agency addresses risk in its Request for Proposals (RFP) and contracts in a different format, which makes it difficult for manufacturers to identify each party’s risks and may slow down the procurement process. The standard terms and conditions document should (1) reduce ambiguities in procurement documents, (2) allow transit agencies and manufacturers to save time, and (3) reduce the need for consultants. According to APTA officials, they expect to provide a draft of the standards and a draft of the contract terms and conditions guidance to industry stakeholders for comment by late summer 2010. In addition, there appears to be a push for standardizing high-speed intercity passenger rail cars. Specifically, the Passenger Rail Investment and Improvement Act of 2008 (PRIIA) required Amtrak to establish a committee to design, set specifications for, and procure standardized next-generation train corridor equipment—such as high-speed rail. Although this effort does not affect transit rail cars, it could reduce rail car design costs for intercity passenger rail. Some manufacturers have also attempted to increase the standardization of rail cars, while providing flexibility to their clients. 
For example, officials from one manufacturer told us that their company has developed two standard designs that they believe can be customized to meet 80 percent of U.S. transit agencies’ needs for new light rail cars. One is a high-platform car and the other is a low-floor car. The design for a low-floor car can also be used for a streetcar. These basic designs can be customized by changing the components as required—for example, stronger air conditioning systems for vehicles to be used in warm weather climates. Another manufacturer has developed a basic, more affordable design for commuter rail cars that can be customized to meet transit agencies’ needs—for example, a customer can change seating and interior materials, but not the shape of the car. The manufacturer also offers custom designs, but at a higher cost. Manufacturers may have more opportunities for standardization if transit agencies seek bids based on performance specifications that detail agencies’ needs in terms of car performance, as opposed to design specifications that detail how a car should be built. Transit agencies have attempted to decrease costs by jointly procuring transit rail cars or “piggybacking” on another transit agency’s contract to take advantage of economies of scale. A joint procurement means that rail cars are purchased by two or more agencies under the same contract. For example, Miami-Dade Transit jointly purchased heavy rail cars with the Maryland Mass Transit Administration in the 1980s. “Piggybacking” means that one transit agency exercises the options on another transit agency’s contract for rail cars of the same design. For example, the Utah Transit Authority piggybacked on the San Diego Metropolitan Transit System’s contract for light rail vehicles. In joint procurements, all transit agencies must be named in the original contract, and the car designs must not be substantially different. 
In piggybacking, all transit agencies and all potential option quantities must be named in the RFPs and again in the contract. Transit agencies can benefit from both of these options through reduced rail car costs resulting from economies of scale in the production process as well as reduced design costs per car and procurement costs. However, transit agency and FTA officials said that the opportunities for joint procurement and piggybacking are limited by several factors: First, some transit agencies—particularly those with heavy rail systems—have infrastructure that requires specific rail car features that are not common in other systems. For example, officials at the Bay Area Rapid Transit in San Francisco explained that their transit rail cars must be built from aluminum to meet weight restrictions of the infrastructure, whereas most other heavy rail cars in the United States are built from stainless steel. Second, transit agencies may have customized design requirements based on local preferences that limit their opportunities for joint purchases. Officials from one transit agency told us that their riders were accustomed to rail cars with passenger-loading in the center of the car, and, therefore, they included this feature in their design specifications. The transit agency would not be able to jointly purchase vehicles with another agency unless they both had the same basic design. Third, transit agency and FTA officials also told us that it is difficult for transit agencies to coordinate their purchases and have funding available at the same time. Finally, transit agencies are not generally aware of other transit agencies’ procurement plans, and there is no entity to formally help facilitate joint purchases. According to FTA officials, they are aware of two informal mechanisms for discussing the potential for joint procurement—FTA’s semiannual New Starts Construction Roundtable conference and APTA meetings. 
For example, the Agence Métropolitaine de Transport (AMT) in Canada identified an opportunity to purchase commuter cars at a reduced price at an APTA meeting. According to AMT officials, while this was not a joint procurement, they saved money because they had a design similar to the New Jersey Transit commuter cars that were currently under production. FTA, recognizing the financial benefits of joint procurements, piggybacking, and standardized rail cars, has recently looked for ways to encourage these activities. FTA studied the feasibility of creating an incentive system in conjunction with Section 5307 (Urbanized Area) formula grants to encourage and reward transit agencies for taking the lead on joint or piggybacked procurements for buses and rail cars. As part of this study, FTA implemented a pilot program for joint procurement of buses. Three of the five pilot projects did not result in successful joint procurements, but demonstrated some of the difficulties of joint procurement. Specifically, the study found that (1) the incentives provided must be significant, (2) it is not adequate to increase the federal matching portion of existing formula funds, and (3) it is important to maintain continuous production without significant changes to achieve potential savings. FTA did not implement a similar pilot for rail cars as part of this study. As a result of the study, FTA recommended to Congress, in a 2008 report, three alternatives to provide financial incentives and compensation to agencies that jointly procure transit rail cars: 1. FTA would award incentive grants to transit agencies that lead joint procurements to cover a portion of their program management cost. 2. FTA would award additional federal funding on the basis of the percentage of the rail car’s contract cost for transit agencies that participate in a joint procurement. 3. 
FTA would increase the federal match for rail cars purchased according to federally designated standard terms and specifications. According to FTA officials, Congress authorized a pilot program to provide incentives for joint bus procurements in the agency’s annual authorization. Rail transit offers society a number of benefits, including reduced congestion and pollution and increased mobility. The benefits are realized in many cities across the country. However, the relatively small and erratic market for transit rail cars in the United States can hamper transit agencies as they purchase rail cars for commuter, heavy, and light rail transit systems, including streetcars, by increasing the cost and difficulty of procuring transit rail cars. Design specifications that focus on custom designs suited for single-system use have increased the amount of work and related costs needed to design and test these cars. However, efforts are under way to promote standardized design, including APTA’s efforts to develop procurement standards for light rail cars and PRIIA’s requirement for Amtrak to set up a committee to look into designs for high speed rail systems. DOT’s support of these efforts could pay dividends into the future by making rail cars more widely available at a lower cost. In particular, systems built in the future may benefit from increased standardization if they are not limited by existing infrastructure. Joint procurements and piggybacking also have the potential to increase the financial advantages of purchasing large numbers of cars. These advantages typically have been limited to a handful of larger transit agencies, since smaller transit agencies have not purchased a sufficient number of cars to benefit from economies of scale. While FTA’s procurement guidance encourages joint procurement, it has not established a mechanism to assist transit agencies to successfully pool their orders, and transit agencies have reported difficulties in this area. 
Often, transit agencies are not aware of the activities of other agencies in the procurement arena. Without a process for coordinating performance and design standards and a means of encouraging joint procurements, current practices may not substantially change. A more systematic approach to linking agencies with similar infrastructure and rail car needs could identify even more of these opportunities. Since FTA helps fund many procurements, it may be in the best position to help transit agencies identify joint procurement opportunities. To ensure that federal funds are used efficiently when procuring transit rail cars, we recommend that the Secretary of Transportation direct the Administrator of the Federal Transit Administration to, in conjunction with the American Public Transportation Association, take the following two actions: 1. Develop a process to systematically identify and communicate opportunities for transit agencies with similar needs to participate in joint procurements of transit rail cars. 2. Identify additional opportunities for standardization, especially for new systems, such as light rail and streetcar systems. We provided a draft of this report to the Department of Transportation for review and comment. The department provided comments via e-mail, generally concurred with the report, and agreed to consider the recommendations. The department also provided technical comments, which we incorporated in the report as appropriate. We are sending copies of this report to interested congressional committees, the Secretary of Transportation, and other interested parties. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-2834 or wised@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made major contributions to this report are listed in appendix II. To determine the key characteristics of the U.S. market for passenger transit rail cars, we reviewed rail car databases, interviewed transit agency officials, and spoke with officials representing the Federal Transit Administration (FTA), the American Public Transportation Association (APTA), existing and potential transit rail car manufacturers, and transit agency consultants. We used two databases—FTA’s National Transit Database and APTA’s 2009 Public Transportation Vehicle Database (APTA’s transit database)— to determine the number and modes of passenger transit rail cars in the United States. Both data sets describe the rail cars currently owned by transit agencies as well as dates the cars were made and the companies that built them. Both data sets also include information on commuter rail locomotives, although we did not analyze locomotive data for this report because locomotives carry no passengers and differ from other passenger rail cars in terms of technology and the companies that produce them. To assess the reliability of transit rail car inventory data from the National Transit Database and APTA, we interviewed FTA and APTA officials about data quality control procedures and reviewed relevant documentation. We reviewed the data for missing information and any obvious errors. We corrected National Transit Database data for one transit agency, based on information obtained directly from that agency. We determined that the data were sufficiently reliable for the purposes of this report. We selected 23 transit agencies to interview about topics for our three objectives. We conducted site visits at most of these transit agencies and interviewed the rest by telephone. To select them, we used the National Transit Database, which was current for U.S. transit agencies, as of 2008, according to FTA officials. 
We adjusted this database to include additional transit rail service reflected in APTA’s transit database. Thus, we added 1 agency, the Valley Metro Rail of Phoenix, Arizona, that started its first rail service late in 2008 with a light rail line. We also added commuter rail service, started in 2009, by the Tri-County Metropolitan Transportation District of Oregon, an agency that had previously operated light rail transit, for a total of 54 transit agencies. From the list, we judgmentally selected transit agencies on the basis of their size, rail transit modes (commuter rail, heavy rail, and light rail), and geographic distribution. The 23 agencies we contacted collectively managed about 17,600 rail cars (88 percent) of all 19,841 rail cars managed by U.S. transit agencies. These transit agencies represent 42 percent of the 54 transit agencies we identified through the previously mentioned transit databases. However, the results of our work are not generalizable to all transit agencies. As shown in table 2, our sample agencies managed cars that approximate the distribution of rail cars in the U.S. fleet. The 23 transit agencies we contacted, representing 33 types of transit systems, were located in 13 states and the District of Columbia and were distributed across the country, as shown in table 3. In addition, we interviewed two consulting companies working with the transit agencies that we interviewed. One contractor, Louis T. Klauder and Associates, was serving as a car consultant for the Port Authority Trans Hudson at the time of our visit. The other contractor, Virginkar & Associates, Inc., was serving as rail car procurement contractor for the Los Angeles County Metropolitan Transit Authority at the time of our visit. To determine how the U.S. market for transit rail cars compares with international markets for transit rail cars, we reviewed data obtained from SCI Verkehr in Cologne, Germany. 
To assess the reliability of transit rail car inventory data from SCI Verkehr, we interviewed a company official about data quality control procedures. We determined that the data were sufficiently reliable for purposes of this report. Furthermore, with the help of the Department of State, we contacted: domestic and multinational rail car manufacturers; Agence Métropolitaine de Transport—the commuter rail service provider for Montreal, Canada; rail officials from Canada, Japan, New Zealand, and Portugal; the Korean Board of Audit and Inspection; and European railway associations, including the Light Rail Transit Association, headquartered in the United Kingdom, and UNIFE—the Association of the European Rail Industry. To determine the key characteristics of the U.S. market for passenger transit rail cars and to determine how the U.S. market for transit rail cars compares with international markets for transit rail cars, we judgmentally selected six companies that are existing and potential transit rail car builders. Five companies were selected mainly due to their status as either a major producer in the U.S. transit rail car market or a U.S.-based producer. One company was selected due to the relevance of its October 2009 congressional testimony about the U.S. rail car market and rail car design standardization initiatives. These companies were as follows: Kawasaki Rail Car, Inc. United Streetcar/Oregon Iron Works To determine the federal government’s role in funding and setting standards, we reviewed applicable federal law, regulations, guidance, and grants and interviewed FTA officials at headquarters and select regional offices. We also interviewed APTA officials regarding the federal government’s role in setting design standards. 
To identify any challenges that transit agencies face when procuring transit rail cars, we met with transit agencies representing 33 types of transit systems across the country, transit rail car manufacturers, transit agency consultants, and FTA and APTA officials. We conducted this performance audit from September 2009 through June 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We provided a draft of our report to the Department of Transportation and incorporated its comments in the report as appropriate. In addition to the contact named above, Catherine Colwell, Assistant Director; Amy Abramowitz; Richard Calhoon; Sarah Jones; Stephanie Purcell; Amy Rosewarne; Frank Taliaferro; and Crystal Wesco made important contributions to this report.

Rail transit offers society a number of benefits, including reduced congestion and pollution and increased mobility. However, rail systems and cars are costly: Transit agencies can pay more than $3 million per car, often using federal funds. As requested, this report describes (1) characteristics of the U.S. market for transit rail cars, (2) the federal government's role in funding and setting standards for transit rail cars, and (3) challenges transit agencies face when procuring rail cars. GAO analyzed U.S. and worldwide rail car market data for commuter, heavy, and light rail systems and interviewed Department of Transportation (DOT) officials and domestic and international industry stakeholders, including the American Public Transportation Association (APTA). U.S. demand for transit rail cars is limited and erratic and orders tend to be for customized cars. 
Transit rail cars in the U.S. comprise about 5 percent of the worldwide fleet. Transit agencies' purchases vary considerably over time: A large transit agency may replace its entire fleet in 1 year, contributing to a spike in the market, whereas in other years, there may be only a fraction of that demand for the U.S. market. Transit agencies often request custom car designs to address not only legacy infrastructure requirements and interoperability issues with existing fleets, but also preferences. Rail car orders of small size and demand for customized cars can increase the price per car by, for example, concentrating design costs among fewer cars. The federal government provides some funding for transit rail cars and has varying levels of involvement in setting design standards for transit rail cars. More than half of the transit agencies GAO interviewed purchased rail cars with some type of federal funding, such as formula or discretionary capital funds. When transit agencies use federal funds to purchase rail cars, certain requirements apply, such as "Buy America"--which requires, among other things, that rail cars be assembled in the United States. The Federal Transit Administration (FTA) ensures that these requirements are met by overseeing new transit projects and through periodic reviews. The federal government's role in setting design standards for transit cars depends on the type of rail. For commuter rail, the Federal Railroad Administration has established safety standards that must be met, since these cars are intended to run on the same tracks as freight rail traffic. For other rail transit, FTA provided funds to help APTA--the standard-setting industry group--develop voluntary standards, including those for safety. However, the Secretary of Transportation proposed legislation in December 2009, which was introduced in Congress in February 2010, to give FTA more regulatory authority in relation to safety. 
Transit agency officials identified several challenges in procuring rail cars, including securing funding, given all of their competing needs. Manufacturers and transit agencies also face legal and regulatory requirements, such as "Buy America" requirements, but have generally adapted to challenges posed by them. However, market challenges still exist, including the small size of many orders that may affect price. Joint procurements, whereby transit agencies combine orders, can help them increase their order sizes; however, they can only combine orders if a design exists that meets both agencies' needs. While a few transit agencies have become aware of opportunities to jointly procure rail cars through informal mechanisms, such as industry meetings, there is currently no formal mechanism to identify mutually beneficial opportunities for joint procurement. As FTA helps fund many procurements, it may be in the best position to help transit agencies identify joint procurement opportunities. Furthermore, FTA and APTA have efforts under way to standardize light rail cars to make rail car procurement more efficient and cost-effective. Standards also might be beneficial for other types of systems, such as streetcars, particularly for those without existing infrastructure limitations. GAO recommends that the Secretary of Transportation direct FTA to work with APTA to (1) develop a process to systematically identify and communicate opportunities for transit agencies with similar needs to participate in joint procurement and (2) identify additional opportunities for standardization, especially for new systems. DOT reviewed a draft of this report, generally concurred with its contents, and agreed to consider the recommendations. 
OGAC sets overall PEPFAR policy and strategies and coordinates PEPFAR programs and activities, allocating funds to PEPFAR implementing agencies, primarily CDC and USAID. As of fiscal year 2012, these agencies executed PEPFAR program activities through agency headquarters offices and interagency country and regional teams in more than 30 countries and regions with PEPFAR-funded programs. OGAC coordinates the activities of these country teams through its approval of operational plans, which document work plans, budgets, and the anticipated results of HIV/AIDS-related programs. OGAC provides annual guidance on how to develop and submit operational plans. In fiscal years 2009 through 2012, OGAC approved country operational plan budgets totaling over $16 billion. USAID and CDC obligate the majority of PEPFAR funds through grants, cooperative agreements, and contracts with implementing partners, such as U.S.-based nongovernmental organizations (NGO) and partner- country governmental organizations and NGOs. With regard to supply chains for treatment programs, USAID and CDC typically provide assistance in different ways. USAID purchases the majority of ARV drugs and is the primary funder of supply chain services to partner countries through centralized contracts with large international NGOs, for-profit development assistance firms, and NGOs in partner countries. CDC provides funding to purchase ARV drugs and related commodities, including laboratory equipment and test kits, through cooperative agreements. Both USAID and CDC provide technical assistance to partner countries to support supply chain management. Health care drug supply chains involve the following six key elements (see fig. 
1): product selection: selecting drugs based on national treatment guidelines and approval; forecasting and supply planning: estimating the quantity of drugs needed to ensure an uninterrupted supply; procurement: contracting with suppliers to obtain drugs within agreed-upon production and delivery time frames and costs, including freight; warehousing: maintaining appropriate security and environmental conditions (e.g., temperature and humidity); inventory management: monitoring for shortages and waste due to expired products, keeping accurate records of available and anticipated stock, and preparing orders for distribution; and distribution: managing the flow of drugs from the point of production to the end user for consumption (i.e., facilities where the drugs are dispensed to patients). All or most of these six elements involve the following three processes: information management: generating and analyzing the data needed to manage the supply chain from both a cost and service standpoint (for example, gathering consumption and inventory data to determine how much of a drug to order); human resource management: training and supervising staff responsible for placing orders, monitoring stock, and providing drugs to patients, and ensuring that key positions are filled; and quality assurance: ensuring that drugs are approved for use in a partner country, meet certain standards, undergo testing, and are monitored as they move through the supply chain. PEPFAR has taken three key steps to make ARV supply chains for treatment programs more efficient and reliable for all PEPFAR partner countries. First, PEPFAR and USAID have consolidated supply chains for PEPFAR’s ARV drug procurement, enhancing efficiency and reducing operational costs. Second, PEPFAR has improved donor coordination by creating a network to facilitate information sharing and by developing an emergency procurement mechanism. 
Third, PEPFAR has provided partner countries with technical assistance, such as assessment tools and training, to help strengthen their supply chains and manage them more effectively. PEPFAR has consolidated supply chains for ARV drugs across partner countries to increase the efficiency of ARV drug procurement and shipping. In 2005, USAID contracted with the Partnership for Supply Chain Management, a nonprofit consortium of over a dozen organizations, to create the Supply Chain Management System (SCMS) project, which pools procurement across more than 20 partner countries on a voluntary basis. According to USAID, this central procurement system provides those countries having less procurement capacity and smaller markets with the opportunity to benefit from the lower prices and consistent supply associated with bulk purchases. SCMS consolidated forecasts and established long-term supplier contracts to obtain favorable pricing and delivery conditions. SCMS has provided HIV/AIDS treatment supply chain services for PEPFAR-supported programs, procuring and distributing $1.43 billion in HIV/AIDS-related products, including $858 million in ARV drugs, as of March 31, 2013, according to USAID; these products include 74 percent of the ARV drugs purchased using PEPFAR funding in fiscal year 2012. In addition, USAID has taken initial steps to consolidate supply chains for PEPFAR with USAID’s other global health programs to reduce overall management time and operational costs. These steps include reviewing its current supply chains and developing a consolidation proposal. Specifically, USAID announced in June 2012 a plan to offer a new, consolidated contract that is to combine supply chains managed by SCMS with those managed by the USAID DELIVER PROJECT (DELIVER), which works with over 20 national and international organizations to procure non-ARV health care commodities, such as condoms, for PEPFAR and other USAID health programs. 
According to USAID, consolidating these supply chains will help ensure an uninterrupted supply of health commodities for PEPFAR and the other programs, reduce costs to the U.S. government, and mitigate risk through collaborative strategies that will include forecasting, warehousing, and distribution. PEPFAR has also coordinated with other donors to improve those elements of the supply chain that operate within the control of partner countries. In particular, it has developed an information-sharing network and established an emergency procurement mechanism. Information-sharing network. In June 2006, PEPFAR developed an information-sharing network with the Global Fund to Fight AIDS, Tuberculosis and Malaria (Global Fund) and the World Bank called the Coordinated Procurement Planning Initiative. Since then, other key organizations, including OGAC, USAID, the Global Fund, three UN entities, and two NGOs, have used the network to identify and address ARV drug supply chain weaknesses. SCMS facilitates information sharing within the network. The network has taken a number of steps to improve supply forecasting and procurement to support the availability of ARV drugs and other health care commodities needed by HIV/AIDS patients. For example, network members have met quarterly to discuss potential gaps in delivery of ARV drugs and solutions to these gaps, including emergency disbursement of funds to procure drugs and avoid shortages. In addition, the network developed tools that alert key donors to treatment interruption crises and enable members to share best practices regarding potential approaches to address and mitigate these crises. Emergency procurement mechanism. In 2010, PEPFAR established a funding mechanism called the Emergency Commodity Fund, whose primary aim has been supporting emergency purchases of ARV drugs when threats arise to the continuity of patient treatment or critical prevention programs. 
According to USAID, this funding mechanism has been used to assist five countries with emergency ARV procurement when they faced problems with Global Fund grants. PEPFAR country operational plans for fiscal year 2012 reported that six other countries came close to experiencing shortages of ARV drugs and other HIV-related commodities due to Global Fund delays. For example, the Democratic Republic of Congo’s ministry of health requested that the U.S. government ensure a buffer stock of ARV drugs because the Global Fund, which was the source of a majority of the country’s commodities, had become slow in processing grants and was experiencing difficulties forecasting drug supply and keeping ARV drugs in stock. PEPFAR has also provided technical assistance to the Global Fund to improve its procurement system, with the goal of reducing the need for further emergency support from PEPFAR. According to USAID officials, in September 2012, PEPFAR helped the Global Fund develop a proposal for its own emergency procurement mechanism. As of March 2013, the Global Fund had not notified PEPFAR whether it had established this mechanism. As partner countries assume greater responsibility for managing supply chains, PEPFAR is moving from a direct supply role to a more advisory role. PEPFAR’s long-term aim is to develop effective, reliable partner-country-owned and -operated supply chain systems at the national, regional, district, and local levels. At present, PEPFAR’s role in supporting partner-country HIV program supply chains ranges from direct control of major supply chains to providing training and technical assistance. PEPFAR generally relinquishes control of the ARV drugs once they reach a partner country’s central warehouse; from that point on, partner countries are responsible for ensuring that the drugs reach patients.
PEPFAR determines the level and type of supply chain assistance primarily on the basis of each country’s treatment program and supply chain capacity and the state of its HIV epidemic. For example, in some countries, PEPFAR may directly procure almost all ARV drugs and centrally control supply chains through an implementing partner such as SCMS. In other countries, PEPFAR may provide no procurement support for ARV drugs and only training and technical assistance for specific elements of the partner country’s supply chains; these countries carry out all supply chain functions with funding from PEPFAR, other donors, and/or the partner-country government. PEPFAR country teams provide technical assistance to support public and private drug supply chains for medical supplies, including USAID-purchased commodities. This assistance includes tools to assess supply chains and identify any weaknesses, and other types of training and advice. USAID and its implementing partners have developed and begun implementing tools for identifying any needed improvements in partner countries’ supply chains. The following are examples of two such tools:

The Supply Chain Capability Maturity Model is used to identify performance problems by rating each element of a supply chain against best practices. The tool also helps identify and prioritize areas in need of strengthening and provides a method for tracking progress. In 2012, USAID piloted the Supply Chain Capability Maturity Model in three countries—Botswana, Paraguay, and South Africa—and used it to assess their supply chains. In addition, USAID used the tool to recommend targeted solutions in South Africa.

The Supply Logistics Internal Control Evaluation was developed to assess the effectiveness of internal controls to mitigate supply chain risk in each element of the supply chain in countries across sub-Saharan Africa.
This tool generates a series of score cards representing the estimated risk in a system and identifying strengths and weaknesses. Beginning in 2011, this tool was piloted in several countries, including Benin, Mozambique, and Zambia. PEPFAR also provides other training and advice to help partner countries strengthen their supply chains, such as audit checklists to improve supply chain management and training in how to use them, warehousing assistance, and stakeholder coordination to help partner countries identify workable solutions to supply chain problems. For example, in Nigeria, PEPFAR’s fiscal year 2012 plan includes organizing and hosting a workshop with multiple stakeholders to assist the government in identifying private-sector warehouse operators that could operate a new central warehouse with the capacity to serve the country’s treatment program. In all three partner countries we visited, PEPFAR has taken steps through its technical assistance to increase efficiencies by strengthening specific steps in the supply chain process. In South Africa, PEPFAR, through SCMS, helped the government institute procurement reforms that enabled it to cut in half the prices it pays for ARV drugs. Although South Africa was the largest single market for these drugs, it had been consistently paying prices well above the international standard. In Uganda, USAID reports that the country team has helped the Ugandan government and local NGOs successfully implement a simplified distribution network by streamlining the supply chain to create clear lines of responsibility and accountability. Previously, there were more than four ARV drug supply chains serving three types of facilities from three separate warehouses, and some facilities received supplies from more than one supply chain. USAID reported that these overlapping supply chains involved duplication of efforts and led to confusion in ordering and reporting. 
In addition, PEPFAR is supporting the national rollout of a new web-based system for bimonthly ARV reporting and ordering that will enable stakeholders to track the status of ARV supplies in all treatment sites and provide early warning of shortages to authorities at the central level. In Kenya, the country team has helped the national government establish clear and consistent distribution lines from the two central entities that procure ARV drugs and create a system whereby stock can be shifted at the central level to avoid supply gaps. OGAC guidance stresses that effective information management is essential for the efficient operation of ARV drug supply chains. However, 11 of the 16 evaluations of partner-country supply chains that we reviewed identified weaknesses in inventory controls; 7 of these 11 evaluations also cited weaknesses in record keeping, including missing or inaccurate drug consumption data. These weaknesses may increase the risks of shortages, waste, and loss. Human resource constraints contribute to these weaknesses, and PEPFAR is making efforts to address them over the long term. OGAC operational plan guidance calls on PEPFAR country teams to describe plans to assist their respective partner countries in developing effective and sustainable treatment programs. However, this guidance does not specifically require country teams to develop plans to strengthen partner countries’ inventory controls and record keeping, whose weaknesses adversely affect the availability of reliable data on drug consumption, waste, and loss. In addition, because OGAC does not require country teams to monitor partner countries’ progress in measuring ARV drug consumption, waste, and loss, OGAC may not be able to reliably ascertain the extent to which supply chains in partner countries are affected by shortages, waste, and loss and take appropriate action to mitigate these risks.
PEPFAR guidance directs country teams to assess the extent to which partner countries experience shortages of drugs and report steps the teams take to address this problem. OGAC’s Next Generation Indicators Reference Guide recommends that country teams track the percentage of ARV distribution sites that report on inventory consumption, quality, losses, and adjustments on a monthly basis. Several U.S. and partner-country officials and implementing partners we spoke with agreed that drug consumption data, particularly at the health facility level, are essential for ascertaining and meeting demand. Two implementing partners involved in managing health care commodities also noted the importance of record keeping for avoiding wasted or lost drugs. In two of the partner countries we visited, good record keeping, particularly the collection of reliable consumption data, led to increased efficiency. For example, in South Africa, a USAID initiative to monitor the consumption of ARV drugs at storage sites and health facilities reduced shortages of ARV drugs over a 2-year period and enabled health facilities to identify and cancel excess orders. In Kenya, a USAID official responsible for HIV/AIDS programs reported that PEPFAR had improved inventory management and consumption monitoring for PEPFAR- and country-managed supply chains. According to the official, PEPFAR coordinated information sharing at the national level and across major facilities to identify and resolve supply chain issues, virtually eliminating ARV drug shortages. From our review of 16 supply chain evaluations conducted in seven PEPFAR partner countries since 2011, we identified inadequate inventory controls for monitoring drug supply, as well as missing or inaccurate record keeping, as key weaknesses in ARV drug supply chains controlled by partner countries. These weaknesses can increase the risks of drug shortages, waste, and loss.
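The loss figures cited in such evaluations rest on straightforward stock reconciliation: compare the stock that the documentation says should be on hand with a physical count, and treat the shortfall as unaccounted-for loss. A minimal sketch of that arithmetic in Python (the facility figures below are hypothetical illustrations, not data from any evaluation):

```python
# Illustrative stock reconciliation across a set of facilities.
# All dollar figures are hypothetical, not evaluation data.

def loss_rate(records):
    """Return (total_loss_value, loss_pct) for facility records.

    Each record gives the stock value expected from documentation
    (opening stock + receipts - documented issues) and the value
    found by physical count; the shortfall is unaccounted-for loss.
    """
    total_expected = sum(r["expected_value"] for r in records)
    total_counted = sum(r["counted_value"] for r in records)
    loss = total_expected - total_counted
    return loss, 100 * loss / total_expected

facilities = [
    {"expected_value": 1_200_000, "counted_value": 1_090_000},
    {"expected_value": 900_000, "counted_value": 830_000},
    {"expected_value": 850_000, "counted_value": 765_000},
]

loss, pct = loss_rate(facilities)
print(f"Unaccounted-for stock: ${loss:,} ({pct:.1f}% of total)")
```

Where inventory records are reliable, the same reconciliation can be run per drug or per facility to localize where loss is occurring.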
Eleven of the 16 supply chain evaluations we reviewed found that partner countries had inadequate inventory controls to prevent shortages, waste, and loss of ARV drugs. For example, an evaluation of Côte d’Ivoire’s supply chain indicated that inadequate supervision of drug transfers among treatment facilities and other inventory control weaknesses resulted in significant amounts of medication that could not be accounted for and were not available to intended beneficiaries. In another example, an evaluation team in Zambia found a lack of adequate inventory controls at all levels of the drug supply chain. The team was unable to rely on documentation at the facilities and had to conduct a physical count to determine the status of facilities’ drug inventories. The team identified ARV drug losses totaling about $265,000, or nearly 9 percent (by value) of the total ARV drug stock at the facilities the team visited; the loss calculation covered a 15-month period from January 2011 to March 2012. Lacking basic inventory control tools and procedures, these facilities ran the risk of not knowing whether the drugs had been dispensed to patients or distributed to other storage sites, or if they were inventory losses. Seven of these 11 supply chain evaluations we reviewed also indicated that some treatment facilities where drugs are dispensed had missing or inaccurate records. As a result, these facilities had difficulty forecasting their drug needs, ensuring that the drugs ordered would be sufficient to meet demand, or knowing what drugs were lost to theft or inadvertent waste. Three of these evaluations specifically mentioned a lack of accurate consumption data. For example, an evaluation of supply chains in Mozambique indicated that facility-level pharmacies had no monitoring or reconciliation of ARV drugs once they left the storage shelf; thus, there were no controls to ensure that the ARV drugs arrived at the dispensing table or were administered to the patient. 
The Mozambique evaluation team could not reconcile ARV drug consumption data with prescriptions at facility-level pharmacies. The Mozambique evaluation also indicated that procedures for characterizing the unit of drugs dispensed were not always standardized across facilities: some facilities tracked ARV drugs as pills, while others tracked them as bottles. Inconsistent units make it more difficult to collect consumption data and may cause errors in determining the quantity of new drugs to be ordered. An evaluation report on Kenya’s public health supply chain indicated that more than half the treatment sites visited by evaluators did not maintain or update stock cards for ARV drugs. This made it difficult for the facilities to know what new drugs to order and increased the risk of undetected loss or theft. Eight of the 16 country supply chain evaluations we reviewed cited human resource constraints as contributing to weaknesses in inventory controls and record keeping at health facilities, storage sites, or both. For example, heavy workload, inadequate training, and insufficient oversight may have led to poor performance by staff responsible for inventory control and record keeping, and in at least one country, their fear of shortages may have led to hoarding of drugs. Furthermore, poor compensation and working conditions may have led to absenteeism, turnover, and pilferage. Ten of the 35 fiscal year 2012 PEPFAR country and regional operational plans also identify human resource constraints that may contribute to inventory control and record keeping weaknesses, including, for example, a lack of qualified staff and difficulty retaining and motivating such staff because of heavy workloads and low salaries. In Côte d’Ivoire, for example, the PEPFAR country team reported that even trained pharmacists lacked basic skills in and understanding of supply chain management and logistics issues. 
As a result, they were often reluctant to fill out all the paper-based data collection sheets, registers, and other forms that were fundamental tools for tracking drug consumption in that country. Weaknesses in data systems can exacerbate human resource constraints and contribute to unreliable information at some treatment facilities. For example, according to a USAID official in Kenya, some rural facilities do not have electronic data collection systems or even a stable electricity supply and must therefore rely on paper-based systems. Such systems can be labor intensive to maintain and prone to error. In addition, a representative of an implementing partner in South Africa with whom we spoke noted that there can be significant data quality issues in paper records, particularly for facilities that are using paper records while trying to provide treatment to large volumes of patients and as changes in treatment guidelines expand the number of patients eligible for treatment. In at least one instance, according to an evaluation we reviewed, a computerized information system was not well integrated into the record-keeping process, and this resulted in errors. Addressing these errors can add to staff workload. PEPFAR’s ongoing training initiatives and technical assistance efforts have begun to address some of these human resource challenges. For example, the supply chain assessment tools are being used to help identify and address supply chain weaknesses, including inventory control and record-keeping weaknesses, in the countries where they are being piloted.
An SCMS official implementing the Supply Chain Capability Maturity Model in South Africa stated that part of the process is getting staff and managers to understand that the information generated by the tasks they perform is important for managing their supply chain; the official said that this is because the information feeds into or flows from what their colleagues do, and there are consequences (e.g., shortages) if information management tasks are not performed properly. Expanding the application of these supply chain assessment tools to additional countries will take several years, according to OGAC officials. The evaluations and country operational plans we reviewed identify various other training and recruitment efforts to address human resource constraints in supply chain management. Several of the evaluations and operational plans also cite efforts to make data collection and sharing more efficient by enhancing automation or moving to Internet-based information management systems. According to a USAID official, the training and technical assistance under way are long-term efforts, whose results will not begin to be apparent for at least 5 years. Although 11 of the 16 evaluations we reviewed highlight the risks of drug shortages, waste, and loss due to inadequate inventory controls, OGAC has not taken all of the steps in a risk management framework that are important for mitigating such risks. In particular, it has not required country teams to develop a plan for mitigating all of these risks or to track progress in mitigating them. OGAC country operational plan guidance calls on PEPFAR country teams to describe plans to assist their respective countries in developing effective and sustainable treatment programs. OGAC has generally instructed teams to promote the development of national HIV supply plans and strengthen partner countries’ ability to forecast, procure, manage, and distribute HIV-related commodities. 
However, OGAC does not specifically require country teams that support partner-country supply chains to develop and implement plans to strengthen partner countries’ inventory controls and record keeping to reduce the risks of shortages, waste, and loss in ARV drug supply chains. We reviewed country operational plans for the seven countries covered by the evaluations we analyzed and found that most of these documents discussed plans for improving inventory controls and record keeping to help countries reduce the risk of shortages. However, only two mentioned the risks of waste or loss in their discussions of these plans. Without plans that address all of the elements of risk to supply chains, OGAC cannot ensure that country teams are appropriately targeting assistance to avoid shortages, waste, and loss in partner-country supply chains. OGAC’s Next Generation Indicators Reference Guide requires country teams to collect information on progress partner countries are making in developing reliable supply chains. Specifically, the country teams are required to collect information on access to high-quality, low-cost medications generally, and more specifically on the percentage of treatment facilities that experienced ARV drug shortages in the previous 12 months. This guide also recommends that country teams track the percentage of ARV drug distribution sites that report on drug consumption and losses, but this is not a required indicator. According to OGAC, monitoring this information is important but not indispensable to basic program tracking. Six of the seven country operational plans we reviewed provided information on assessing or attempting to address shortages, and five of the seven discussed using ARV consumption data to do so. However, only two discussed using consumption data to help reduce the risk of waste or loss, and none provided information on the percentage of distribution sites that report on drug consumption and losses. 
Monitoring partner countries’ progress in measuring consumption, waste, and loss is vital to basic program tracking, because without data on progress in reducing waste and loss, OGAC cannot fully assess whether partner countries can operate supply chains independently and efficiently. This is increasingly important as partner countries are expected to assume greater responsibility for managing their supply chains. PEPFAR is at a critical juncture as it transitions from directly managing supply chains to primarily providing guidance and advice. PEPFAR has taken steps toward greater integration with partner-country health systems, overall health system strengthening, and greater partner-country responsibility for addressing HIV/AIDS. If PEPFAR can increase efficiency by reducing shortages, waste, and loss, it would be better able to expand treatment to more of the 23 million people in low- and middle-income countries who are living with HIV/AIDS and need treatment or are in at-risk groups eligible for treatment. Because PEPFAR generally relinquishes control of the supply chain once the drugs reach a country’s central warehouse, it is essential that partner-country governments develop the capacity to manage their drug supply chains without excessive risks of shortages, waste, or loss of inventory. PEPFAR has strengthened supply chains in a number of ways and is continuing to take steps to make them more reliable and efficient. In addition, it is important that PEPFAR’s efforts to address partner countries’ human resource constraints are sustained over the long term to show results. However, at some distribution and treatment sites, weaknesses in inventory controls and record keeping limit the ability of some partner-country health systems to track consumption, putting them at risk for shortages, waste, and loss. OGAC does not require PEPFAR country teams to develop plans to address these weaknesses or to monitor progress in reducing these risks.
Without such plans or monitoring, OGAC cannot fully assess partner-country progress toward the goal of self-sufficient supply chain management. To help ensure that drug supply chains in PEPFAR partner countries function efficiently and mitigate the risks of shortages and wasted and lost drugs, we recommend that the Secretary of State direct the U.S. Global AIDS Coordinator to take the following two actions:

require that country teams develop and implement plans for assisting countries to address inadequate inventory controls and record keeping; and

require that country teams track the progress partner countries are making in measuring ARV drug consumption, waste, and loss.

We provided a draft of this report to State, USAID, and CDC. Responding jointly with USAID and CDC, State provided written comments (see app. II for a copy of these comments). State and USAID also provided technical comments, which we incorporated, as appropriate. State agreed with the intent of our first recommendation to improve partner countries’ inventory controls and record keeping for drug supply chain management. State agreed that inventory controls are not optimized in all PEPFAR countries and indicated that it will further assess these controls and focus technical assistance on improving them where they are found lacking. State also noted that PEPFAR has engaged with partner countries in supply chain capacity development through careful assessment of each country’s supply chain context and the degree to which PEPFAR is positioned in the country to support a long-term technical assistance effort.
Noting that PEPFAR operates in many different environments and supports a range of HIV/AIDS activities with diverse sets of stakeholders, State commented that PEPFAR country teams should continue to place supply chain improvement as a high program priority in countries where PEPFAR has a large financial investment in supporting HIV treatment; in countries where PEPFAR’s investment is more limited, State commented that country teams should work with other donors and the partner government to ensure that any supply chain weaknesses or risks, including those related to inventory and record keeping, are addressed. State also commented that there are other, often greater supply chain weaknesses that result in an inadequate supply of ARV drugs, such as delayed Global Fund disbursements or poor procurement planning by partner-country governments. We found that these delays and planning issues are significant challenges and note that PEPFAR is already taking steps to address them through, for example, developing an information-sharing network with other donors to identify and address potential gaps in supply and establishing an emergency procurement mechanism to fill these gaps. However, PEPFAR guidance does not explicitly address the need for plans to improve inventory controls and record keeping to mitigate the risks of waste and loss as ARV drugs move through the supply chain in partner countries to the patients who need them. We believe that such plans are necessary to ensure that efforts to mitigate these risks are systematically implemented and progress in mitigating them is documented. In the draft we sent for comment, we recommended that country teams track the percentage of ARV distribution sites reporting on inventory consumption, waste, and loss. 
State agreed with the intent of this recommendation, but noted some constraints that would make it difficult to implement: in countries where PEPFAR works with a large number of treatment sites, it would be costly to collect data from all of them; and in countries where PEPFAR provides limited support, requiring site-level data collection could be perceived as overly onerous by partner-country governments and at odds with PEPFAR’s efforts to promote country ownership of supply chain management. Nevertheless, tracking the percentage of ARV drug distribution sites reporting on inventory consumption, waste, and loss is an indicator that PEPFAR currently recommends that country teams implement, although it does not require them to do so. State proposed ways to deal with the constraints it identified, such as sampling site data and working with partner countries like South Africa to provide targeted technical assistance where needed. Specifically, State noted that PEPFAR has begun a more systematic investment in health supply chain metrics to identify risks and weaknesses in partner-country supply chains and assess progress in reducing risks and enhancing performance. State further noted that, as PEPFAR reviews and updates its guidance, it will incorporate measures to evaluate the capability of partner-country supply chains to identify risks and assess progress, and that new indicators will include inventory management. In response to State’s comments, we revised our recommendation to reflect that another, more flexible indicator besides the one OGAC had already developed may also be appropriate. 
We believe that requiring country teams to track the progress partner countries are making in measuring ARV drug consumption, waste, and loss in whatever way is most appropriate in those countries would be beneficial in two ways: (1) it would provide a measure of accountability for partner countries as they transition to assuming greater responsibility for managing their supply chains, and (2) it would provide OGAC with flexibility in the differing contexts of PEPFAR involvement in each country. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of State and the U.S. Global AIDS Coordinator and interested congressional committees. The report also will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3149 or gootnickd@gao.gov, or contact Marcia Crosse at (202) 512-7114 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. This is one of three reports responding to a congressional request to review HIV/AIDS treatment programs supported through the President’s Emergency Plan for AIDS Relief (PEPFAR). This report examines (1) actions PEPFAR has taken regarding antiretroviral (ARV) drug supply chains and (2) partner-country ARV drug supply chain operations. To obtain background information and establish a framework for understanding drug supply chains serving PEPFAR, we identified key participants in PEPFAR supply chain management programs by reviewing laws and regulations relating to these programs.
We also identified six supply chain elements and three processes common to effective drug supply chains by analyzing guidance documents on supply chain performance assessment processes, tools, and metrics, and an expert review focused on supply chain best practices. Specifically, we analyzed seven guidance documents obtained from PEPFAR implementing partners that procure drugs and one produced by the World Health Organization in conjunction with the U.S. government and other multilateral organizations. The expert review we also analyzed was a best-practices supply chain study for the U.S. Agency for International Development (USAID). We determined that a supply chain element or process was key if the same, or similar, element or process appeared in at least four of the nine sources we reviewed. To examine actions PEPFAR has taken regarding ARV drug supply chains, as well as to describe partner-country ARV drug supply chain operations, we reviewed reports and guidance issued by PEPFAR and its implementing partners, including all 35 PEPFAR country and regional operational plans and all 22 PEPFAR country partnership frameworks for fiscal year 2012. We also reviewed studies, reports, and assessment tools on the drug supply chains used by PEPFAR-supported treatment programs that were prepared by USAID; the Centers for Disease Control and Prevention (CDC); key implementing partners such as the Supply Chain Management System (SCMS); multilateral agencies such as the Global Fund to Fight AIDS, Tuberculosis and Malaria and the World Health Organization; and PEPFAR partner countries. Furthermore, to examine partner-country supply chain operations, we analyzed selected evaluations relevant to PEPFAR-supported supply chains and synthesized their findings, conclusions, and recommendations.
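The screen used to identify key elements, counting an element as key if it appeared in at least four of the nine sources, is a simple tally. An illustrative sketch (the source contents and counts below are invented placeholders, not the nine documents actually reviewed):

```python
from collections import Counter

# Each source is represented by the set of supply chain elements it
# describes. These five placeholder sources are hypothetical, not
# the guidance documents and expert review GAO analyzed.
sources = [
    {"procurement", "distribution", "warehousing"},
    {"procurement", "forecasting", "quality assurance"},
    {"procurement", "distribution", "inventory management"},
    {"procurement", "distribution", "forecasting"},
    {"procurement", "warehousing"},
]

THRESHOLD = 4  # an element is "key" if named in at least 4 sources

counts = Counter(e for s in sources for e in s)
key_elements = sorted(e for e, n in counts.items() if n >= THRESHOLD)
print(key_elements)  # only "procurement" clears the threshold here
```

The same tally generalizes directly to nine sources and a four-source threshold.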
To select these evaluations, we reviewed documents obtained through site visits, USAID Office of the Inspector General audits, and a database of evaluations compiled for a related GAO engagement. We identified 68 evaluations published from 2008 through 2012 containing some assessment of drug supply chain systems or a selection of components in those systems in countries supported by PEPFAR. We then eliminated all evaluations published before 2011, yielding 16 evaluations. Our final set of 16 evaluations included findings, recommendations, and/or actions taken related to supply chains in these seven countries: China, Côte d’Ivoire, Kenya, Mozambique, South Africa, Uganda, and Zambia. We analyzed each of the 16 evaluations to identify any findings, recommendations, or actions taken related to the six supply chain elements and three processes we had previously identified. We used this analysis to identify any weaknesses within each supply chain function and related actions taken to address them. We also reviewed fiscal year 2012 PEPFAR country operational plans for all 32 countries and three regions that prepared these plans in that year. We compared the actions PEPFAR took to strengthen supply chains with performance metrics and supply chain best practices identified in USAID reports. Specifically, we identified performance metrics, best practices, lessons learned, and supply chain models in relevant guidance, evaluations, and assessment tools.
Furthermore, we compared the actions to five basic guiding principles of risk management: (1) management and personnel identify risks; (2) they analyze risks; (3) after analyzing the risks, they create a plan that identifies different possible courses of action to mitigate the identified risk; (4) when a plan for risk mitigation is approved, management and personnel implement the risk mitigation action plan; and (5) they track risks and mitigation action plan implementation to determine if the plan was successful in mitigating the risk. We also compared the actions with Standards for Internal Control in the Federal Government, which identify risk management as critical to the ability of managers to run organizations and achieve their objectives. We searched the country operational plans for the seven countries covered by the evaluations we reviewed as well as PEPFAR guidance for developing operational plans to determine whether these documents included plans to mitigate the risks identified by our analysis of the evaluations. In addition, we conducted fieldwork in three PEPFAR partner-countries— Kenya, South Africa, and Uganda—in June 2012 to obtain information on drug supply chain operations. We selected these countries based on their relatively large PEPFAR budget and spending allocations, relatively high disease burden estimates, variation in PEPFAR’s role in supply chain management, and other factors, including feasibility of travel. We selected these countries from a list of countries with the largest PEPFAR budgets, including those with greater than $100 million in their annual budgets and greater than $25 million spent on ARV drug procurement between fiscal years 2009 and 2010. We further narrowed our sample by limiting the selection to countries with the highest HIV prevalence rate using April 2012 estimates from the Joint United Nations Programme on HIV/AIDS. By applying these criteria, we obtained a list of four countries. 
We then selected South Africa and Kenya because they had SCMS regional distribution centers, and we selected Uganda because it was the only remaining country that could provide examples of non-SCMS procurement models. We interviewed representatives of SCMS, the contractor that manages the bulk of PEPFAR’s ARV drug procurement, and representatives of PEPFAR implementing agencies, including officials from OGAC, USAID, and CDC in the countries we visited and in Washington, D.C. We also interviewed partner-country government officials in each of the three countries. The results of our fieldwork cannot be generalized to all PEPFAR partner countries but provided insights into various aspects of specific supply chain operations.

We conducted this performance audit from May 2012 to April 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contacts named above, Jim Michels, Assistant Director; Kay Halpern; Katherine Forsyth; Erika Navarro; Chad Davenport; David Dayton; and Steven Putansu made key contributions to this report. In addition, Todd M. Anderson, Brian Hackney, Etana Finkler, Grace Lui, and Jane Whipple provided technical assistance and other support.

President’s Emergency Plan for AIDS Relief: Shift Toward Partner-Country Treatment Programs Will Require Better Information on Results. GAO-13-460. Washington, D.C.: April 12, 2013.
President’s Emergency Plan for AIDS Relief: Per-Patient Costs Have Declined Substantially, but Better Cost Data Would Help Efforts to Expand Treatment. GAO-13-345. Washington, D.C.: March 15, 2013.
Ensuring Drug Quality in Global Health Programs. GAO-12-897R.
Washington, D.C.: August 1, 2012.
Defense Infrastructure: The Navy’s Use of Risk Management at Naval Stations Mayport and Norfolk. GAO-12-710R. Washington, D.C.: July 13, 2012.
President’s Emergency Plan for AIDS Relief: Agencies Can Enhance Evaluation Quality, Planning, and Dissemination. GAO-12-673. Washington, D.C.: May 31, 2012.
President’s Emergency Plan for AIDS Relief: Program Planning and Reporting. GAO-11-785. Washington, D.C.: July 29, 2011.
Global Health: Trends in U.S. Spending for Global HIV/AIDS and Other Health Assistance in Fiscal Years 2001-2008. GAO-11-64. Washington, D.C.: October 8, 2010.
President’s Emergency Plan for AIDS Relief: Efforts to Align Programs with Partner Countries’ HIV/AIDS Strategies and Promote Partner Country Ownership. GAO-10-836. Washington, D.C.: September 20, 2010.
President’s Emergency Plan for AIDS Relief: Partner Selection and Oversight Follow Accepted Practices but Would Benefit from Enhanced Planning and Accountability. GAO-09-666. Washington, D.C.: July 15, 2009.
Global HIV/AIDS: A More Country-Based Approach Could Improve Allocation of PEPFAR Funding. GAO-08-480. Washington, D.C.: April 2, 2008.
Global Health: Global Fund to Fight AIDS, TB and Malaria Has Improved Its Documentation of Funding Decisions but Needs Standardized Oversight Expectations and Assessments. GAO-07-627. Washington, D.C.: May 7, 2007.
Global Health: Spending Requirement Presents Challenges for Allocating Prevention Funding under the President’s Emergency Plan for AIDS Relief. GAO-06-395. Washington, D.C.: April 4, 2006.
Global Health: The Global Fund to Fight AIDS, TB and Malaria Is Responding to Challenges but Needs Better Information and Documentation for Performance-Based Funding. GAO-05-639. Washington, D.C.: June 10, 2005.
Global HIV/AIDS Epidemic: Selection of Antiretroviral Medications Provided under U.S. Emergency Plan Is Limited. GAO-05-133. Washington, D.C.: January 11, 2005.
Global Health: U.S.
AIDS Coordinator Addressing Some Key Challenges to Expanding Treatment, but Others Remain. GAO-04-784. Washington, D.C.: June 12, 2004.
Global Health: Global Fund to Fight AIDS, TB and Malaria Has Advanced in Key Areas, but Difficult Challenges Remain. GAO-03-601. Washington, D.C.: May 7, 2003.

PEPFAR, first authorized in 2003, has supported significant advances in HIV/AIDS prevention, treatment, and care in over 30 countries, including directly supporting treatment for about 5.1 million people; however, millions more people still need treatment. PEPFAR has allocated more than half of its funding to care and treatment and has spent over $1.2 billion to purchase ARV drugs. In addition to supplying ARV drugs directly in some countries, PEPFAR also helps partner countries manage their drug supply chains. GAO was asked to review PEPFAR-supported ARV drug supply chains. GAO examined (1) actions PEPFAR has taken regarding ARV drug supply chains and (2) partner-country ARV drug supply chain operations. GAO reviewed PEPFAR and the U.S. Agency for International Development (USAID) guidance and supply chain studies; analyzed 16 supply chain evaluations conducted in seven countries and published in 2011 and 2012; interviewed officials from OGAC, USAID, and other agencies; and conducted fieldwork in three countries selected on the basis of program size and other factors. The President’s Emergency Plan for AIDS Relief (PEPFAR) has worked with U.S. implementing agencies, international donors, and partner countries to increase the efficiency and reliability of antiretroviral (ARV) drug supply chains. It has done so by improving drug supply planning and procurement as well as in-country distribution of drugs. First, PEPFAR has consolidated supply chains for ARV drug procurement for more than 20 partner countries to enhance efficiency and reduce costs and has begun further consolidation with other U.S. global health programs.
Second, PEPFAR has improved coordination among donors by creating an information-sharing network to help detect and resolve supply gaps and other supply chain weaknesses and by developing an emergency drug procurement mechanism. Third, PEPFAR has provided partner countries with technical assistance, such as assessment tools and training, to help them better manage drug supply planning, procurement, and distribution. Evaluations of partner-country supply chains reflect weaknesses in inventory controls and record keeping, which may increase the risk of drug shortages, waste, and loss. The Department of State's Office of the U.S. Global AIDS Coordinator (OGAC) has issued guidance for PEPFAR emphasizing the importance of effective information management for efficient ARV drug supply chain operations. However, 11 of the 16 supply chain evaluations GAO reviewed cited weaknesses in partner countries' inventory controls; 7 of these 11 evaluations also cited weaknesses in record keeping, including incomplete or inaccurate data on the consumption of ARV drugs. These weaknesses can increase the risks of drug shortages, waste, and loss of inventory. In one country, an evaluation team identified losses valued at about $265,000. Human resource constraints contribute to these weaknesses, and PEPFAR is addressing them through technical assistance and training. However, OGAC does not require PEPFAR interagency teams in each country to develop plans to strengthen inventory controls and record keeping. Nor does OGAC require country teams to track the progress partner countries are making in measuring ARV drug consumption, waste, and loss. Thus, OGAC cannot ascertain the extent of partner-country supply chain weaknesses and take appropriate action to mitigate risks. 
For PEPFAR and partner countries to continue expanding treatment programs to serve up to 23 million eligible people, further improving drug supply chains is critical, particularly the efficiency of elements managed by partner countries. These improvements will become increasingly important as partner countries assume more responsibility for managing supply chains. The Secretary of State should direct OGAC to require country teams to (1) develop and implement plans to help partner countries improve inventory controls and record keeping; and (2) track the progress partner countries are making in measuring ARV drug consumption, waste, and loss. State generally agreed with the intent of both recommendations; GAO revised the second to make it broader and more feasible to implement in differing partner-country contexts.
In 1972, the Congress established the Construction Grants Program to provide grants to help local governments construct wastewater treatment facilities. These federal grants provided most of the funding for these projects, with the remainder provided by the local government constructing the project. In 1987, the Congress began to phase out that program and authorized the creation of SRFs, which provide loans to local governments and others. The states are required to match SRF capitalization grants at a rate of at least one state dollar for every five federal dollars. The states have the option of increasing the amount of SRF funds available to lend by issuing bonds guaranteed by the money in the SRFs. According to a national survey, as of June 30, 1995 (the latest data available), the states collectively had $18.9 billion in their SRF accounts; over one-half of this amount (approximately $11 billion) was provided by federal capitalization grants. (The appendix provides additional information on the nine states’ sources and uses of funds.) For the most part, the Congress gave the states flexibility to develop SRF loan assistance programs that meet their particular needs. However, the states must ensure that the projects funded with loans issued up to the amount of the federal capitalization grants meet two types of federal requirements. The first type includes those contained in the various statutes that apply generally to federal grant programs. These requirements—also called “cross-cutting” authorities—promote national policy goals, such as equal employment opportunity and participation by minority-owned businesses. The second type consists of various provisions applicable to the Construction Grants Program (known as title II requirements because that program was authorized by title II of the Federal Water Pollution Control Act Amendments of 1972). These include compliance with the federal prevailing-wage requirement.
The title II requirements apply only to those projects wholly or partially built before fiscal year 1995 with funds made directly available by federal capitalization grants. The transfer of federal funds to SRFs begins when the Congress appropriates funds annually to EPA. EPA then allots capitalization grants to the individual states, generally according to percentages specified in the Clean Water Act. To receive its allotment, a state has up to 2 years to apply for its capitalization grant. In order to apply, a state must, among other things, propose a list of potential projects to solve water quality problems and receive public comments on that list. After completing the list and receiving its capitalization grant, a state generally has 2 years to receive payments of the grant amount (via increases in its letter of credit). After each such increase, a state has up to 1 year to enter into binding commitments to fund specific projects. Next, a binding commitment is typically converted into a loan agreement. We collected detailed information on the use of revolving funds by nine states with SRF programs—Arizona, Florida, Illinois, Louisiana, Maryland, Missouri, Oregon, Pennsylvania, and Texas. We selected these states because they provide diversity in terms of the size and complexity of their SRF programs and other factors, such as geographic location. However, the conditions in these states are not necessarily representative of the conditions in all 51 SRFs. We used a questionnaire and follow-up discussions to collect information on SRF activities and finances from program officials from the nine states. We also interviewed EPA headquarters and regional officials who are responsible for the SRF program. We did not attempt to independently verify the information collected from EPA or the states. The data cited in this statement are as of the end of the applicable state’s fiscal year or the federal fiscal year, as appropriate. 
In seven of the nine states, the state fiscal year ends on June 30; in Texas, it ends on August 31; and in Florida, it ends on September 30, which is also the end of the federal fiscal year. The overall amount of funds lent by the nine states increased between 1995 and 1996, from $3.3 billion to $4.0 billion. The amount lent by each state also increased. During the same time period, seven states increased their percentage of funds lent, and two states maintained or decreased their percentage of funds lent. All nine states increased the amount of funds they lent between 1995 and 1996. Six states increased their amount by 15 to 29 percent. For example, Pennsylvania increased the amount lent by 17 percent, from $267 million to $311 million. The other three states increased their amount of funds lent by 30 percent or more. The largest change—95 percent—was in Arizona, which increased from $50 million to $99 million. Seven of the nine states increased their percentage of funds lent between 1995 and 1996. Three states increased their percentage by 17 percentage points or more. Four other states increased theirs by 2 to 9 percentage points. Finally, one state’s percentage stayed the same, and another state’s declined by 2 percentage points. Among the nine states, the percentage of funds lent at the end of 1996 ranged from 60 to 99 percent. Specifically, five states lent 80 percent or more of their available funds, another three states lent 70 to 79 percent, and the final state lent 60 percent. Officials in eight of the nine states cited one or more factors at the federal level as affecting the amount and percentage of funds they lent. In seven states, officials said that uncertainty about the reauthorization of the SRF program discouraged some potential borrowers. Also, in seven states, officials cited a concern about compliance with federal requirements, including possible increases in project costs because of a federal prevailing-wage requirement. 
Finally, in three states, officials identified other reasons, such as federal restrictions on the use of SRF funds. Officials in seven of the nine states said that the lack of reauthorization of the Clean Water Act limited their success in lending funds. Among other things, the lack of reauthorization made it difficult to assure the communities applying for loans that SRF funds would be available to finance their projects and created uncertainty among communities about the terms of their loans. Officials from the seven states generally agreed that the amount and timing of federal funding became more uncertain after the SRF program’s authorization expired at the end of September 1994. These officials said that, prior to 1994, they used the amounts in the authorizing legislation to help determine how much money they would have to lend each year. According to these officials, these amounts also helped reassure the communities that federal funding would be available for projects. These officials said that the uncertainty created by the lack of reauthorization made it difficult for states to schedule projects and assure the communities applying for loans that construction money would be available when needed. In addition, Pennsylvania officials said that the lack of reauthorization caused some communities to delay accepting SRF loans because they hoped for more favorable loan terms after the act was reauthorized. Specifically, the Congress has considered a proposal to extend the maximum term for an SRF loan, in certain cases, from 20 years to as much as 40 years and to provide lower interest rates. The state officials said that the communities were interested in both longer repayment periods and lower interest rates. According to a Pennsylvania official, several communities in the state had a loan approved by the state but had not formally accepted the loan. 
In three cases, local officials told us that they were delaying further action pending the act’s reauthorization; the total dollar value of the loans was about $15 million. The Pennsylvania official told us that small, low-income communities in particular would benefit from the proposal to lengthen the repayment period. For example, in March 1995 Pennsylvania approved a $3 million loan for Burrell Township, which has approximately 3,000 people. However, as of October 1996, the community had not accepted the loan on the chance that a reauthorized act would provide for a longer loan term and thus lower annual repayments. Officials in seven of the nine states said that compliance with the federal requirements made financing projects with SRF funds less attractive and, in some cases, caused communities to turn down SRF loans. In particular, five states raised concerns that a federal prevailing-wage requirement could make SRF-financed projects more expensive to construct than projects constructed with other funds. While the title II requirements—which include the federal prevailing-wage requirement—ceased to apply to new projects after October 1, 1994, state officials said they were concerned that these requirements would be reinstated in the reauthorization act. For example, an Arizona official said that the prevailing-wage requirement could inflate a project’s costs from 5 to 25 percent. A Louisiana official said that the community of East Baton Rouge Parish withdrew its 1990 SRF loan application for a project to serve about 120,000 people when it discovered that the prevailing-wage requirement would increase the labor cost of the project by more than $1.1 million—31 percent. Louisiana officials said that before the prevailing-wage requirement expired, the state had experienced difficulties in making loans largely because local officials perceived the requirement as increasing project costs. 
The officials said that Louisiana’s lending rate increased in part because the wage requirement expired. The state’s lending rate was 44 percent at the end of 1994, before the requirement expired; 62 percent at the end of 1995; and 79 percent at the end of 1996. EPA officials said they were aware that many states had a concern about the prevailing-wage requirement. They noted, however, that the requirement expired at the end of September 1994 and that the continued application of the requirement would be a state’s management decision. They also noted that, even before the requirement expired, it applied only to projects funded with federal capitalization grants (as opposed to projects funded solely with state matching or borrowed funds, for example). Also, they noted that some states have chosen to continue requiring projects to comply with the requirement, even though they are no longer required to do so; however, they said, both Arizona and Louisiana no longer apply the requirement to projects they fund. Officials from three states identified other factors at the federal level that constrained lending. These included the awarding of federal funds directly for selected communities and federal restrictions on the use of SRF funds. Maryland and Pennsylvania officials said that the earmarking of federal funds—not from the SRF program—for specific communities raised the expectation in other communities that if they waited long enough, they might also receive funds directly. This expectation reduced these communities’ incentive to apply for an SRF loan. For example, a Maryland official said that state SRF lending was limited by a congressional decision to provide federal funds directly for a project in Baltimore, which SRF officials had expected to finance. He said that the City of Baltimore turned down the SRF loan because it received $80 million in federal grant funds for the project in 1993 and 1994. 
The state official said that it took time to find other communities to borrow the money that was originally set aside for the Baltimore project. The state increased its percentage of funds lent from 61 percent at the end of 1995 to 70 percent at the end of 1996. Officials from Missouri said that certain federal restrictions on the use of SRF funds limit the amount of loans they can make. For example, a state official cited restrictions on financing the costs of acquiring land. Under the Clean Water Act, SRF loans cannot be made to purchase land unless the land itself is an integral part of the waste treatment processes. Thus, wetlands used to filter wastewater as part of the treatment process are an eligible expense under the act. However, other lands, such as the land upon which a treatment plant would be built, are not eligible. According to the official, because purchasing land for a wastewater treatment facility represents a large portion of the facility’s cost but is ineligible for SRF financing, some communities are discouraged from seeking SRF loans. In Pennsylvania and Arizona, the amount of funds lent was limited by decisions on how to manage the loan fund. These decisions related to how to use SRF funds in Pennsylvania and how to publicize the program in Arizona. Pennsylvania established a state-funded program, independent of the SRF, in March 1988 to help communities finance wastewater and other projects. In the early years of the SRF program, Pennsylvania officials decided to finance about $248 million in wastewater projects with these state funds rather than wait for SRF funding to become available, according to state officials. According to these officials, the state decided to fund these projects as soon as possible with state funds to reduce public health risks. 
For example, about $30 million was awarded to the City of Johnstown to upgrade an existing treatment plant and thereby prevent raw sewage overflows and inadequately treated wastewater from being discharged into surface waters. According to a state official, Pennsylvania’s percentage of funds lent would have been higher if the state had chosen to fund the $248 million in projects with SRF funds. In that case, he said, Pennsylvania’s total amount of funds lent through the end of 1996 would have been $558 million, instead of $310 million, and the state would have lent all available funds, instead of 60 percent of those funds. Likewise, in Arizona, state decisions limited the amount of funds lent. According to a state official, efforts to inform local government officials about the SRF program and interest them in participating were not effective in the program’s early years. This difficulty was compounded by restrictive provisions of state law that further limited the amount of SRF funds lent. The state official said that the outreach effort was refocused in 1995. He also noted that the approval of changes in state laws in 1995 and 1996 helped create a more positive atmosphere for outreach, even before the changes took effect. Arizona’s percentage of funds lent was 55 percent at the end of 1995 and 81 percent at the end of 1996.

Under the Clean Water State Revolving Fund (SRF) Program, the states use funds from six primary sources to make loans for wastewater treatment and related projects. These are federal capitalization grants, state matching funds, borrowed funds, unused funds from the Construction Grants Program, repayments of loans, and earnings on invested funds. All nine states received federal grants and provided state matching funds. These two sources generally accounted for most of the money in the nine states’ revolving funds. Four of the nine states borrowed money for their revolving funds. Five states transferred unused funds from the old Construction Grants Program.
All nine states received some loan repayments. Finally, eight states had investment earnings on loan repayments. Table I.1 shows the amount and sources of funding for the nine states we reviewed through each state’s fiscal year 1996. To determine the percentage of funds lent by each state as of the end of 1995 and 1996, we divided the total amount of funds lent by the total funds available to lend, as defined above, both as of the end of the year. This method was based on the approach used by the Ohio Water Development Authority in conducting annual SRF surveys during 1992 through 1995. Table I.2 shows the amount and percentage of funds lent for the nine states for each state’s fiscal year 1995 and 1996.
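The percentage-of-funds-lent figure used throughout this statement is a simple ratio of amount lent to funds available. A minimal sketch of the calculation follows; the $517 million figure is not taken from Table I.2 but is back-derived from Pennsylvania's reported $310 million lent at a 60 percent lending rate.

```python
def percent_lent(amount_lent, funds_available):
    """Percentage of available funds lent, rounded to the whole
    percent reported in this statement."""
    return round(amount_lent / funds_available * 100)

# Illustrative: a state that lent $310 million of roughly $517 million
# in available funds would report 60 percent of funds lent.
print(percent_lent(310_000_000, 517_000_000))  # → 60
```

Note that changes in this ratio (reported above in percentage points) are distinct from changes in the dollar amount lent (reported in percent).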
GAO discussed selected states' experience with the Environmental Protection Agency's (EPA) Clean Water State Revolving Fund program, focusing on: (1) the amount of funds lent and the percentage of available funds lent, as of the end of each state's fiscal year 1996; and (2) information on factors at the federal and state levels that constrained the amount and percentage of funds lent. GAO noted that: (1) the nine states increased the total amount of funds they lent from $3.3 billion in 1995 to $4.0 billion in 1996; (2) all nine states increased the amount they lent by 15 percent or more, and three states achieved increases of 30 percent or more; (3) in addition, seven of the nine states increased the percentage of available funds they lent; (4) of these seven, three states increased this proportion by 17 percentage points or more; (5) nevertheless, the percentage of funds lent as of the end of 1996 varied substantially among the nine states; (6) specifically, five states had lent 80 percent or more of their available funds, three states had lent between 70 and 79 percent, and one state had lent 60 percent; (7) in eight of the nine states, officials identified the expiration of the authorizing legislation, as well as federal requirements, as affecting the amount and percentage of funds lent; (8) for example, officials in seven states said the legislation's expiration created uncertainty about the loan conditions that might apply in the future and caused some communities to postpone seeking or accepting loans; (9) also, officials in seven states said that other federal requirements, such as a prevailing-wage provision, discouraged some communities from seeking loans; and (10) finally, in two states, officials said that state program decisions constrained lending.
For decades, fingerprint analysis has been the most widely used biometric technology for positively identifying arrestees and linking them with any previous criminal record. In 2010, the FBI began incrementally replacing the Integrated Automated Fingerprint Identification System (IAFIS) with Next Generation Identification (NGI) at an estimated cost of $1.2 billion. NGI was not only to include fingerprint data from IAFIS and biographic data, but also to provide new functionality and improve existing capabilities by incorporating advancements in biometrics, such as face recognition technology. As part of the fourth of six NGI increments, the FBI updated the Interstate Photo System (IPS) to provide a face recognition service that allows law enforcement agencies to search a database of about 30 million photos to support criminal investigations. NGI-IPS users include the FBI and selected state and local law enforcement agencies, which can submit search requests to help identify an unknown person using, for example, a photo from a surveillance camera. When a state or local agency submits such a photo, NGI-IPS uses an automated process to return a list of 2 to 50 possible candidate photos from the database, depending on the user’s specification. Figure 1 describes the process for a search requested by state or local law enforcement. In addition to the NGI-IPS, the FBI has an internal unit called Facial Analysis, Comparison and Evaluation (FACE) Services that provides face recognition capabilities, among other things, to support active FBI investigations. FACE Services not only has access to NGI-IPS, but can search or request to search databases owned by the Departments of State and Defense and 16 states, which use their own face recognition systems. Figure 2 shows which states partnered with the FBI for FACE Services requests, as of August 2016.
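The FBI's actual matching algorithm is not described in this statement; the sketch below only illustrates the candidate-list step, in which NGI-IPS returns a user-specified number of gallery photos (2 to 50) ranked by similarity to the probe photo. The photo identifiers and scores here are hypothetical, and the scoring backend itself is not modeled.

```python
def candidate_list(similarity_scores, list_size):
    """Return the top-ranked gallery photo IDs for one probe photo.

    similarity_scores: hypothetical {photo_id: score} mapping produced
    by a face-matching backend (not modeled here).
    list_size: the requester's candidate-list size, 2 to 50 in NGI-IPS.
    """
    if not 2 <= list_size <= 50:
        raise ValueError("NGI-IPS candidate lists hold 2 to 50 photos")
    # Rank gallery photos by descending similarity score.
    ranked = sorted(similarity_scores.items(), key=lambda kv: kv[1],
                    reverse=True)
    return [photo_id for photo_id, _ in ranked[:list_size]]

# Hypothetical scores for four gallery photos against one probe photo:
scores = {"photo_a": 0.91, "photo_b": 0.47, "photo_c": 0.88, "photo_d": 0.12}
print(candidate_list(scores, 2))  # → ['photo_a', 'photo_c']
```

Candidates returned this way are investigative leads, not positive identifications; as described below, FACE Services biometric images specialists further winnow external-system candidates manually before returning at most one or two photos.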
Unlike NGI-IPS, which primarily contains criminal photos, these external systems primarily contain civil photos from state and federal government databases, such as driver’s license photos and visa applicant photos. The total number of face photos available in all searchable repositories for FACE Services is over 411 million, and the FBI is interested in adding additional federal and state face recognition systems to its search capabilities. Biometric images specialists for FACE Services manually review candidate photos from their external partners before returning at most the top 1 or 2 photos as investigative leads to the requesting FBI agents. However, according to FACE Services officials, if biometric images specialists determine that none of the databases returned a likely match, they do not return any photos to the agents. Federal agency collection and use of personal information, including face images, is governed primarily by two laws: the Privacy Act of 1974 and the privacy provisions of the E-Government Act of 2002. The Privacy Act places limitations on agencies’ collection, disclosure, and use of personal information maintained in systems of records. The Privacy Act requires that when agencies establish or make changes to a system of records, they must notify the public through a system of records notice (SORN) in the Federal Register. According to the Office of Management and Budget (OMB) guidance, the purposes of the notice are to inform the public of the existence of systems of records; the kinds of information maintained; the kinds of individuals on whom information is maintained; the purposes for which they are used; and how individuals can exercise their rights under the Privacy Act. The E-Government Act of 2002 requires that agencies conduct Privacy Impact Assessments (PIAs) before developing or procuring information technology (or initiating a new collection of information) that collects, maintains, or disseminates personal information. 
The assessment helps agencies examine the risks and effects on individual privacy and evaluate protections and alternative processes for handling information to mitigate potential privacy risks. OMB guidance also requires agencies to perform and update PIAs as necessary where a system change creates new privacy risks, for example, when the adoption or alteration of business processes results in personal information in government databases being merged, centralized, matched with other databases, or otherwise significantly manipulated. Within the Department of Justice (DOJ), preserving civil liberties and protecting privacy is a responsibility shared by department-level offices and components. As such, DOJ and the FBI have established oversight structures to help protect privacy and oversee compliance with statutory requirements. For example, while the FBI drafts privacy documentation for its face recognition capabilities, DOJ offices review and approve key documents developed by the FBI—such as PIAs and SORNs. However, the FBI did not update the NGI-IPS PIA in a timely manner when the system underwent significant changes and did not develop and publish a PIA for FACE Services before that unit began supporting FBI agents. Additionally, DOJ did not publish a SORN that addresses the collection and maintenance of photos accessed and used through the FBI’s face recognition capabilities until after our 2016 review. Consistent with the E-Government Act and OMB guidance, DOJ developed guidance that requires initial PIAs to be completed at the beginning of development of information systems and any time there is a significant change to the information system in order to determine whether there are any resulting privacy issues. DOJ published a PIA at the beginning of the development of NGI-IPS in 2008, as required. 
However, the FBI did not publish a new PIA or update the 2008 PIA before beginning to pilot NGI-IPS in December 2011 or as significant changes were made to the system through September 2015. During that time, the FBI used NGI-IPS to conduct over 20,000 searches to assist in investigations throughout the pilot. Similarly, DOJ did not approve a PIA for FACE Services when it began supporting investigations in August 2011. As a new use of information technology involving the handling of personal information, it, too, required a PIA. Figure 3 provides key dates in the implementation of these face recognition capabilities and the associated privacy notices. During the course of our review, DOJ approved the NGI-IPS PIA in September 2015 and the FACE Services PIA in May 2015—over three years after the NGI-IPS pilot began and FACE Services began supporting FBI agents with face recognition services. DOJ and FBI officials stated that these PIAs reflect the current operation of NGI-IPS and FACE Services. However, as the internal drafts of these PIAs were updated, the public remained unaware of the department’s consideration of privacy throughout the development of NGI-IPS and FACE Services, because the updates were not published, as required. Specifically, delays in the development and publishing of up-to-date PIAs for NGI-IPS and FACE Services limited the public’s knowledge of how the FBI uses personal information in the face recognition search process. Additionally, DOJ did not publish a SORN, as required by the Privacy Act, that addresses the collection and maintenance of photos accessed and used through the FBI’s face recognition capabilities until May 5, 2016—after completion of our review. At that time, the FBI published a new SORN reporting that the Fingerprint Identification Records System had been modified and renamed the Next Generation Identification (NGI) System. 
However, according to OMB guidance then in effect, the SORN must appear in the Federal Register before the agency begins to operate the system, e.g., collect and use the information. While the new SORN addresses face recognition, those capabilities have been in place since 2011. Throughout this period, the agency collected and maintained personal information for these capabilities without the required explanation of what information it is collecting or how it is used. Completing and publishing SORNs in a timely manner is critical to providing transparency to the public about the personal information agencies plan to collect and how they plan to use the information. In our May 2016 report, we made two recommendations to DOJ regarding its processes to develop privacy documentation, and DOJ officials disagreed with both. We recommended that DOJ assess the PIA development process to determine why PIAs were not published prior to using or updating face recognition capabilities. DOJ officials did not concur with this recommendation, stating that the FBI has established practices that protect privacy and civil liberties beyond the requirements of the law. Further, DOJ stated that it developed PIAs for both FACE Services and NGI-IPS, as well as other privacy documentation, throughout the development of these capabilities that reflect privacy choices made during their implementation. For example, DOJ officials stated that the department revised the FACE Services PIA as decisions were made. We agree that, during the course of our review, DOJ published PIAs for both FACE Services and NGI-IPS. However, as noted in the report, according to the E-Government Act and OMB and DOJ guidance, PIAs are to be performed before developing or procuring technologies and upon significant system changes. 
Further, DOJ guidance states that PIAs give the public notice of the department’s consideration of privacy from the beginning stages of a system’s development throughout the system’s life cycle and ensure that privacy protections are built into the system from the start, not after the fact, when they can be far more costly or could affect the viability of the project. In its response to our draft report, DOJ stated that it will internally evaluate the PIA process as part of the department’s overall commitment to improving its processes, not in response to our recommendation. In March 2017, we followed up with DOJ to obtain its current position on our recommendation. DOJ continues to believe that its approach in designing the NGI system was sufficient to meet legal privacy requirements and that our recommendation represents a “checkbox approach” to privacy. We disagree with DOJ’s characterization of our recommendation. We continue to believe that the timely development and publishing of future PIAs would increase the transparency of the department’s systems. We recognize the steps the agency took to consider privacy protection during the development of the NGI system. We also stand by our position that notifying the public of these actions is important and provides the public with greater assurance that DOJ components are evaluating risks to privacy when implementing systems. We also recommended that DOJ develop a process to determine why a SORN was not published for the FBI’s face recognition capabilities prior to using NGI-IPS, and implement corrective actions to ensure SORNs are published before systems become operational. DOJ agreed, in part, with our recommendation and submitted the SORN for publication after we provided our draft report for comment. However, DOJ did not agree that the publication of a SORN is required by law. We disagree with DOJ’s interpretation regarding the legal requirements of a SORN. 
The Privacy Act of 1974 requires that when agencies establish or make changes to a system of records, they must notify the public through a SORN published in the Federal Register. DOJ’s comments on our draft report acknowledge that the automated nature of face recognition technology and the sheer number of photos now available for searching raise important privacy and civil liberties considerations. DOJ officials also stated that the FBI’s face recognition capabilities do not represent new collection, use, or sharing of personal information. We disagree. We believe that the ability to perform automated searches of millions of photos is fundamentally different in nature and scope from manual review of individual photos, and that the potential impact on privacy is just as fundamentally different. By assessing the SORN development process and taking corrective actions to ensure timely development of future SORNs, DOJ would give the public a better understanding of how personal information is being used and protected by its components. The FBI’s Criminal Justice Information Services (CJIS) Division, which operates the FBI’s face recognition capabilities, has an audit program to evaluate compliance with restrictions on access to CJIS systems and information by its users, such as the use of fingerprint records. However, at the time of our review, it had not completed audits of the use of NGI-IPS or of FACE Services searches of external databases. State and local users have been accessing NGI-IPS since December 2011 and have generated IPS transaction records since then that would enable CJIS to assess user compliance. In addition, the FACE Services Unit has used external databases that include primarily civil photos to support FBI investigations since August 2011, but the FBI had not audited its use of these databases. 
Standards for Internal Control in the Federal Government call for federal agencies to design and implement control activities to enforce management’s directives and to monitor the effectiveness of those controls. In 2016, we recommended that the FBI conduct audits to determine the extent to which users of NGI-IPS and biometric images specialists in FACE Services are conducting face image searches in accordance with CJIS policy requirements. DOJ partially concurred with our recommendation. Specifically, DOJ concurred with the portion of our recommendation related to the use of NGI-IPS. DOJ officials stated that the FBI specified policy requirements against which it could audit NGI-IPS users in late 2014, completed a draft audit plan during the course of our review in summer 2015, and expected to begin auditing use of NGI-IPS in fiscal year 2016. As of March 2017, DOJ reported that the CJIS Audit Unit had begun assessing NGI-IPS requirements at participating states in conjunction with its triennial National Identity Services audit and that, as of February 2017, the unit had conducted NGI-IPS audits of four states. At the time we issued our 2016 report, DOJ officials did not fully comment on the portion of our recommendation that the FBI audit the use of external databases, because FBI officials said the FBI does not have authority to audit these systems. As noted in the report, we understand the FBI may not have authority to audit the maintenance or operation of databases owned and managed by other agencies. However, the FBI does have a responsibility to oversee the use of that information by its own employees. As a result, our recommendation focuses on auditing both NGI-IPS users (such as states and FACE Services employees) and FACE Services employees’ use of information received from external databases—not on auditing the external databases themselves. 
We continue to believe that the FBI should audit biometric images specialists’ use of information received from external databases to ensure compliance with FBI privacy policies and to ensure images are not disseminated for unauthorized purposes or to unauthorized recipients. In March 2017, DOJ provided us with the audit plan the CJIS Audit Unit developed in June 2016 for NGI-IPS users. DOJ officials said CJIS developed an audit plan for the FACE Services Unit to coincide with the existing triennial FBI internal audit for 2018. However, DOJ did not provide the audit plan for the FACE Services Unit. DOJ officials said the methodology would be the same as the audit plan for NGI-IPS, but that methodology does not describe oversight of the use of information obtained from external systems accessed by FACE Services employees. Therefore, we believe DOJ is making progress toward meeting, but has not fully implemented, our recommendation. In May 2016, we reported that prior to accepting and deploying NGI-IPS, the FBI conducted testing to evaluate how accurately face recognition searches returned matches to persons in the database. However, the tests were limited because they did not include all possible candidate list sizes and did not specify how often incorrect matches were returned. According to the National Science and Technology Council and the National Institute of Standards and Technology, the detection rate (how often the technology generates a match when the person is in the database) and the false positive rate (how often the technology incorrectly generates a match to a person in the database) are both necessary to assess the accuracy of a face recognition system. The FBI’s detection rate requirement for face recognition searches states that when the person exists in the database, NGI-IPS shall return a match of this person at least 85 percent of the time (the detection rate). However, the FBI only tested this requirement with a candidate list of 50 potential matches. 
In these tests, according to FBI documentation, 86 percent of the time a match to a person in the database was correctly returned. Further, FBI officials stated that they have not assessed how often NGI-IPS face recognition searches erroneously match a person to the database (the false positive rate). As a result, we recommended that the FBI conduct tests of NGI-IPS to verify that the system is sufficiently accurate for all allowable candidate list sizes and ensure that both the detection rate and the false positive rate are identified for such tests. With the recommended testing, the FBI would have more reasonable assurance that NGI-IPS provides investigative leads that help enhance, rather than hinder or overly burden, criminal investigation work. If false positives are returned at a higher than acceptable rate, law enforcement users may waste time and resources pursuing unnecessary investigative leads. In addition, the FBI would help ensure that it is sufficiently protecting the privacy and civil liberties of U.S. citizens enrolled in the database. Specifically, according to a July 2012 Electronic Frontier Foundation hearing statement, false positives can alter the traditional presumption of innocence in criminal cases by placing more of a burden on the defendant to show he is not who the system identifies him to be. The Electronic Frontier Foundation argues that this is true even if a face recognition system such as NGI-IPS provides several matches instead of one, because each of the potentially innocent individuals identified could be brought in for questioning. In comments on our draft report in 2016, and again during recommendation follow-up in March 2017, DOJ did not concur with this recommendation. DOJ officials stated that the FBI has performed accuracy testing to validate that the system meets the requirements for the detection rate, which fully satisfies requirements for the investigative lead service provided by NGI-IPS. We disagree with DOJ. 
A key focus of our recommendation is the need to ensure that NGI-IPS is sufficiently accurate for all allowable candidate list sizes. Although the FBI has tested the detection rate for a candidate list of 50 photos, NGI-IPS users are able to request smaller candidate lists, specifically between 2 and 50 photos. FBI officials stated that they do not know, and have not tested, the detection rate for other candidate list sizes. According to these officials, a smaller candidate list would likely lower the detection rate because a smaller candidate list may not contain a likely match that would be present in a larger candidate list. However, according to the FBI Information Technology Life Cycle Management Directive, testing needs to confirm the system meets all user requirements. Because the accuracy of NGI-IPS’s face recognition searches when returning fewer than 50 photos in a candidate list is unknown, the FBI is limited in understanding whether the results are accurate enough to meet NGI-IPS users’ needs. DOJ officials also stated that searches of NGI-IPS produce a gallery of likely candidates to be used as investigative leads, not for positive identification. As a result, according to DOJ officials, NGI-IPS cannot produce false positives and there is no false positive rate for the system. We disagree with DOJ. The detection rate and the false positive rate are both necessary to assess the accuracy of a face recognition system. Generally, face recognition systems can be configured to allow for a greater or lesser number of matches. A greater number of matches would generally increase the detection rate, but would also increase the false positive rate. Similarly, a lesser number of matches would decrease the false positive rate, but would also decrease the detection rate. Reporting a detection rate of 86 percent without reporting the accompanying false positive rate presents an incomplete view of the system’s accuracy. 
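The relationship between the two accuracy measures discussed above can be illustrated with a minimal sketch. This is not FBI test code or data; the function name and the labeled search results are hypothetical, and the 86 percent detection figure is used only to echo the report's example—the false positive figure here is invented for illustration.

```python
def accuracy_metrics(searches):
    """Compute detection and false positive rates from labeled search results.

    Each search is a dict with:
      in_database: whether the probe person is actually enrolled
      returned:    whether the system returned a match for that probe
    """
    mated = [s for s in searches if s["in_database"]]        # person is enrolled
    nonmated = [s for s in searches if not s["in_database"]] # person is not enrolled
    detection_rate = sum(s["returned"] for s in mated) / len(mated)
    false_positive_rate = sum(s["returned"] for s in nonmated) / len(nonmated)
    return detection_rate, false_positive_rate

# Illustrative data: an 86% detection rate says nothing about how often
# the system matches people who are not in the database at all.
searches = (
    [{"in_database": True, "returned": True}] * 86
    + [{"in_database": True, "returned": False}] * 14
    + [{"in_database": False, "returned": True}] * 5
    + [{"in_database": False, "returned": False}] * 95
)
dr, fpr = accuracy_metrics(searches)
print(dr, fpr)  # → 0.86 0.05
```

The point of the sketch is the report's argument in miniature: the same detection rate is consistent with many different false positive rates, so reporting only the former leaves the system's accuracy underspecified.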
FBI, DOJ, and OMB guidance all require annual reviews of operational information technology systems to assess their ability to continue to meet cost and performance goals. For example, the FBI’s Information Technology Life Cycle Management Directive requires an annual operational review to ensure that the fielded system is continuing to support its intended mission, among other things. In 2016, we reported that the FBI had not assessed the accuracy of face recognition searches of NGI-IPS in its operational setting—the setting in which enrolled photos, rather than a test database of photos, are used to conduct a search for investigative leads. According to FBI officials, the database of photos used in its tests is representative of the photos in NGI-IPS, and ongoing testing in a simulated environment is adequate. However, according to the National Institute of Standards and Technology, as the size of a photo database increases, the accuracy of face recognition searches performed on that database can decrease due to lookalike faces. The FBI’s test database contains 926,000 photos, while NGI-IPS contains about 30 million photos. As a result, we recommended that the FBI conduct an operational review of NGI-IPS at least annually that includes an assessment of the accuracy of face recognition searches to determine if it is meeting federal, state, and local law enforcement needs and take actions, as necessary, to improve the system. In 2016, DOJ concurred with this recommendation. As of March 2017, FBI officials stated they had implemented the recommendation by submitting a paper to solicit feedback from users through the Fall 2016 Advisory Policy Board process. Specifically, officials said the paper requested feedback on whether the face recognition searches of the NGI-IPS are meeting users’ needs, and input regarding search accuracy. According to FBI officials, no users expressed concern with any aspect of the NGI-IPS meeting their needs, including accuracy. 
Although the FBI’s action of providing working groups with a paper presenting GAO’s recommendation is a step, its actions do not fully meet the recommendation. The FBI’s paper was presented as informational and did not result in any formal responses from users. We disagree with the FBI’s conclusion that receiving no responses on the informational paper fulfills the operational review recommendation, which includes determining that NGI-IPS is meeting users’ needs. As such, we continue to recommend that the FBI conduct an operational review of NGI-IPS at least annually. In 2016, we reported that FBI officials did not assess the accuracy of face recognition systems operated by external partners. Specifically, before agreeing to conduct searches on, or receive search results from, these systems, the FBI did not ensure the accuracy of these systems was sufficient for use by FACE Services. Standards for Internal Control in the Federal Government call for agencies to design and implement components of operations to ensure they meet the agency’s mission, goals, and objectives, which, in this case, are to identify missing persons, wanted persons, suspects, or criminals for active FBI investigations. As a result, we recommended that the FBI take steps to determine whether each external face recognition system used by FACE Services is sufficiently accurate for the FBI’s use and whether results from those systems should be used to support FBI investigations. In comments on our draft report in 2016, and again during recommendation follow-up in 2017, DOJ officials did not concur with this recommendation. DOJ officials stated that the FBI has no authority to set or enforce accuracy standards for face recognition technology operated by external agencies. In addition, DOJ officials stated that the FBI has implemented multiple layers of manual review that mitigate risks associated with the use of automated face recognition technology. 
Further, DOJ officials stated there is value in searching all available external databases, regardless of their level of accuracy. We disagree with DOJ’s position. We continue to believe that the FBI should assess the quality of the data it is using from state and federal partners. We acknowledge that the FBI cannot and should not set accuracy standards for the face recognition systems used by external partners. We also do not dispute that the use of external face recognition systems by the FACE Services Unit could add value to FBI investigations. However, we disagree with the FBI’s assertion that no assessment of the quality of the data from state and federal partners is necessary. We also disagree with DOJ’s assertion that manual review of automated search results is sufficient. Even with a manual review process, the FBI could miss investigative leads if a partner does not have a sufficiently accurate system. The FBI has entered into agreements with state and federal partners to conduct face recognition searches using over 380 million photos. Without actual assessments of the results from its state and federal partners, the FBI is making decisions to enter into agreements based on assumptions that the search results may provide valuable investigative leads. For example, the FBI’s accuracy requirements for criminal investigative purposes may be different from a state’s accuracy requirements for preventing driver’s license fraud. By relying on its external partners’ face recognition systems, the FBI is using these systems as a component of its routine operations and is therefore responsible for ensuring the systems will help meet the FBI’s mission, goals, and objectives. 
Until FBI officials can assure themselves that the data they receive from external partners are reasonably accurate and reliable, it is unclear whether such agreements are beneficial to the FBI, whether the investment of public resources is justified, and whether photos of innocent people are unnecessarily included as investigative leads. Chairman Chaffetz, Ranking Member Cummings, and Members of the Committee, this concludes my prepared statement. I would be happy to respond to any questions you may have. For questions about this statement, please contact Diana Maurer at (202) 512-8777 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Dawn Locke (Assistant Director), Susanna Kuebler (Analyst-In-Charge), Jennifer Beddor, Eric Hauswirth, Richard Hung, Alexis Olson, and David Plocher. Key contributors for the previous work that this testimony is based on are listed in the previously issued product. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Technology advancements have increased the overall accuracy of automated face recognition over the past few decades. This technology has helped law enforcement agencies identify criminals in their investigations. However, privacy advocates and members of Congress remain concerned about the accuracy of the technology and the protection of privacy and individual civil liberties when technologies are used to identify people based on their biological and behavioral characteristics. 
This statement describes the extent to which the FBI ensures adherence to laws and policies related to privacy regarding its use of face recognition technology and ensures that its face recognition capabilities are sufficiently accurate. This statement is based on our May 2016 report regarding the FBI's use of face recognition technology and includes agency updates to our recommendations. To conduct that work, GAO reviewed federal privacy laws, FBI policies, operating manuals, and other documentation on its face recognition capability. GAO interviewed officials from the FBI and the Departments of Defense and State, which coordinate with the FBI on face recognition. GAO also interviewed two state agencies that partner with the FBI to use multiple face recognition capabilities. In May 2016, GAO found that the Federal Bureau of Investigation (FBI) had not fully adhered to privacy laws and policies and had not taken sufficient action to help ensure accuracy of its face recognition technology. GAO made six recommendations to address these issues. As of March 2017, the Department of Justice (DOJ) and the FBI disagreed with three recommendations and had taken some actions to address the remainder, but had not fully implemented them. Privacy notices not timely. In May 2016, GAO recommended DOJ determine why privacy impact assessments (PIA) were not published in a timely manner (as required by law) and take corrective action. GAO made this recommendation because the FBI did not update the Next Generation Identification-Interstate Photo System (NGI-IPS) PIA in a timely manner when the system underwent significant changes or publish a PIA for Facial Analysis, Comparison and Evaluation (FACE) Services before that unit began supporting FBI agents. DOJ disagreed with assessing the PIA process, stating it has established practices that protect privacy and civil liberties beyond the requirements of the law. GAO also recommended DOJ publish a system of records notice (SORN) and assess that process. 
DOJ agreed to publish a SORN, but did not agree there was a legal requirement to do so. GAO believes both recommendations are valid to keep the public informed on how personal information is being used and protected by DOJ components. GAO also recommended the FBI conduct audits to determine if users of NGI-IPS and biometric images specialists in the FBI's FACE Services unit are conducting face image searches in accordance with DOJ policy requirements. The FBI began conducting NGI-IPS user audits in 2017. Accuracy testing limited. In May 2016, GAO recommended the FBI conduct tests to verify that NGI-IPS is accurate for all allowable candidate list sizes to give more reasonable assurance that NGI-IPS provides leads that help enhance criminal investigations. GAO made this recommendation because FBI officials stated that they do not know, and have not tested, the detection rate for candidate list sizes smaller than 50, which users sometimes request from the FBI. GAO also recommended the FBI take steps to determine whether systems used by external partners are sufficiently accurate for the FBI's use. By taking such steps, the FBI could better ensure that the data from external partners do not unnecessarily include photos of innocent people as investigative leads. However, the FBI disagreed with these two recommendations, stating that the testing results satisfy requirements for providing investigative leads and that the FBI does not have authority to set accuracy requirements for external systems. GAO continues to believe these recommendations are valid because the recommended testing and determination of the accuracy of external systems would give the FBI more reasonable assurance that the systems provide investigative leads that help enhance, rather than hinder or overly burden, criminal investigation work. 
GAO also recommended the FBI conduct an annual operational review of NGI-IPS to determine if the accuracy of face recognition searches is meeting federal, state, and local law enforcement needs and take actions, as necessary. DOJ agreed, and in 2017 the FBI stated it had implemented the recommendation by submitting a paper to solicit feedback from NGI-IPS users on whether face recognition searches are meeting their needs. However, GAO believes these actions do not fully meet the recommendation because they did not result in any formal response from users and did not constitute an operational review. GAO continues to recommend the FBI conduct an operational review of NGI-IPS at least annually. In May 2016, DOJ and the FBI partially agreed with two recommendations and disagreed with another on privacy. The FBI agreed with one and disagreed with two recommendations on accuracy. GAO continues to believe that the recommendations are valid.
The National Guard Youth Challenge Program is a 17-month program that serves at-risk youth at 29 sites in 24 states and Puerto Rico. The purpose of the program is to improve the education, life skills, and employment potential of students by providing military-based training, supervised work experience, and knowledge in eight core program components. Students must be 16 to 18 years old, drug-free, unemployed, high school dropouts, and not in trouble with the law. NGB reports that more than 59,000 students have graduated from the Challenge Program since it began as a pilot in 1993. The program was authorized by 32 U.S.C. §509 on a permanent basis in fiscal year 1998, at which time states were to begin paying a share of operating costs. Each Challenge Program site operates two residential classes per year, one of which begins in January and the other around July. A typical graduation goal is 100 students per class, or 200 per year, although several programs graduate more students. In 2004, for example, Illinois graduated almost 800 students, and Louisiana’s three sites combined graduated more than 950 students. The residential phase of the program runs 22 weeks and includes a 2-week Pre-Challenge phase. During Pre-Challenge, applicants are assessed for their ability and motivation to complete the remaining 20 weeks of the residential program. Those who successfully complete Pre-Challenge are then formally enrolled in the Challenge Program in numbers that equal each program’s graduation target plus an allowance for normal attrition. In the residential phase, students receive military-based training and supervised work experience. Additionally, each state develops a curriculum that incorporates the eight core components and the tasks, conditions, and standards that students must complete to demonstrate progress in those components. Each student must receive a score of at least 80 percent on each core component to graduate from the program. 
During the 12-month post-residential phase, individuals who have successfully completed the residential phase are involved in placement activities, which include employment, education, volunteer activities, military service, or any combination of these. The graduates work with adult mentors who were matched with them during the residential phase. These mentors provide guidance and support to the graduates and are required to contact the youths at least twice each month. Program staff use the written post-residential action plan that each student prepares and updates during the residential phase to monitor placement activities. Mentors also use this plan during their interactions with graduates. The Challenge Program reports youth placement activities at the end of the 12-month follow-up period. To further assess the long-term impact of the program, NGB has contracted with AOC Solutions to conduct a retrospective longitudinal study of program graduates as well as students who did not complete the program. Prior to 1998, the federal government, through DOD, completely funded the Challenge Program. In fiscal year 1998, Congress began requiring states to provide a minimum of 25 percent of their programs’ operating costs. The state cost share increased 5 percentage points each year until fiscal year 2001, when it reached the current funding requirement of 40 percent. Although some states had provided more funds than required in the past, program funding each year is now determined by the 40 percent share, which is based on $14,000 for each youth targeted for graduation. In addition to the federal and state funds used to operate the program sites, DOD also provides funds for NGB management expenses such as program evaluations, contractor-provided training, and travel for training and workshops. These NGB program management costs are not subject to the federal/state 60/40 cost share requirement. 
Each state submits a budget to NGB that is based on that state's target for number of graduates. Since the program’s inception, the funding provided by NGB has been based on a cost per student of $14,000. For example, if a state has a target of 100 students per class (200 per year) to graduate, the estimated program costs would be $2.8 million. The federal contribution, or 60 percent of the total, would be $1.68 million, while the state contribution would be $1.12 million. To receive federal funding, a state must certify that it has sufficient funds to provide its 40 percent share. State funds can be composed of cash, noncash supplies, services, or a combination of these sources. States are allowed to provide additional funding (over and above the 40 percent share) to the program from sources such as individual and corporate donations, additional moneys from the state general fund or other state revenue sources, or other federal funding. Some Challenge Program sites, for example, operate as alternative schools and are reimbursed by their state education agencies for portions of their program costs. Reserve Affairs, under the authority of the Under Secretary of Defense for Personnel and Readiness, is responsible for preparing the annual budget and reviewing state budgets and funding certifications. Reserve Affairs is to monitor program compliance with DOD policy, issue supplemental policy guidance, and submit the Challenge Program annual report to Congress. NGB provides day-to-day administration and oversight of the Challenge Program, issuing regulations, and submitting budgets and annual report drafts to Reserve Affairs. NGB has contracted with AOC Solutions to assist with the oversight of the Challenge Program. AOC Solutions performs the annual operational evaluations and the biennial resource management reviews. 
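The funding arithmetic described above can be sketched as a short script. This is purely illustrative: the function name is invented, but the $14,000 cost basis, the 60/40 federal/state split, and the 200-graduate example are the figures given in the report.

```python
def challenge_funding(target_graduates, cost_per_student=14_000, federal_share=0.60):
    """Split estimated Challenge Program costs into federal and state shares.

    Illustrative sketch only: the $14,000 cost basis and the 60/40
    federal/state split are the figures described in this report.
    """
    total = target_graduates * cost_per_student
    federal = total * federal_share
    state = total - federal
    return total, federal, state

# The report's example: a state targeting 200 graduates per year.
total, federal, state = challenge_funding(200)
print(f"Total program cost: ${total:,.0f}")    # $2,800,000
print(f"Federal share:      ${federal:,.0f}")  # $1,680,000
print(f"State share:        ${state:,.0f}")    # $1,120,000
```

Before the federal portion is released, the state must certify that it can cover its share, which may consist of cash, noncash supplies, services, or a combination of these.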
This contractor also pulls together the program information for the annual report and maintains and oversees the Data Management and Reporting System (DMARS), which is used to collect student data and report on individual and program activities. NGB has also contracted with Dare Mighty Things to provide training and technical assistance to Challenge Program staff. Finally, NGB has United States property and fiscal officers in each state who are responsible for receiving and accounting for all federal Challenge Program funds and property under control of the National Guard in that state. The property and fiscal officers are also responsible for ensuring that federal funds are properly obligated and expended. NGB enters into cooperative agreements with governors of states approved to participate in the Challenge Program. The cooperative agreements describe the responsibilities of the states and NGB as well as the funding, costs, and regulations for operating National Guard Youth Programs. The cooperative agreements also define the eight core components and provide guidance on how to run the residential and post-residential phases and other aspects of the Challenge Program. Each Challenge Program state is also required to submit a state plan and budget estimate. These plans must include details on the state’s procedures and be consistent with overall program guidance provided by DOD. For example, state plans include information on application and selection procedures, staffing and staff training, and a detailed budget. According to NGB, Challenge Program expenditures and state participation have increased since the program began, and the program has achieved positive program performance outcomes over time. Since the program’s inception, total expenditures have increased from about $63 million to about $107 million per year. 
The number of states participating in the Challenge Program has also increased, and several states have expressed interest in adding a program or expanding existing ones. Challenge sites must account for their activities throughout the year, and NGB has reported positive performance outcomes over time. NGB reports that overall federal expenditures for the Challenge Program have increased over time, but states have also increased their expenditures since the program was permanently authorized in fiscal year 1998. Between fiscal years 1998 and 2004, total expenditures for the Challenge Program, including funds spent to cover the federal and state cost shares and federal management expenses, have increased from about $63 million to about $107 million. For fiscal year 2004, for example, NGB expenses included $61.6 million for the federal cost share and $5.8 million for NGB management costs, while states contributed approximately $40.5 million. In addition, in 2000 and 2001, the Challenge Program received $5 million and $7,483,500, respectively, from the Department of Justice. Reserve Affairs stated that the primary use of these funds has been to start new Challenge Program sites. Since 2001, these funds have been used to establish four new programs and, in 2002, to maintain the operations of three existing programs. In total, approximately $5.97 million remains unspent in a nonexpiring account. Officials at Reserve Affairs and NGB told us that these funds remain unspent because no new Challenge Programs have started. According to these officials, new programs have not been established because state governments have not committed the required 40 percent match. (See fig. 1 for total program expenditures from fiscal year 1998 to 2004, broken down by federal and state cost share and NGB management expenses.) When the pilot program began in 1993, there were 10 Challenge Program sites in 10 states. The program has now grown to 29 sites in 24 states and Puerto Rico. 
In fiscal year 2005, Wyoming received funds to start up a program site. According to NGB, Wyoming will begin its first class in January 2006. In addition to those states currently operating Challenge Program sites, there are also nine states that have expressed interest in establishing new programs. For example, according to NGB officials, representatives from Washington and Indiana National Guard units have visited some existing program sites and are in the process of developing state programs. Other states are interested in expanding their programs to serve more youth at existing sites or to open new locations. On the other hand, for various reasons including difficulty meeting the state match requirement, lack of state support, and substandard facilities, four states have discontinued their Challenge Programs. Connecticut, a pilot program state, dropped its program in 1994 after two classes. Colorado discontinued its program after 1999, Missouri after 2002, and New York after 2003. Figure 2 describes the number of Challenge Program sites for each year since the program began. Appendix II identifies the individual states with Challenge Program sites. Student participation in and graduation from the Challenge Program have also increased over time. States are required to track the number of youth who have applied to the Challenge Program, enrolled in the third week of the program (after the 2-week Pre-Challenge phase), and were graduated from the residential phase. According to NGB, the graduation target for 2004 was 6,961 students; the actual number of enrollees was 8,920; and 7,003 students, or 79 percent of those enrolled, were graduated from the Challenge Program. Figure 3 shows the target numbers, the actual number of students who were enrolled in the residential phase at week 3, and the number who graduated from the program from 2000 through 2004. Although all Challenge Programs graduate two classes per year, the number of graduates per class varies. 
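The 79 percent figure follows directly from the 2004 counts reported above; as a quick check (the counts come from the report, the script itself is purely illustrative):

```python
graduation_target_2004 = 6_961   # graduation target reported by NGB
enrolled_2004 = 8_920            # students enrolled at week 3
graduated_2004 = 7_003           # residential-phase graduates

rate = graduated_2004 / enrolled_2004
print(f"{rate:.0%} of enrollees graduated")  # 79% of enrollees graduated
```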
In addition, some states have multiple programs. For example, Louisiana has three Challenge Programs and, in 2004, graduated a total of 952 students. Figure 4 identifies those states currently participating in the Challenge Program by the number of graduates they reported for 2004. NGB has reported positive performance outcomes in academic performance, community service activities, and post-residential placements. Program performance information is tracked by each Challenge site and submitted to NGB. Each year, the Challenge Program reports on outcomes for the two classes completing the 22-week residential phase during that reporting year and for the two preceding classes as they complete their 1-year post-residential follow-up phase. The Challenge Program sites use the same automated system, DMARS, to collect information on students and report on their progress and activities. The information collected in DMARS is reviewed by the contractor through weekly and monthly reports and during random checks of source documents during operational evaluation site visits. Some residential phase outcomes of the Challenge Program, such as the number of graduates earning a general educational development (GED) credential or high school degree and changes in scores on standardized math and reading tests, are tied to the core component of Academic Excellence. For example, NGB reported that 70 percent of graduates in 2004 earned a GED. Figure 5 illustrates the outcomes of GED attainment for the past 5 reporting years. Students also take the Tests of Adult Basic Education, a series of tests that identify individual education levels in various academic subject areas. Each state program tests its students early in the residential phase and then toward the end of the 22-week period, and it reports the changes in test scores. In 2004, for example, NGB reported that graduating students improved 1.7 grade levels in reading and 1.8 grade levels in math during the residential phase. 
Another core component, Service to the Community, requires each student to perform a minimum of 40 hours of service to the community or conservation project activities. The number of community service hours performed by each student is tracked, and the total number of hours for each site is another outcome that the Challenge Program reports annually. For example, in 2004, NGB reported that Challenge Program students performed more than 590,000 hours of community service, such as maintaining historical cemeteries and parks and supporting organizations such as Special Olympics and Habitat for Humanity. Each month of the post-residential phase, each Challenge Program graduate, or that individual’s mentor, reports on the graduate’s post-residential activities. Following the 12-month post-residential phase, each Challenge Program site reports graduate placements in continuing education, the military, or the labor force. These placements are verified with schools, the military, and employers and are documented. Program representatives are not always able to contact all graduates for placement information and therefore placement data reflect only the students contacted, not all graduates. Education placements include returning to high school or going to a post-secondary or vocational-training institution, which students may be attending full- or part-time. Some Challenge Program graduates also enter the military, into either the active or the reserve forces. Post-residential employment placements can be full- or part-time, and they include those graduates who are self-employed. Graduates can have placements in more than one of these categories. For example, an individual might be attending school and working part-time. Challenge Program sites continue to update their placement records after the 12-month follow-up period when they come in contact with former students. 
During the longitudinal study, for example, the contractor has been able to update placement data based on information received from state program officials and graduates. Figure 6 shows post-residential placement trends for the past 3 reporting years. The total numbers of graduates placed in these 3 years are 2,407 in 2003; 3,698 in 2004; and 4,086 in 2005. Although Reserve Affairs and NGB have expressed concern about the current program funding level and have suggested increasing both the cost basis used to determine funding needs and the federal cost share, we found that neither Reserve Affairs nor NGB has performed analyses to support the need for such changes. Good budget practices, included in the Office of Management and Budget’s Federal Financial Accounting Standards, state that agencies should determine actual costs of their activities on a regular basis and that reliable cost information is crucial for effective management of government operations. Without better cost and financial information, DOD cannot justify future funding requests or a change in the cost-share ratio. Other than calculating how inflation has affected program costs, NGB has not analyzed data on actual program costs. Since 1993, NGB has used a cost of $14,000 per student as the basis for determining the amount of funds needed to cover program operating costs. In 2003, NGB calculated that if that amount were adjusted for inflation, it would be $18,000. The results of our survey of all Challenge Programs showed that in 2004, states actually spent between $9,300 and $31,031 per graduate with an average of $15,898 per graduate. In addition, our survey showed that, on average, states estimated that the program should be funded at approximately $16,900 per target graduate to cover all of the services in the cooperative agreements, although the estimates ranged from $14,000 to $31,800. 
Most Challenge Program officials also told us that increasing the cost per student funding level for the program without increasing the federal cost share would negatively impact their programs because their states would be unlikely to come up with the additional state match money. Because costs vary between states due to regional differences in salary levels, staff benefits, and facility costs, Reserve Affairs has asked NGB to determine a new funding formula for the program based on individual state needs. At the time of our review, NGB had not yet done this and Reserve Affairs has not given NGB a deadline for completion. In addition to expressing a desire to change the amount of funding per student, Reserve Affairs, NGB, and participating states have suggested that the cost-share ratio be changed from its current 60 percent federal share to a 75 percent federal share because they believe that the current 40 percent state share is sometimes difficult for states to meet; however, neither Reserve Affairs nor NGB has analyzed states’ financial situations or the impact of adjusting the federal and state cost share. Challenge Program officials told us that increasing the federal cost share of the program would be beneficial because it would enable states to expand their existing programs; give states more flexibility in funding their programs; and allow programs to restore to students some services that had been eliminated due to budgetary constraints. Although we did not analyze how changing the cost basis or the cost-share ratio would affect specific states, we prepared hypothetical examples for illustrative purposes. Table 1 shows how changing the cost basis and the cost-share ratio would affect federal and state required funding levels. In our survey, states reported varying views on whether they were experiencing difficulty in meeting their share of program costs. We did not verify the basis for their responses. 
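The kind of hypothetical comparison Table 1 presents can be generated from the figures discussed in the text: the current $14,000 cost basis versus the $18,000 inflation-adjusted figure, and the current 60 percent federal share versus the proposed 75 percent. The 200-graduate site below is an assumed example, not a specific state, and the table layout is illustrative rather than a reproduction of Table 1.

```python
# Hypothetical funding scenarios for a site targeting 200 graduates per year.
# Cost bases and federal shares are the values discussed in the report.
target_graduates = 200

rows = []
for cost_basis in (14_000, 18_000):
    for federal_share in (0.60, 0.75):
        total = target_graduates * cost_basis
        federal = round(total * federal_share)
        rows.append((cost_basis, federal_share, federal, total - federal))

print(f"{'Cost basis':>10} {'Fed share':>9} {'Federal':>12} {'State':>12}")
for cost_basis, share, federal, state in rows:
    print(f"{cost_basis:>10,} {share:>9.0%} {federal:>12,} {state:>12,}")
```

The comparison makes the trade-off concrete: moving to a 75 percent federal share at the current $14,000 basis lowers the state's required contribution, while raising the basis to $18,000 at the current 60/40 split raises it.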
Our survey showed that some states are able to provide funds above the required match; some states provide only the required match; and some states are unable to provide a match based on $14,000 per student and therefore fund the program at a lower level and receive less money from NGB. For example, California is able to provide additional support beyond the required match through additional money provided from the state general fund and funding from the program’s local school district. In 2004, California spent approximately $20,200 per graduate. Oregon, on the other hand, funds its program primarily through state education money. Due to recent state budget difficulties, Oregon cannot fund the program at $14,000 per student. In 2004, Oregon spent approximately $12,600 per graduate. According to the National Guard Bureau, all 29 programs are providing the services required by the cooperative agreements, and several states have added program enhancements such as field trips or vocational classes. However, some states reported that they reduced nonrequired services to stay within their budgets. For example, they implemented pay and hiring freezes; eliminated the student stipend; and eliminated program enrichment activities, such as field trips and vocational classes. Some states told us that additional funding, provided by a change in the cost-share ratio or an increase in the per student funding amount, would allow them to restore some of these services. Although NGB has several mechanisms in place for overseeing the Challenge Program, it lacks a complete oversight framework, making it difficult to measure the effectiveness of the program. 
A complete oversight framework, as suggested by the Government Performance and Results Act of 1993 (the Results Act) and Standards for Internal Control in the Federal Government, includes performance goals and measures against which to objectively measure performance as well as a mechanism for tracking findings of audits or reviews and responding to those findings. Currently, NGB does not require participating states to establish performance goals for individual programs and therefore does not have a firm basis for evaluating program outcomes and DOD’s return on investment. In accordance with the cooperative agreements, U.S. property and fiscal officers in each state are required to conduct full audits of state Challenge Programs at least every 3 years. However, these audits have not been conducted as required; and, when audits are conducted, copies of the results are not provided to NGB for review. Without regular audits and access to results, NGB cannot be assured that programs are using federal funds appropriately and that audit findings are addressed. NGB conducts several oversight activities for the Challenge Program. In accordance with the DOD Instruction and cooperative agreements governing the Challenge Program, NGB is responsible for the overall administration of the program, including program oversight. NGB uses both informal and formal mechanisms to oversee the program. On an informal basis, NGB frequently communicates with state program directors via e-mail and telephone calls. In addition, according to NGB, if a program director has a problem or an issue that he or she feels NGB needs to be involved with, he or she will initiate contact. Formal oversight of the program is conducted by NGB through AOC Solutions with yearly operational evaluations and biennial resource management reviews of all 29 Challenge Programs. 
The purpose of the operational evaluations is to assess the programs’ compliance with the cooperative agreements and the implementation of the eight core components. The resource management reviews focus on assessing programs’ financial accountability and reviewing resources including staffing levels and salaries; food service costs; and physical inventory of property. Both the operational evaluations and the resource management reviews have identified areas for improvement and, according to NGB, changes were made to the program. For example, the operational evaluation of one state program conducted in fiscal year 2004 reported that program staff was calling the students inappropriate names. According to NGB, a staff member was dismissed as a result of this finding. Another operational evaluation of a different program conducted in fiscal year 2005 found that over 90 percent of the community service hours accumulated by the students were for kitchen patrol. The cooperative agreements state that work in the dining facility may not be counted towards community service hours. According to NGB, this program has completely revamped its community service program and students now participate in such activities as visiting with residents of the local veterans’ home and caring for a historic cemetery. Until recently, NGB did not have a formal mechanism for tracking the findings of the reports conducted by AOC Solutions. During our review, we discussed with NGB the importance of keeping track of review findings in order to adequately respond to these findings, in accordance with Standards for Internal Control in the Federal Government. In response to our review, in October 2005, NGB provided new guidance to AOC Solutions regarding the operational evaluations and resource management reviews, which required, among other things, AOC Solutions to review findings from previous evaluation reports to determine whether corrective actions have been taken where warranted. 
In addition, officials at NGB told us that they currently monitor responses to audit and review findings informally with individual program directors. Although NGB requires state Challenge Programs to report on certain outcome measures, such as GED attainment and number of graduates, NGB does not require states to establish any performance goals in these areas to measure the effectiveness of the program. The establishment of performance goals is consistent with the principles of effective management as set forth in the Results Act and would allow NGB to better evaluate the overall performance of the program and to assess DOD’s return on its investment. Without clear and agreed upon performance goals, there is no objective yardstick against which to fully measure program performance and thereby assess DOD’s return on investment. Although it may not be reasonable to have the same performance goal for all state programs, it would be appropriate for each state program to negotiate a performance goal for defined performance areas such as increases in standardized test scores or physical fitness levels. Similar state programs, overseen by the Department of Labor, set individual negotiated levels of performance for specified core performance measures. These measures are used to provide information for systemwide reporting and evaluation for program improvement. For example, state Workforce Investment Act programs negotiate performance measures for youth ages 14 to 18 in three areas: attainment of basic skills; attainment of high school diplomas or their equivalents; and placement in education, the workforce, or the military. For the area of diploma attainment, the goals range from 42.8 percent of participants in Louisiana to 68 percent of participants in New Hampshire. Currently, Challenge Program states are required to submit state plans annually. These state plans are required to contain long-term and annual performance goals and are to be updated annually. 
However, NGB has not provided guidance on specific performance areas where states should focus their goals; therefore, states may not have goals in the same performance areas, making it difficult for NGB to compare performance across programs. For example, California’s state plan contains a goal to acquire additional sources of funding through grants and charitable contributions. Oregon’s state plan, on the other hand, contains a goal stating that 80 percent of graduates from the residential portion of the program will be placed in education, the military, or employment but does not contain any goal related to acquiring additional sources of funding. In addition, states are not currently held accountable to the goals that they do set since the evaluation process does not measure the states’ performance against their goals. NGB told us that beginning in January 2006, states would be held accountable to the goals outlined in their state plans. The cooperative agreements governing the Challenge Program currently require U.S. property and fiscal officers to perform full audits of state Challenge Programs at least every 3 years; prior to January 2005, the agreements required full audits every year. However, these audits have not been conducted as required. For example, according to NGB, out of a required 29 audits, only 14 were conducted in 2003, and only 7 audits were conducted in 2004. According to property and fiscal officers we spoke with, audits were not conducted because they were a lower priority than other audits that needed to be conducted within the state and because staff were not available to conduct them. Because the property and fiscal officers are responsible for ensuring that federal dollars are appropriately spent, if these audits are not conducted, it may be difficult to ensure that federal interests are adequately protected. 
When the property and fiscal officers do conduct audits of the Challenge Program, they are not currently providing copies of the audit results to NGB because, according to the Chief of Property and Fiscal Affairs at NGB, there is no specific requirement to do so. Standards for Internal Control in the Federal Government states that agencies need to ensure that the findings of audits and reviews are promptly resolved. If NGB does not review these audits, it cannot ensure that audit findings are resolved or identify trends across programs that may require action at a programwide level. According to officials at NGB, the audits remain internal to the state and the property and fiscal officer works directly with the state Challenge Program site to resolve any issues. In addition, if any audit findings require action from NGB, the state property and fiscal officer will contact NGB and ask for assistance. NGB needs to be aware of all audit findings, including those reported by the property and fiscal officers to effectively manage the Challenge Program. We reviewed the most recent property and fiscal officer audits for each Challenge Program participating state and found that they did identify areas that needed improvement. For example, an audit conducted of one program in fiscal year 2004 found unspent program dollars totaling approximately $180,000 that needed to be returned to the NGB. According to a follow-up report by the property and fiscal officer, this money was returned to NGB. Another audit conducted during fiscal year 2003 found that oversight of budget expenditures for one program was not adequate to ensure accurate, timely, and complete accounting of expenditures and recommended that key management controls over the program be identified, documented, published and tested. A follow-up audit of the program found that this recommendation was implemented. 
Reserve Affairs has not adopted a formal strategy for pursuing nondefense funding, while some participating states have obtained alternative funding support for their programs as a result of their own efforts. Because Reserve Affairs has not made a formal business case to request funds from nondefense agencies, these agencies are unable to determine whether or not they are able to fund the Challenge Program. As a result, Reserve Affairs is potentially missing out on additional sources of funding that could enhance the program. Some participating states have obtained alternative funding support to enhance or maintain services provided to Challenge Program students. Reserve Affairs has not made a formal business case to request funds from nondefense agencies and therefore these agencies are unable to determine whether or not they are able to fund the Challenge Program. Although the authorizing legislation for the Challenge Program allows the Secretary of Defense to use nondefense funding sources in support of the program, Reserve Affairs has not adopted a formal strategy for pursuing nondefense funding. Rather, Reserve Affairs has primarily adopted informal strategies to contact agencies outside DOD to inform them about the Challenge Program and seek opportunities for partnerships. For example, Reserve Affairs officials have sent information via e-mail to officials at federal agencies with programs targeting at-risk youth (i.e. Department of Justice (DOJ), Department of Labor (DOL), and Department of Education (ED)) to inform them about the Challenge Program. These e-mails did not include specific requests for program funding. In addition, in July 2003, Reserve Affairs presented senior DOL officials with a proposal for forming an interagency partnership with the Challenge Program, but did not ask for a commitment of funding from DOL in a specified amount. 
Additionally, officials from these agencies have been invited to events sponsored by the National Guard Youth Foundation in support of the Challenge Program. Lastly, Reserve Affairs officials told us that they participate or have participated on a number of interagency councils and working groups that represent at-risk youth such as the President’s Crime Prevention Council, the White House Federal Interagency Working Group on Service, the Corporation for National and Community Service, the National Civilian Community Corps, the Math Science Initiative, and the Juvenile Justice and Delinquency Prevention Council. Officials from DOJ, DOL, and ED noted that their agencies have general authority to provide funds to other programs if the transferred funds were used to support a program consistent with their agency’s interests. It is not unprecedented for an agency to transfer funds to the Challenge Program. In fiscal years 2000 and 2001, DOD and DOJ signed an interagency agreement under which DOJ agreed to provide $5 million in 2000 and $7,483,500 in 2001 to the Challenge Program. As DOJ concluded in this situation, and as officials at DOJ, DOL and ED stated, the transfer of funds to another agency must be consistent with the purpose and legal requirements of the program from which funds would be transferred. Moreover, these officials stated that sufficient funding must be available in the program from which the funds would be transferred, and making such a transfer must be in the interest of the agency. Officials at DOJ, DOL, and ED stated that Reserve Affairs needed to present more specific information to them before they could make a determination as to whether funds could be provided to the Challenge Program. These officials stated that an executive branch agency that wishes to make a request for funding from another executive branch agency could make such a request in any number of ways, either formally or informally. 
At a minimum, they noted, any such request should contain the amount of funding sought and a sufficiently detailed description of the program to allow the agency receiving the request to determine whether it would be an appropriate use of funds. At the time of our review, none of the agency officials we met with were aware of any specific request from Reserve Affairs concerning this matter. Until Reserve Affairs makes a more formal request for funding, other agencies will be unable to determine if they are able to provide funds for the Challenge Program. Under cooperative agreements with NGB, states are not required to seek funding support beyond the required federal and state contributions; however, some states have made efforts to obtain alternative funding support for their programs to enhance or maintain the services provided to Challenge Program students. Under authorizing legislation for the Challenge Program, states may accept, use, and dispose of gifts or donations of money, other property, or services for the Challenge Program. Reserve Affairs and NGB officials said that they encouraged states to seek out additional funding sources for their programs if they want to enhance the services provided to program students. To assist state programs in their efforts to obtain that additional support, NGB, through its contractor Dare Mighty Things, shares information among state programs on strategies for successfully organizing a 501(c)(3) corporation, developing a fundraising policy that focuses on the long-term vision of the Challenge Program, accessing and sharing a grant writer with other programs, applying for National School Lunch Program funds, and educating state legislators to secure funding support. By providing this information and examples of programs that are currently implementing these strategies, NGB gives other state Challenge Programs the information they need to obtain additional funding support for their programs. 
In our review of the program, we found that some states had identified strategies for soliciting additional funding support from nonprofit organizations. For example, in the annual plans submitted to NGB, some states developed specific strategies for obtaining additional funding support, such as setting a goal for the number of grants they would apply for and establishing a 501(c)(3) corporation to raise funds on behalf of the program. We saw further evidence of states’ efforts to obtain additional funding support through our site visits and in our survey of the 29 Challenge Program sites. For example, in one state we visited, the Challenge site was also a charter school, which allowed the program to receive additional funding from the local school district. Moreover, in four states we visited, the state legislatures provided additional funding beyond the required state match of 40 percent to pay for additional staffing costs or facilities. According to our survey of all Challenge Programs, 28 out of 29 programs reported receiving some type of funding support beyond the required federal and state contributions to pay for program expenditures incurred during 1999 through 2004. These programs reported receiving additional funding from the states’ general fund beyond the required state match as well as support from other state agencies such as the state Department of Education. Additionally, the programs we surveyed relied on funding assistance from other federal agencies such as the Department of Agriculture, which provides funding under the National School Lunch Program; nonprofit organization grants; the programs’ private 501(c)(3) corporations; and donations from private individuals. Results from our site visits and national survey also showed that state Challenge Programs rely on donations of goods and services to support the program. 
For example, states we visited relied on donations from private citizens, corporations, schools, and the states’ National Guard units for such items as computers and software, exercise equipment, books, uniforms, and shoes. Some programs also received support from the surrounding community in the form of donated services such as transportation assistance and medical services from local doctors and nurses. Our survey further showed that 21 out of 29 programs received some form of donated goods and services. Program officials we met stated that donations of goods and services were vital to programs because they often do not have funds available to acquire equipment or services for the program. Donations of goods and services can enhance the basic program beyond the military-based training and academics required under the cooperative agreements at little or no cost to the program. Despite the success that many states had in obtaining additional funding support and donations of goods and services for their programs, some states reported obstacles in securing supplemental funding. For example, the cooperative agreements governing the Challenge Program do not allow states to use Challenge Program funds to hire full-time grant writers. Lastly, some programs we visited expressed concern that if they received funding support outside the federal and state required contributions, then their state governments might reduce their allocations. Although Reserve Affairs, NGB, and participating states have suggested that the current cost basis of $14,000 per student is not sufficient to sustain the program and that the cost-share ratio should be changed, Reserve Affairs and NGB have done little analysis to show what the actual costs of the program are and how changing the cost-share ratio for the program would affect participating states. Without better cost and financial information, DOD cannot justify future funding requests or a change in the cost-share ratio. 
Until NGB establishes clear and agreed-upon performance goals, there is no objective yardstick against which to fully measure program performance. Without these performance goals in place, NGB does not have a firm basis for evaluating program outcomes and DOD’s return on investment. Although property and fiscal officers are required to conduct full audits of the Challenge Program at least every 3 years, these audits have not been conducted due to competing priorities at the state level and a lack of staff. Without these audits, it may be difficult to ensure that federal interests are adequately protected. Also, because NGB does not receive copies of audits conducted by property and fiscal officers, it cannot know what the findings of these audits were and if changes were made to the programs based on these findings. Until Reserve Affairs makes a more formal request for funding, other agencies will be unable to determine if they can provide funds for the Challenge Program. Without a more formal process that outlines the amount of funding needed and a detailed description of the Challenge Program and its specific funding needs, Reserve Affairs is potentially missing out on additional sources of funding that could enhance the program. To improve the management and oversight of the National Guard Youth Challenge Program, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness, in consultation with the Assistant Secretary of Defense for Reserve Affairs and the Chief of the National Guard Bureau, to take the following three actions: Determine the actual costs of the Challenge Program, including states’ ability to fund their share of the program, and use this information, as appropriate, to support funding requests or a request to change the cost-share ratio. Establish performance goals to measure the effectiveness of the Challenge Program. Direct U.S. 
property and fiscal officers to conduct audits as required and require that copies of audit results are provided to the appropriate office at the National Guard Bureau in order to ensure that the results of audits are promptly reviewed and resolved. To strengthen efforts at obtaining alternative funding in support of the National Guard Youth Challenge Program, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness, in consultation with the Assistant Secretary of Defense for Reserve Affairs, to develop more formal strategies for requesting alternative funding support for the Challenge Program. Such strategies may include submitting requests for funding that include the amount of funding requested and a sufficiently detailed description of the proposed program to allow potential providers of funds, such as nondefense agencies, to determine whether it would be an appropriate use of their funds. In written comments on a draft of this report, DOD concurred with our recommendations. In its overall comments, DOD asserts that our report reaffirms that the program met its congressional mandate to improve the life skills and employment potential of participants by providing military-based training and supervised work experience. Further, DOD’s comments state that our report validates the core program components of assisting participants to receive a high school diploma or its equivalent; developing leadership skills; promoting fellowship and community service; developing life coping and job skills; and improving physical fitness, health, and hygiene. DOD also states that it appreciates GAO’s confirmation that it is properly executing the program. We do not agree with DOD’s characterization of our report. 
While our report provides background data on program performance and mentions that DOD has reported positive outcomes from the Challenge Program, we do not assess whether DOD met its congressional mandate; we do not validate the program’s core components, nor do we confirm that DOD is properly executing the program, as DOD’s comments suggest. Rather, our objectives were to examine: (1) historical trends of the National Guard Youth Challenge Program, including program expenditures, participation, and performance; (2) the extent to which Reserve Affairs and the NGB have determined actual program costs and the need to adjust the federal and state cost-share; and (3) the extent to which the NGB has provided oversight of the program. We also determined the extent to which Reserve Affairs and participating states have made an effort to obtain alternative funding support for the Challenge Program. In response to our recommendation to determine the actual costs of the Challenge Program, including the ability of states to fund their share, DOD concurred, but claims that the matching fund requirement makes the program vulnerable to state budget cuts. DOD also contends that until recently, sluggish state revenues made the cost share requirement more burdensome and inhibited the ability of both the NGB and the states to focus on determining the actual costs of the Challenge Program. DOD claims that the budget shortfalls required the states to reduce or eliminate program services and that, with states slowly closing their budget gaps, restoring funding for needed program services is being considered. DOD also notes that it is currently working on a new funding formula for the program. 
Although DOD’s comments suggest that adequate funding was not available for needed services, through our survey and other work, we found that all 29 programs were able to provide the services required by the cooperative agreements, and several states reported they added program enhancements such as field trips or vocational classes. However, some states reported that they reduced nonrequired services to stay within their budgets. Furthermore, the basis for DOD’s comment regarding the effect of the matching fund requirement and the level of state revenues on the program is unclear. As we reported, neither Reserve Affairs nor NGB has compiled or analyzed data on actual program costs, states’ financial situations, or the impact of adjusting the federal and state cost share. We continue to believe that any future funding request, or a request to change the cost share requirement, should be based on an analysis of the actual costs of the program. DOD’s comments are printed in their entirety in appendix III. We are sending copies of this report to the Secretary of Defense and interested congressional committees. We will also make copies available to others upon request. This report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-9619 or pickups@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. To examine historical trends of the National Guard Youth Challenge Program, including program expenditures, participation, and performance, we interviewed officials from the Office of the Assistant Secretary of Defense for Reserve Affairs; the National Guard Bureau (NGB); and the contractor that monitors and evaluates state programs, AOC Solutions. 
We reviewed other documentation provided by Reserve Affairs and NGB, such as program funding summaries. We also reviewed the Challenge Program’s annual reports submitted to Congress from 1994 through the present, excluding 1997, which provided us with background information on the program. For data on participation and performance outcomes, including general education development credential attainment, community service hours, and post-residential placements, we relied on the Data Management and Reporting System (DMARS) that is used by NGB to collect participant information and track individual and program activities. DMARS was implemented for the 2003 reporting year, and the contractor has performed procedures to clean up the data as far back as 2000. We conducted data reliability tests on the program’s annual reports and data management system and concluded that the data are sufficiently reliable for our purposes. We did not compare the costs or outcomes of this program to other similar youth programs currently funded by the federal government since that was not within the scope of this engagement. To assess the extent to which Reserve Affairs and NGB determined actual program costs and analyzed states’ ability to fund their share of the program, we interviewed officials at Reserve Affairs and NGB as well as officials at state programs. We reviewed and analyzed internal budget documents prepared by state programs and submitted to NGB annually as well as documents prepared by individual program directors that discussed the funding situation in their state. We also reviewed and analyzed good budget practices as described in federal financial accounting standards. 
Using a semi-structured questionnaire, we interviewed eight state Challenge Program directors and their budget officers in seven states and discussed sources of funding for their programs, actual costs of operating their programs, and the types of services that are provided to program participants given the available funding. States selected for site visits or telephone interviews are shown in table 2. A number of factors affected the judgmental sampling of these state programs. States were chosen because of their length of time in the program, size, geographic location, experience of the program’s director, and whether there were multiple programs in the state. Additionally, in July 2005, we surveyed all 29 participating Challenge Programs and asked them about the actual costs of their programs and their assessment of their states’ ability to fund the programs. This survey had a response rate of 100 percent. To assess the extent to which NGB has provided oversight of the Challenge Program, we interviewed officials at Reserve Affairs and NGB as well as program officials at the state level. To determine the oversight responsibilities of federal and state officials involved in the program, we reviewed and analyzed pertinent laws, the Department of Defense’s (DOD) Instruction governing the program, and the NGB Master Cooperative Agreement, which defines the terms and conditions of the program in each state. We also interviewed the independent contractor hired by NGB to conduct yearly and biennial on-site program reviews, AOC Solutions. We obtained access to its data management system and analyzed the program evaluations and resource management reviews it completed on behalf of NGB since 2003. We also reviewed and analyzed individual state programs’ annual plans and NGB’s strategic plan to identify the types of goals set by the Challenge Program. 
We consulted previous GAO work regarding performance measurement and evaluations, identified best practices for establishing and measuring performance goals, and reviewed Standards for Internal Control in the Federal Government and the Government Performance and Results Act of 1993. We interviewed representatives of the U.S. Property and Fiscal Officer’s (PFO) Office in six states as well as the NGB Chief of Property and Fiscal Affairs to determine the time frame for completing audits and mechanisms for reporting, tracking, and resolving issues that arise out of the PFO audits. To determine the extent to which Reserve Affairs and participating states have obtained alternative funding support for the program, we reviewed relevant laws, policies, and reports to determine the relevant authorities for receiving and transferring funds between federal agencies. We also interviewed federal officials at Reserve Affairs and the Departments of Labor, Education, and Justice to determine the extent to which these agencies discussed the possibility of transferring funds to DOD in support of the Challenge Program. To determine states’ efforts at obtaining alternative funding support for their programs’ operations, we conducted a survey of all 29 participating programs to collect information about their sources of funding for the program, how funding is distributed across different program operational functions, the sources and types of donated goods and services, and descriptions of strategies programs use to obtain alternative funding support. Using a semi-structured questionnaire, we interviewed eight state Challenge Program directors and members of their staff in seven states and discussed sources of funding for their programs, the strategies these specific programs use for obtaining alternative funding support and donations, and the difficulties these states face in seeking out other funding. 
We provided a draft of this report to officials at DOD for their review and incorporated their comments where appropriate. We conducted our work from January 2005 to October 2005 in accordance with generally accepted government auditing standards. In addition to the individual named above, Harold Reich, Karyn Angulo, Leslie Bharadwaja, Susan Ditto, K. Nicole Harms, Wilfred Holloway, Jessica Kaczmarek, Stanley Kostyla, Julia Matta, Renee McElveen, and John Van Schaik made key contributions to this report.

The fiscal year 1993 National Defense Authorization Act established the National Guard Youth Challenge Program as a pilot program to evaluate the effectiveness of providing military-based training to improve the life skills of high school dropouts. The Assistant Secretary of Defense for Reserve Affairs, under the authority of the Under Secretary of Defense for Personnel and Readiness, is responsible for overall policy for the program. The National Guard Bureau (NGB) provides direct management and oversight. In 1998, Congress permanently authorized the program and began decreasing the federal cost share until it reached its current level of 60 percent in 2001. Conference Report 108-767 directed GAO to review the program. Specifically, GAO reviewed (1) historical trends of the program; (2) the extent of analyses performed to determine program costs and the need to adjust the federal and state cost share; and (3) NGB oversight of the program. GAO is also providing information on Reserve Affairs' and states' efforts to obtain funding from alternative sources. Between fiscal years 1998 and 2004, total expenditures for the Challenge Program, including funds spent to cover the federal and state cost shares and federal management expenses, have increased from about $63 million to $107 million. During this same period, participation in the program has grown from 10 sites in 10 states to 29 sites in 24 states and Puerto Rico. 
Since the program's inception, NGB has reported positive performance outcomes in academic performance, community service activities, and post-residential placements. For example, in 2004, NGB reported graduating 7,003 students, or 79 percent of those enrolled, with 70 percent of those graduates earning a high school equivalent diploma. While Reserve Affairs and NGB have expressed concern about the current program funding level and have suggested increasing both the cost basis used to determine funding needs and the federal cost share, neither has performed analyses to support the need for such changes. Federal financial standards state that reliable cost information is crucial for effective management of government operations. Since 1993, NGB has used $14,000 per student as the basis for determining the amount of funds needed to cover program operating costs, and applied the federal-state cost share to this amount. To keep pace with inflation, NGB has suggested increasing the per-student cost to $18,000. Reserve Affairs has reported that some states are having difficulty meeting their share and, in 2004, recommended that the federal share be increased from 60 percent to 75 percent. However, neither Reserve Affairs nor NGB has compiled or analyzed data on actual program costs, states' financial situations, or the impact of adjusting the federal and state cost share. Without better cost and financial information, the Department of Defense (DOD) cannot justify future funding requests or a change in the cost-share ratio. Although NGB uses various oversight mechanisms, it lacks a complete oversight framework, making it difficult to measure program effectiveness and to adequately address audit and review findings. Also, some audits have not been performed as required. The Government Performance and Results Act suggests a complete oversight framework including goals and measures against which to objectively evaluate performance. 
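The per-student arithmetic behind the suggested changes can be sketched with the figures cited above (a simple illustration; the function and variable names are ours, not an official DOD or NGB calculation):

```python
# Hypothetical sketch of the Challenge Program cost-share arithmetic,
# using the figures cited above; not an official DOD/NGB computation.

def split_cost(per_student_cost: float, federal_share: float) -> tuple[float, float]:
    """Return the (federal, state) dollar amounts per student."""
    federal = per_student_cost * federal_share
    return federal, per_student_cost - federal

# Current basis: $14,000 per student at a 60/40 federal-state split
cur_fed, cur_state = split_cost(14_000, 0.60)   # $8,400 federal / $5,600 state

# Suggested changes: $18,000 per student at a 75/25 split
new_fed, new_state = split_cost(18_000, 0.75)   # $13,500 federal / $4,500 state

print(f"Federal per-student outlay would rise by about ${new_fed - cur_fed:,.0f}")
print(f"State per-student outlay would fall by about ${cur_state - new_state:,.0f}")
```

Under these assumed figures, the federal per-student outlay would rise by roughly $5,100 while the state share would fall by roughly $1,100, which illustrates why GAO pressed for actual cost data before any change to the ratio.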
While NGB requires states to report certain performance outcomes, it does not require states to establish performance goals in these areas, and therefore does not have a firm basis for evaluating program outcomes and DOD's return on investment. Existing agreements require state programs to be audited at least every 3 years. However, these audits have not been conducted as required, and no provisions exist for submitting audit results to NGB. Without regular audits and access to results, NGB cannot be assured that programs are using federal funds appropriately and that audit findings are addressed.
ONDCP was established by the Anti-Drug Abuse Act of 1988 to, among other things, enhance national drug control planning and coordination and represent the drug policies of the executive branch before Congress. In this role, the office is responsible for (1) developing a national drug control policy, (2) developing and applying specific goals and performance measurements to evaluate the effectiveness of national drug control policy and National Drug Control Program agencies’ programs, (3) overseeing and coordinating the implementation of the national drug control policy, and (4) assessing and certifying the adequacy of the budget for National Drug Control Programs. The 2010 Strategy is the inaugural strategy guiding drug policy under President Obama’s administration. For the 2010 Strategy, ONDCP changed its approach from publishing a 1-year Strategy to publishing a 5-year Strategy, which ONDCP is to update annually. The annual updates are to provide an implementation progress report as well as an opportunity to make adjustments to reflect policy changes. ONDCP established two overarching policy goals in the 2010 Strategy for (1) curtailing illicit drug consumption and (2) improving public health by reducing the consequences of drug abuse, and seven subgoals under them that delineate specific quantitative outcomes to be achieved by 2015, such as reducing drug-induced deaths by 15 percent. To support the achievement of these two policy goals and seven subgoals (collectively referred to as goals), the Strategy and annual updates include seven strategic objectives and multiple action items under each objective, with lead and participating agencies designated for each action item. ONDCP reported that about $25.2 billion was provided for drug control programs in fiscal year 2012. Of this, $10.1 billion, or 40 percent, was allocated to drug abuse prevention and treatment programs. 
The 15 federal departments, agencies, and components (collectively referred to as agencies) we selected for our review of drug abuse prevention and treatment programs collectively allocated about $4.5 billion in fiscal year 2012 to such programs. These agencies included the Substance Abuse and Mental Health Services Administration, Department of Education, Department of Housing and Urban Development, National Highway Traffic Safety Administration, Office of Justice Programs, and Bureau of Prisons, among others. The HIDTA program was established in 1988 and is a federally funded program administered by ONDCP that brings together federal, state, and local law enforcement agencies into task forces that conduct investigations of drug-trafficking organizations in designated areas. The HIDTA program is focused on counternarcotics. However, HIDTA program resources may also be used for other purposes such as to assist law enforcement agencies in investigations and activities related to terrorism and the prevention of terrorism. There are 28 HIDTAs across the United States, and each has an Investigative Support Center that serves to support the HIDTA program by providing analytical case support, promoting officer safety, preparing and issuing drug threat assessments, and developing and disseminating intelligence products. The HIDTA and RISS programs operate three separate systems that have (1) event deconfliction functions to determine when multiple federal, state, or local law enforcement agencies are conducting enforcement actions—such as raids, undercover operations, or surveillances—in proximity to one another during a specified time period, or (2) target deconfliction functions, which determine if multiple law enforcement agencies are investigating, for example, the same person, vehicle, weapon, or business. 
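The event-deconfliction concept described above can be illustrated with a minimal sketch (a hypothetical data model of our own; this is not the actual logic of SAFETNet, Case Explorer, or RISSafe): two planned enforcement actions conflict when their time windows overlap and their locations fall within a chosen proximity radius.

```python
# Minimal sketch of event deconfliction, under assumed data structures;
# not the actual SAFETNet, Case Explorer, or RISSafe implementation.

from dataclasses import dataclass
from math import hypot

@dataclass
class Operation:
    agency: str
    start: float   # hours (e.g., since midnight)
    end: float
    x: float       # location on a simple planar grid, in miles
    y: float

def conflicts(a: Operation, b: Operation, radius_miles: float = 1.0) -> bool:
    """Flag two operations that overlap in time and occur in proximity."""
    times_overlap = a.start < b.end and b.start < a.end
    in_proximity = hypot(a.x - b.x, a.y - b.y) <= radius_miles
    return times_overlap and in_proximity

raid = Operation("City PD", start=22.0, end=24.0, x=0.0, y=0.0)
surveillance = Operation("DEA task force", start=23.0, end=26.0, x=0.4, y=0.3)
print(conflicts(raid, surveillance))  # True: overlapping windows, 0.5 miles apart
```

A target-deconfliction check would work analogously, matching on identifiers such as a person, vehicle, weapon, or business rather than on time and place.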
Individual HIDTAs have used the Secure Automated Fast Event Tracking Network (SAFETNet) system, which has had event deconfliction functions, among other functions, since 2001 to help ensure officer safety. In 2009, the HIDTA program introduced deconfliction features into the Case Explorer system that differed from SAFETNet by providing a free service that is tied to its performance management process. In 2009, RISS developed RISSafe to provide event deconfliction to its members and those not being served by another system. Pursuant to federal legislation enacted in 2010, we conduct routine investigations to identify programs, agencies, offices, and initiatives with duplicative goals and activities within departments and government-wide and report annually to Congress. In March 2011 and February 2012, we issued our first two annual reports to Congress in response to this requirement. On the basis of the framework established in these reports, we used the following definitions for assessing drug abuse prevention and treatment programs and field-based information sharing entities: Fragmentation occurs when more than one federal agency (or more than one organization within an agency) is involved in the same broad area of national interest. Overlap occurs when fragmented agencies or programs have similar goals, engage in similar activities or strategies to achieve them, or target similar beneficiaries. Duplication occurs when two or more agencies or programs are engaged in the same activities or provide the same services to the same beneficiaries. In our March 2013 report, we found that ONDCP and other federal agencies had not made progress toward achieving most of the goals articulated in the 2010 National Drug Control Strategy. In the Strategy, ONDCP established seven goals related to reducing illicit drug use and its consequences by 2015. 
As we reported in March 2013, our analysis showed that of the five goals for which primary data on results were available, one showed progress and four showed either no change or movement away from the 2015 goals. For example, no progress had been made on the goal to reduce drug use among 12- to 17-year-olds by 15 percent. According to the data source for this measure—the National Survey on Drug Use and Health—this was primarily due to an increase in the rate of reported marijuana use, offset by decreases in the rates of reported use of other drugs. Table 1 shows 2010 Strategy goals and progress toward meeting them, as of March 2013. We reported in March 2013 that, according to ONDCP officials, a variety of factors could affect achievement of these goals, such as worsening economic conditions, changing demographics, or changing social or political environments; the passage of state laws that decriminalize marijuana use or allow its use for medical purposes; failure to obtain sufficient resources to address drug control problems; insufficient commitment from agency partners; and the need for new action items that include initiatives or activities beyond those that are under way or planned. We reported that ONDCP officials stated that the office’s new Performance Reporting System (PRS) is to provide more specific information about where the Strategy is on or off track and prompt diagnostic reviews to identify causal factors contributing to any problems identified, as discussed below. On November 17, 2015, ONDCP released the 2015 Strategy, an annual update to the 2010 Strategy. Since our March 2013 report, ONDCP has begun reporting progress toward two goals where data were not initially available. According to data available to date, the Strategy shows progress toward achieving one goal, no progress on three goals, and mixed progress on the remaining three goals. Overall, none of the goals in the 2010 Strategy have been fully achieved. 
Table 2 shows the 2010 Strategy goals and ONDCP’s reported progress toward meeting them. In March 2013, we reported that ONDCP established the PRS to monitor and assess progress toward meeting Strategy goals and objectives and issued a report (the PRS report) describing the system with the 2012 Strategy update. The PRS includes interagency performance measures and targets under each Strategy objective. For example, 1 of the 6 performance measures under the objective to strengthen efforts to prevent drug use in our communities is the average age of initiation for all illicit drug use, which has a 2009 baseline of 17.6 years of age and a 2015 target of 19.5 years of age. According to the PRS report, system information is to be used to inform budget formulation and resource allocation, Strategy implementation, and policy making, among other things. As part of our review, we assessed PRS measures and found them to be generally consistent with attributes of effective performance management identified in our prior work as important for ensuring performance measures demonstrate results and are useful for decision making. For example, we found that the PRS measures are clearly stated, with descriptions included in the 2012 PRS report, and all 26 of them have or are to have measurable numerical targets. In addition, the measures were developed with input from stakeholders through an interagency working group process, which included participation by the Departments of Education, Justice, and Health and Human Services, among others. The groups assessed the validity of the measures and evaluated data sources, among other things. We reported in March 2013 that, according to ONDCP officials, information collected through the PRS is to provide valuable insights to help identify where the Strategy is on track and when further problem solving and evaluation are needed. 
At that time, the system was still in its early stages and ONDCP had not issued its first report on the results of the system’s performance measures. Accordingly, operational information was not available to evaluate the system’s results. ONDCP officials stated that when results are determined not to be on track to meet 2015 targets, the PRS is to serve as a trigger for an interagency review of potential causes of performance gaps and options for improvement. We reported that, according to these officials, ONDCP plans to assess the effectiveness of the PRS more comprehensively to determine how well it is working and whether any adjustments need to be made after the system has been operational for a longer period of time. We also reported that these plans should help increase accountability for improving results and enhance the system’s effectiveness as a mechanism to monitor progress toward Strategy goals and objectives and assess where further action is needed to improve progress. ONDCP released its annual PRS report on November 17, 2015. The 2015 report assesses progress on the Strategy’s goals, as well as performance measures related to each of the Strategy’s objectives, and discusses future actions required to achieve these goals and measures. ONDCP has assessed the extent of overlap and potential for duplication across federal drug abuse prevention and treatment programs and identified opportunities for increased coordination, as we recommended in March 2013. Specifically, we reported that drug abuse prevention and treatment programs were fragmented across 15 federal agencies that funded or administered 76 programs in fiscal year 2011, and identified overlap in 59 of these programs because they can provide or fund at least one drug abuse prevention or treatment service that at least one other program can provide or fund, either to similar population groups or to reach similar program goals. 
For example, 6 programs reported that they can provide or fund drug abuse prevention services for students and youth in order to support program goals of preventing drug use and abuse among young people. All 6 of these programs also reported that they can provide or fund services to conduct outreach and educate youth on drug use. As part of our review, we also conducted a more in-depth analysis in two selected areas where we identified overlap—programs for youth and programs for offenders. We reported that agency officials who administer programs in these two areas took various efforts to coordinate overlapping programs or services, which can serve to minimize the risk of duplication. For example, using an interagency agreement, the Department of Education jointly administers the Safe Schools/Healthy Students program with the Departments of Justice and Health and Human Services to provide complementary educational, mental health, and law enforcement services to prevent youth violence and drug use. We found in March 2013 that although the agencies’ coordination efforts in these two areas were consistent with practices that we had previously reported federal agencies use to implement collaborative efforts, not all of the programs surveyed were involved in coordination efforts with other federal agencies. Specifically, officials from 29 of the 76 (about 40 percent) programs surveyed reported no coordination with other federal agencies on drug abuse prevention or treatment activities in the year prior to our survey. Furthermore, we reported that although ONDCP coordinates efforts to develop and implement the Strategy and National Drug Control Program Budget, it had not systematically assessed drug abuse prevention and treatment programs to examine the extent of overlap and potential for duplication and identify opportunities for greater coordination. As a result, we recommended that ONDCP conduct such an assessment. 
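The overlap test described above, under which two programs overlap when each can provide or fund at least one service that the other can also provide to a similar population, is essentially a pairwise set intersection. The sketch below uses hypothetical program names and (service, population) pairs; GAO's actual survey and analysis were far more detailed.

```python
# Illustrative sketch of the overlap analysis described in the testimony:
# two programs "overlap" if they share at least one (service, population)
# offering. All program names and offerings here are hypothetical.
from itertools import combinations

programs = {
    "Program A": {("prevention outreach", "youth"), ("education", "youth")},
    "Program B": {("prevention outreach", "youth")},
    "Program C": {("treatment referral", "offenders")},
}

def overlapping_pairs(progs):
    """Return pairs of programs that share a (service, population) offering."""
    return [
        (p1, p2)
        for p1, p2 in combinations(sorted(progs), 2)
        if progs[p1] & progs[p2]   # non-empty intersection means overlap
    ]

print(overlapping_pairs(programs))  # → [('Program A', 'Program B')]
```

Counting the programs that appear in at least one such pair yields the kind of figure GAO reported (59 of 76 programs overlapping with at least one other).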
ONDCP concurred with our recommendation and has implemented it. In July 2014, ONDCP published an assessment of drug abuse prevention and treatment programs in its fiscal year 2015 Budget and Performance Summary, which was released with the annual Strategy. ONDCP reported that it conducted this assessment by (1) preparing an inventory of federal agency drug abuse prevention and treatment program activities, starting with those in our report; (2) mapping the beneficiaries and services provided by each program activity to determine the extent of overlap; and (3) reviewing overlapping programs to assess the level of coordination activities, among other steps. The assessment found that these programs generally serve distinct beneficiaries in distinct settings, which helps prevent overlap and duplication. In the cases where overlap could occur, ONDCP’s review of grant awards made under the programs determined that duplication did not occur over a 3-year period ending in 2013. Further, according to the assessment, the agencies managing overlapping programs have coordinated through interagency collaboration, coordinated grant applications, and other activities. However, ONDCP found that programs that provide drug abuse prevention and treatment services to support efforts to address homelessness would benefit from greater coordination. In August 2014, ONDCP stated that it is working to ensure additional coordination in this area by, for example, providing guidance to relevant agencies during the office’s budget and oversight review process on improving coordination of grant programs that offer similar treatment and recovery support services to homeless clients. ONDCP’s assessment states that the office will continue to monitor the programs that overlap, as well as any new federal programs that are added to prevent and treat substance use disorders. 
According to the assessment, this monitoring is to include requiring regular reporting from the agencies as a part of interagency drug abuse prevention and treatment working group meetings and working with the agencies to ensure greater coordination and opportunities to consolidate programs as a part of the annual budget process. As a result of ONDCP’s actions in response to our recommendation, the office will be better positioned to help ensure that federal agencies undertaking similar drug abuse prevention and treatment efforts better leverage and more efficiently use limited resources. Our April 2013 report found that ONDCP, DHS, and DOJ did not hold HIDTAs or the four other types of field-based information sharing entities we reviewed—Joint Terrorism Task Forces, Federal Bureau of Investigation Field Intelligence Groups, RISS centers, and state and major urban area fusion centers—accountable for coordinating with one another or assessing opportunities for further enhancing coordination to help reduce the potential for overlap and achieve efficiencies. Specifically, we found that while the five types of field-based entities have distinct missions, roles, and responsibilities, their activities can overlap. For example, across the eight urban areas that we reviewed, we identified 91 instances of overlap in some analytical activities—such as producing intelligence reports—and 32 instances of overlap in investigative support activities, such as identifying links between criminal organizations. These entities conducted similar activities within the same mission area, such as counterterrorism, and for similar customers, such as federal or state agencies. Across the eight urban areas, 34 of the 37 field-based entities we reviewed conducted an analytical or investigative support activity that overlapped with that of another entity. 
We reported that this can lead to benefits, such as the corroboration of information, but may also burden customers with redundant information. In our April 2013 report, ONDCP, DHS, and DOJ officials acknowledged that field-based entities working together and sharing information are important, but they do not hold their entities accountable for such coordination. For example, HIDTA Investigative Support Centers have a performance measurement program that holds the centers accountable for referring leads to other HIDTAs and other agencies, but the program does not include measures about the HIDTA’s ability to coordinate with other field-based entities. Further, ONDCP, DHS, and DOJ officials stated that they ultimately rely on the leadership of their respective field-based entities to ensure that successful coordination is occurring because the leaders in these entities are most familiar with the other stakeholders and issues in their areas, and are best suited to develop working relationships with one another. Officials at 22 of the 37 entities we reviewed agreed that successful coordination depends most on personal relationships, but they noted that coordination can be disrupted when new leadership takes over at an entity. Officials at 20 of the 37 entities also stated that measuring and monitoring coordination could alleviate the process of starting over when new personnel take over at a partner entity and ensure that maintaining coordinated efforts is a priority. We concluded that a mechanism—such as performance metrics—that holds entities accountable for coordination and enables agencies to monitor and evaluate the results of their efforts could help provide the agencies with information on the effectiveness of coordination among field-based entities and help reduce any unnecessary overlap in entities’ efforts. We recommended that the agencies collaborate to develop such a mechanism. 
Similarly, our April 2013 report found that ONDCP, DHS, and DOJ had not assessed opportunities to implement practices that were identified as enhancing coordination. Officials at each of the 37 entities in the eight urban areas we reviewed described how practices such as serving on one another’s governance boards or, in some cases, colocating with other entities allowed or could allow them to achieve certain benefits. These include better understanding the missions and activities of the other entities, coordinating the production of analytical products, and sharing resources such as subject matter experts. In their view, this helped to increase coordination, leverage resources, and avoid or reduce the negative effects of unnecessary overlap and duplication in their analytical, tactical, and dissemination activities. We recommended that the agencies collaborate to perform a collective assessment of where these and other practices that can enhance coordination could be implemented. ONDCP and DHS concurred with both of our recommendations and DOJ generally agreed with the intent of the recommendations. Since our April 2013 report, the agencies have taken steps to address them. Specifically, ONDCP, DHS, and DOJ have existing forums they can use to work together in developing metrics and conducting assessments to better ensure coordination, and collectively monitor and evaluate results achieved. These forums include, for example, the Fusion Center Subcommittee of the Information Sharing and Access Interagency Policy Committee. In July 2015, the subcommittee met and agreed to modify its 2015 work plan to address the collection, analysis, and reporting of data pertaining to field-based information sharing entities. According to DHS officials, these data are to focus on field-based collaboration, including governance, colocation, and other information sharing, analytic, and conflict-avoidance topics. 
Since the July 2015 meeting, DHS has assisted ONDCP and DOJ in developing an assessment template, based on common data elements it collects in its annual assessment of state and major urban area fusion centers. Although ONDCP, DHS, and DOJ have taken actions to address our recommendations, the agencies do not yet have a collective mechanism that will hold field-based entities accountable for coordinating with one another and allow the agencies to monitor progress and evaluate results across entities. Such a mechanism could help entities maintain effective relationships when new leadership is assigned and avoid unnecessary overlap in activities, which can also help entities to leverage scarce resources. Further, the agencies have not conducted a collaborative assessment of where practices that enhance coordination can be applied to reduce overlap, collaborate, and leverage resources for their respective field-based information sharing entities. Such an assessment would allow the agencies to provide recommendations or guidance to the entities on implementing these practices. ONDCP has connected each of the systems that HIDTAs use to deconflict operations, an action that can reduce risks to officer safety and inefficiencies. Our April 2013 report found that the HIDTA and RISS programs operate three separate systems that have event or target deconfliction functions to determine when multiple federal, state, or local law enforcement agencies are conducting enforcement actions—such as raids, undercover operations, or surveillances—in proximity to one another during a specified time period. As we reported in 2013, HIDTAs have used the SAFETNet system, which has had event deconfliction functions, among other functions, since 2001 to help ensure officer safety. In 2009, the HIDTA program introduced deconfliction features into the Case Explorer system that differed from SAFETNet by providing a free service that is tied to its performance management process. 
In 2009, the RISS program developed RISSafe to provide event deconfliction to its members and those not being served by another system. Accordingly, HIDTAs and RISS centers were operating duplicative deconfliction systems—that is, systems that aim to ensure that law enforcement officers are not conducting enforcement actions at the same time in the same place or investigating the same target—which could pose risks to officer safety and lead to inefficiencies. Table 3 provides details about the features of these three systems. Law enforcement officers generally enter events into a deconfliction system electronically or by calling a watch center. Individuals operating a watch center plot the location of the event on a map and, using the contact information available in the system, notify the officer of any other officers who have entered conflicting events into the same system. When events are not deconflicted, officer safety can be at risk. For example, HIDTA and RISS officials described instances when officers did not deconflict drug busts, which led to undercover officers from different agencies drawing guns on one another thinking the other officers were drug dealers. The officials added that, had the events been deconflicted, the officers would have been aware of one another’s presence. As shown in figure 1, entities within a state can use one or more of the systems. In our April 2013 report, we found that HIDTA and RISS officials had taken steps to connect target deconfliction systems—those that inform agencies when they are investigating the same individuals, weapons, vehicles, or businesses—and two of three event deconfliction systems. However, HIDTA officials had not finalized plans to make the remaining event deconfliction system, SAFETNet, interoperable with the other two systems.
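At its core, the event deconfliction these systems perform is a proximity check in space and time: flag any two planned enforcement actions that are close together and whose time windows overlap. The sketch below is a minimal illustration under assumed distance and time thresholds; the agencies, coordinates, and events are invented, and real systems such as SAFETNet, Case Explorer, and RISSafe are far richer.

```python
# Minimal sketch of event deconfliction: flag pairs of planned enforcement
# actions that fall close together in both space and time. The 1-km radius,
# planar coordinates, agencies, and events are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime
from itertools import combinations
from math import hypot

@dataclass
class Event:
    agency: str
    x_km: float          # simplified planar coordinates
    y_km: float
    start: datetime
    end: datetime

def conflicts(events, radius_km=1.0):
    """Return event pairs within `radius_km` whose time windows overlap."""
    out = []
    for a, b in combinations(events, 2):
        close = hypot(a.x_km - b.x_km, a.y_km - b.y_km) <= radius_km
        concurrent = a.start <= b.end and b.start <= a.end
        if close and concurrent:
            out.append((a.agency, b.agency))
    return out

raid = Event("County PD", 0.0, 0.0,
             datetime(2013, 4, 1, 22, 0), datetime(2013, 4, 2, 2, 0))
buy = Event("State Task Force", 0.4, 0.3,
            datetime(2013, 4, 1, 23, 0), datetime(2013, 4, 2, 1, 0))
print(conflicts([raid, buy]))  # → [('County PD', 'State Task Force')]
```

In an operational system, a match like this would prompt the watch center to notify both agencies before the operations proceed.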
Accordingly, we recommended that the Director of ONDCP work with the appropriate HIDTA officials to develop milestones and time frames for actions needed to make SAFETNet interoperable in order to prevent unnecessary delays in reducing risks to officer safety and lessening the burden on law enforcement agencies that are currently using multiple systems to notify agencies when they are conducting conflicting enforcement actions. ONDCP concurred with the recommendation and, in May 2015, completed the steps to achieve interoperability among the three event deconfliction systems. According to an official at the HIDTA that operates the Case Explorer deconfliction system, as of October 2015, more than 1,500 agencies are participating in the three systems. The official added that more than 159,000 events have been entered, and more than 800 events have been matched among the three systems. Chairman Meadows, Ranking Member Connolly, and members of the subcommittee, this concludes my prepared statement. I would be happy to respond to any questions you may have. If you or your staff members have any questions about this testimony, please contact David Maurer at (202) 512-8777 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other contributors included Eric Erdman, Assistant Director; Kevin Heinz; and Johanna Wong. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. 
ONDCP is responsible for coordinating the implementation of drug control policy across the federal government and funds HIDTAs that aim to support the disruption and dismantlement of drug-trafficking and money-laundering organizations. This statement addresses the extent to which ONDCP (1) has achieved Strategy goals and has mechanisms to monitor progress, (2) has assessed overlap and potential duplication across federal drug abuse prevention and treatment programs and identified coordination opportunities, (3) holds HIDTAs accountable for coordination with other field-based information sharing entities and has assessed opportunities for coordination, and (4) has connected existing systems to coordinate law enforcement activities. This statement is based on a March 2013 report (GAO-13-333), an April 2013 report (GAO-13-471), and selected updates as of November 2015. For the updates, GAO analyzed ONDCP documents on progress toward Strategy goals and drug abuse prevention and treatment programs and contacted ONDCP and HIDTA officials. GAO reported in March 2013 that the Office of National Drug Control Policy (ONDCP) and other agencies had not made progress toward achieving most of the goals in the 2010 National Drug Control Strategy (the Strategy) and ONDCP had established a new mechanism to monitor and assess progress. In the Strategy, ONDCP established seven goals related to reducing illicit drug use and its consequences to be achieved by 2015. As of March 2013, GAO's analysis showed that of the five goals for which primary data on results were available, one showed progress and four showed no progress. GAO also reported that ONDCP established a new monitoring system intended to provide information on progress toward Strategy goals and help identify performance gaps and options for improvement. At that time, the system was still in its early stages, and GAO reported that it could help increase accountability for improving progress.
In November 2015, ONDCP issued its annual Strategy and performance report, which assess progress toward all seven goals. The Strategy shows progress in achieving one goal, no progress on three goals, and mixed progress on the other three goals. Overall, none of the goals in the Strategy have been fully achieved. ONDCP has assessed the extent of overlap and potential for duplication across federal drug abuse prevention and treatment programs and identified opportunities for increased coordination, as GAO recommended in March 2013. According to ONDCP's July 2014 assessment, these programs generally serve distinct beneficiaries in distinct settings, which helps prevent overlap and duplication. However, ONDCP found that programs that provide drug abuse prevention and treatment services to address homelessness would benefit from greater coordination. ONDCP noted that it was taking steps to address this issue. GAO reported in April 2013 that ONDCP-funded High Intensity Drug Trafficking Area (HIDTA) Investigative Support Centers and four other types of field-based information sharing entities had overlapping analytical and investigative support activities. However, ONDCP and the Departments of Homeland Security (DHS) and Justice (DOJ)—the federal agencies that oversee or provide support to the five types of field-based entities—were not holding entities accountable for coordination or assessing opportunities to implement practices that could enhance coordination, reduce unnecessary overlap, and leverage resources. ONDCP agreed with GAO's recommendations to work with DHS and DOJ to develop measures and assess opportunities to enhance coordination of field-based entities. Since July 2015, the agencies have worked through an interagency committee to make plans for collecting data on field-based collaboration, but have not yet fully addressed GAO's recommendations. 
ONDCP has connected each of the systems that HIDTAs use to coordinate law enforcement activities, as GAO recommended in April 2013. Specifically, GAO reported in 2013 that HIDTAs and Regional Information Sharing System centers operated three systems that duplicate the same function—identifying when different law enforcement entities may be conducting a similar enforcement action, such as a raid at the same location—resulting in some inefficiencies. In May 2015, ONDCP completed connecting all three systems, which helps reduce risks to officer safety and potentially lessens the burden on law enforcement agencies that were using multiple systems. GAO has made prior recommendations to ONDCP to assess overlap in drug prevention and treatment programs; develop measures and assess opportunities to enhance coordination of field-based entities; and connect existing coordination systems. ONDCP concurred and reported actions taken or underway to address them. GAO is not making new recommendations in this testimony.
In March 2008, we reported that the IRIS program is at serious risk of becoming obsolete because the agency has not been able to complete timely, credible chemical assessments or decrease its backlog of 70 ongoing assessments. In addition, assessment process changes EPA had recently made, as well as other changes EPA was considering at the time of our review, would have further reduced the timeliness, credibility, and transparency of IRIS assessments. Among other things, we concluded the following: EPA was unable to routinely complete IRIS assessments in a timely manner. From 2000 to 2007, EPA completed on average about five IRIS assessments a year. The more recent trend has been a decline in productivity: In fiscal years 2006 and 2007, EPA completed two assessments each year; in 2008, EPA completed five assessments—four of which were related chemicals assessed and peer reviewed together but finalized individually; and thus far in fiscal year 2009, EPA has finalized one assessment. Further, as we reported in 2008, because EPA staff time was dedicated to completing assessments in the backlog, EPA’s ability to both keep the more than 540 existing assessments up to date and initiate new assessments was limited. We found that 48 of the 70 assessments being conducted as of December 2007 had been in process for more than 5 years—and 12 of those, for more than 9 years. These time frames have lengthened. Currently, of those 70 assessments, 58 have now been ongoing for more than 5 years—and 31 of those for more than 9 years. We also found that EPA’s efforts to finalize IRIS assessments have been thwarted by a combination of factors. These factors include (1) the Office of Management and Budget’s (OMB) requiring two additional reviews of IRIS assessments by OMB and other federal agencies with an interest in the assessments, such as the Department of Defense, and (2) EPA management decisions, such as delaying some assessments to await the results of new research. 
The two new OMB/interagency reviews of draft assessments involve other federal agencies in EPA’s IRIS assessment process in a manner that limits the credibility and transparency of, and hinders EPA’s ability to manage, IRIS assessments. For example, some of these agencies’ review comments could be influenced by the potential for increased environmental cleanup costs and other legal liabilities if EPA issued an IRIS assessment for a chemical that resulted in a decision to regulate the chemical to protect the public. Moreover, the input these agencies provide to EPA is treated as “deliberative” and is not released to the public. Regarding EPA’s ability to manage its IRIS assessments, in 2007 OMB required EPA to terminate five assessments that for the first time addressed acute, rather than chronic, exposure—even though EPA had initiated this type of assessment to help it implement the Clean Air Act. The changes to the IRIS assessment process that EPA was considering but had not yet issued at the time of our 2008 review would have added to the already unacceptable level of delays in completing IRIS assessments and further limited the credibility of the assessments. For example, the changes would have allowed potentially affected federal agencies to have assessments suspended for up to 18 months to conduct additional research. As we reported in 2008, even one delay can have a domino effect, requiring the assessment process to essentially be repeated to incorporate changing science. In April 2008, EPA issued a revised IRIS assessment process. As we testified before this subcommittee in May 2008, the new process was largely the same as the draft we had evaluated during our review and did not respond to the recommendations in our March 2008 report. Moreover, some key changes were likely to further exacerbate the credibility and productivity concerns we had identified. 
For example, EPA’s revised process formally defined comments on IRIS assessments from OMB and other federal agencies as “deliberative” and excluded them from the public record. As we have stated, it is critical that input from all parties— particularly agencies that may be directly affected by the outcome of IRIS assessments—be publicly available. In addition, the estimated time frames under the revised process, especially for chemicals of key concern, would have likely perpetuated the cycle of delays to which the majority of ongoing assessments have been subject. Instead of streamlining the process, as we had recommended, EPA institutionalized a process that from the outset was estimated to take 6 to 8 years for some chemicals of key concern that are both widespread and likely to cause cancer or other serious health effects. This was particularly problematic because of the substantial rework often required to take into account changing science and methodologies. Overall, EPA’s May 2009 IRIS assessment process reforms represent significant improvements and, if implemented effectively, would be largely responsive to the recommendations made in our March 2008 report. First, the new process and the memorandum announcing it indicate that the IRIS assessment process will be entirely managed by EPA, including the interagency consultations (formerly called OMB/interagency reviews). Under EPA’s prior process, these two interagency reviews were required and managed by OMB—and EPA was not allowed to proceed with assessments at various stages until OMB notified EPA that it had sufficiently responded to comments from OMB and other agencies. The independence restored to EPA under the new process is critical in ensuring that EPA has the ability to develop transparent, credible IRIS chemical assessments that the agency and other IRIS users, such as state and local environmental agencies, need to develop adequate protections for human health and the environment. 
Second, the new process addresses a key transparency concern highlighted in our 2008 report and testimonies. As we recommended, it expressly requires that all written comments on draft IRIS assessments provided during the interagency consultation process by other federal agencies and White House offices be part of the public record. Third, the new process streamlines the previous one by consolidating and eliminating some steps. Importantly, EPA eliminated the step under which other federal agencies could have IRIS assessments suspended in order to conduct additional research, thus returning to EPA’s practice in the 1990s of developing assessments on the basis of the best available science. As we highlighted in our report, as a general rule, requiring that IRIS assessments be based on the best science available at the time of the assessment is a standard that best supports the goal of completing assessments within reasonable time periods and minimizing the need to conduct significant levels of rework. Fourth, as outlined in the EPA Administrator’s memorandum announcing the new IRIS process, the President’s budget request for fiscal year 2010 includes an additional $5 million and 10 full-time-equivalent staff positions for the IRIS program, which is responsive to our recommendation to assess the level of resources that should be dedicated to the IRIS program in order to meet user needs and maintain a viable IRIS database. We are encouraged by the efforts EPA has made to adopt most of our recommendations, including those addressing EPA’s ability to manage its IRIS assessment process, transparency practices, and streamlining the lengthy IRIS assessment process. The changes outlined above reflect a significant redirection of the IRIS process that, if implemented effectively, can help EPA restore the credibility and increase the productivity of this important program. 
While these broad reforms provide a sound general framework for conducting IRIS assessments, the manner in which EPA implements the new process will determine whether the agency will be able to overcome its long-standing productivity problems and complete credible and transparent assessments. Specifically, management attention is warranted on certain aspects of the new process that are incomplete or lack clarity. EPA’s estimated time frames of about 2 years for standard IRIS assessments—those that are not particularly complex or controversial—do not include the time required to complete two steps that are nonetheless included in the assessment process. As a result, EPA has likely understated the time required to complete an assessment. The steps lacking time frames—the scientific literature review and the request to the public and other agencies to submit relevant research (the data call-in)—are integral to developing an assessment. In prior IRIS assessment processes, EPA provided time frames for these steps. Importantly, including the time frames for these steps would likely bring the estimated overall time for completing standard assessments closer to 3 years. We note that this more realistic time frame may be problematic because when assessments take longer than 2 years, they can become subject to substantial delays stemming from the need to redo key analyses to take into account changing science and assessment methodologies. While EPA states that some IRIS assessments may take longer because of their complexity, large scientific literature base, or high profile, the agency does not provide any guidance on likely or expected time frames for assessments of these chemicals. This is noteworthy because we found that EPA has not been able to complete assessments of the most important chemicals of concern, such as those likely to cause cancer or other significant health effects. For example, EPA’s assessment of dioxin has been ongoing for 18 years.
It is critical that EPA establish time frames to enable the agency to manage complex assessments. EPA’s new process does not include a discussion of key planning steps. Specifically, it omits important preassessment steps included in prior processes—such as a call for nominations of chemicals to be assessed and the establishment of the IRIS agenda, which is a list of chemicals that EPA plans to assess. Accordingly, it is not clear whether or when EPA will implement our recommendation that it provide at least 2 years’ notice of planned assessments. Among other things, doing so would give agencies and the public more advance notice of planned assessments and enable external parties with an interest in a given chemical to, for example, complete relevant research before the start of an IRIS assessment. Particularly in light of the fact that EPA’s estimates for completing assessments are likely understated, we believe that the agency should continue to look for additional opportunities to streamline its process. For example, it is not clear why EPA could not solicit comments from other federal agencies at the same time it sends the initial draft assessment to independent peer reviewers and publishes it in the Federal Register for public comment. In addition to reducing overall assessment time frames, this change could enhance transparency. Specifically, by obtaining the first draft of the assessment at the same time as the other federal agencies, the public and peer reviewers could have greater assurance that the draft had not been inappropriately biased by policy considerations of these agencies, including ones that may be affected by the assessment’s outcome, such as the Departments of Defense and Energy. Some of these agencies and their contractors could, for example, face increased cleanup costs and other legal liabilities if EPA issued an IRIS assessment for a chemical that resulted in a decision to regulate the chemical to protect the public.
The new assessment process states that “White House offices” will be involved in the interagency consultation process but does not indicate which offices. Given that (1) EPA will be performing the coordinating role that OMB exercised under the prior process and (2) the purpose of these consultations is to obtain scientific feedback, it is unclear whether OMB will continue to be involved in the interagency consultation process. EPA has specified in its new assessment process that written comments provided by other federal agencies will become part of the public record. However, it is silent as to the purpose of the consultation meetings and, if applicable, whether EPA plans to document for the public record any significant oral agreements or decisions made at the consultation meetings. In order to ensure transparency and alleviate any concerns of potential bias in the assessments, it will be important for EPA to be clear on these matters. In addition to addressing these issues, the viability of the IRIS program will depend on effective and sustained management and oversight. Collectively, a number of factors that can impede the progress of IRIS assessments present significant management challenges. These include the following: Unlike a number of other EPA programs with statutory deadlines for completing various activities, no enforceable deadlines apply to the IRIS program. We have stated in previous testimonies on the IRIS program that if EPA is not able to effectively maintain this critical program, other approaches, including statutory requirements, may need to be explored. We believe the absence of statutory deadlines may contribute to EPA’s failure to complete timely IRIS assessments. For example, assessment schedules can easily be extended—and consistently are. These chronic delays in completing IRIS assessments have detrimental consequences for EPA’s ability to develop timely and scientifically sound decisions, policies, and regulations. 
Science and methodologies are constantly changing. Thus, there will always be a tension between assessing the best available science and waiting for more information. IRIS will remain viable only if it returns to its model of using the best science available at the time of its assessments and plans for periodic updates of assessments to identify the need for revisions. An overarching factor that affects EPA’s ability to complete IRIS assessments in a timely manner is the compounding effect of delays—even one delay can have a domino effect, requiring the process to essentially be repeated to incorporate changing science. For example, delays often require repeating reviews of the scientific literature on a chemical to take into account the time that has passed since the literature review was completed; this, in turn, may require detailed analyses of any new studies found to be relevant. Long-standing difficulties in completing assessments of chemicals of key concern—those that are both widespread and likely to cause significant health issues—stem in part from challenges by external parties, including those that may be affected by EPA regulation of chemicals should an assessment lead to such action. Such challenges are to be expected and can best be addressed by EPA’s focusing on the best available science and credible expert review and by completing the assessments. The IRIS assessment process has been frequently changed in recent years; IRIS process reforms, such as those recently issued, are not established in a regulation or statute and thus can easily be altered. As we have reported, EPA’s continual changes present a challenge to the chemical managers who are undertaking the assessments, particularly in the absence of current operating procedures to guide chemical managers on basic procedures and program management responsibilities for the development, review, and finalization of IRIS assessments.
In conclusion, EPA’s most recent changes to the IRIS assessment process appear to represent a significant improvement over the process put in place in 2008. That is, if implemented effectively, the changes may appropriately restore to EPA its control of the IRIS process, increase the transparency of the process, and streamline aspects of the process, among other things. We believe that the agency’s ability to produce timely, credible, and transparent assessments will also depend in large measure on clear implementation procedures and rigorous management oversight, given the numerous factors that can impede EPA’s ability to complete timely IRIS assessments and the lack of clarity on some aspects of the new process. Perhaps most importantly, EPA needs to hold itself more accountable to the public and Congress for carrying out this important component of its mission, especially since the IRIS program is discretionary. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or other Members of the Subcommittee may have at this time. For further information about this testimony, please contact John B. Stephenson at (202) 512-3841 or stephensonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Contributors to this testimony include Christine Fishkin (Assistant Director), Laura Gatz, Richard P. Johnson, Summer Lingard, Nancy Crothers, Antoinette Capaccio, and Carol Kolarik. Scientific Integrity: EPA’s Efforts to Enhance the Credibility and Transparency of Its Scientific Processes. GAO-09-773T. Washington, D.C.: June 9, 2009. High-Risk Series, An Update. GAO-09-271. Washington, D.C.: January 2009. EPA Science: New Assessment Process Further Limits the Credibility and Timeliness of EPA’s Assessments of Toxic Chemicals. GAO-08-1168T. Washington, D.C.: September 18, 2008. 
Chemical Assessments: EPA’s New Assessment Process Will Further Limit the Productivity and Credibility of Its Integrated Risk Information System. GAO-08-810T. Washington, D.C.: May 21, 2008. Toxic Chemicals: EPA’s New Assessment Process Will Increase Challenges EPA Faces in Evaluating and Regulating Chemicals. GAO-08-743T. Washington, D.C.: April 29, 2008. Chemical Assessments: Low Productivity and New Interagency Review Process Limit the Usefulness and Credibility of EPA’s Integrated Risk Information System. GAO-08-440. Washington, D.C.: March 7, 2008.

The Environmental Protection Agency’s (EPA) Integrated Risk Information System (IRIS) contains EPA’s scientific position on the potential human health effects of exposure to more than 540 chemicals. Toxicity assessments in the IRIS database constitute the first two critical steps of the risk assessment process, which in turn provides the foundation for risk management decisions. Thus, IRIS is a critical component of EPA’s capacity to support scientifically sound environmental decisions, policies, and regulations. GAO’s 2008 report on the IRIS program identified significant concerns that, coupled with the importance of the program, caused GAO to add EPA’s processes for assessing and controlling toxic chemicals as a high-risk area in its January 2009 biennial status report on governmentwide high-risk areas requiring increased attention by executive agencies and Congress.
This testimony discusses (1) the findings from GAO’s March 2008 report Chemical Assessments: Low Productivity and New Interagency Review Process Limit the Usefulness and Credibility of EPA’s Integrated Risk Information System and related testimonies and (2) GAO’s preliminary evaluation of the revised IRIS assessment process EPA issued on May 21, 2009. For this testimony, GAO supplemented its prior audit work with a preliminary review of the new assessment process and some IRIS productivity data. In March 2008, GAO reported that the viability of the IRIS program is at risk because EPA has been unable to complete timely, credible chemical assessments or decrease its backlog of ongoing assessments. In addition, assessment process changes EPA had recently made, and other changes it was considering at the time of GAO’s review, would have further reduced the timeliness, credibility, and transparency of IRIS assessments. Among other things, GAO found that EPA’s efforts to finalize IRIS assessments have been impeded by a combination of factors, including the Office of Management and Budget’s (OMB) requiring two additional reviews of IRIS assessments by OMB and other federal agencies with an interest in the assessments, such as the Department of Defense. Moreover, the two OMB/interagency reviews involved other federal agencies in EPA’s IRIS assessment process in a manner that hindered EPA’s ability to manage its assessments and limited their credibility and transparency. For example, the input these agencies provided to EPA was treated as “deliberative” and was not released to the public. In April 2008, EPA issued a revised IRIS assessment process. As GAO testified before this subcommittee in May 2008, the new process did not respond to GAO’s March 2008 recommendations, and some key changes were likely to further exacerbate the credibility and productivity concerns GAO had identified.
Overall, EPA's May 2009 IRIS assessment process reforms represent significant improvements and, if implemented effectively, would be largely responsive to GAO's March 2008 recommendations. For example, under the new process EPA is to manage the entire assessment process, including the interagency reviews. Under EPA's prior process, these reviews were required and managed by OMB--and at various stages, EPA was not allowed to proceed with assessments until OMB notified EPA that it had sufficiently responded to comments from OMB and other agencies. The independence restored to EPA under the new process will be critical to ensuring that EPA has the ability to develop transparent, credible IRIS chemical assessments. While the broad reforms provide a sound general framework for conducting IRIS assessments, the manner in which EPA implements the new process will determine whether the agency will be able to overcome its long-standing productivity problems and complete credible and transparent assessments. Specifically, certain aspects of the new process are incomplete or lack clarity and thus warrant management attention. For example, EPA has likely understated the time required to complete an assessment because its estimated time frames do not include the time required to complete two key steps. Overall, the viability of the IRIS program will depend on effective and sustained management and oversight, especially given the number of factors that can impede the progress of IRIS assessments. For example, even one delay in an assessment can have a domino effect, requiring the process to essentially be repeated to incorporate changing science. In addition, unlike some other EPA programs with statutory deadlines for completing various activities, the IRIS program is discretionary. GAO believes the absence of legal consequences for delays in completing assessments may contribute to EPA's failure to complete timely IRIS assessments. |
DOD’s National Guard and reserve personnel are assigned to the Ready Reserve, Standby Reserve, or Retired Reserve. At the end of fiscal year 2006, DOD had approximately 1.1 million guard and reserve members in the Ready Reserve. The Ready Reserve is composed of military members of the National Guard and reserve, organized in units, or as individuals, who are subject to recall for active duty to augment the active component in time of war or national emergency. Figure 1 shows the three subcategories that exist within the Ready Reserve: the Selected Reserve, the Individual Ready Reserve, and the Inactive National Guard. As of fiscal year 2006, the Selected Reserve had a total of about 826,000 members. The Selected Reserve largely consists of units and individuals designated by their respective services that serve in an “active drilling” status. These units and individuals are required to maintain readiness through scheduled drilling and active duty for training, usually 1 weekend a month and 2 weeks a year. They also have priority for training, equipment, and personnel over all other categories of reservists. From fiscal years 2000 through 2006, about 9 percent of the Selected Reserve average strength served in the Active Guard and Reserve program as full-time reservists. These full-time reservists perform duties associated with organizing, administering, recruiting, instructing, or training the various reserve components. The Individual Ready Reserve and the Inactive National Guard do not currently have these drilling and training requirements according to DOD policy and are composed principally of individuals who have had training, previously served in the active component or in the Selected Reserve, or have some period of their military service obligation remaining. These members may voluntarily participate in training for retirement points and promotion with or without pay.
DOD’s selected reservists serve in one of six reserve components: the Army National Guard, the Army Reserve, the Navy Reserve, the Air National Guard, the Air Force Reserve, and the Marine Corps Reserve. The Army National Guard and the Air National Guard comprise what is known as the National Guard. The National Guard is unique in that it has dual missions, both federal and state; when not in federal service, it is available for use by the governor as provided by the U.S. Constitution and laws of the state. The National Guard is the only military force immediately available to a governor in times of emergency, including civil unrest and natural or manmade disasters. Under state law, the National Guard provides protection of life and property and preserves peace, order, and public safety. Since the end of the Cold War, the roles and contributions of the reserve force have changed. In the post-World War II era, DOD’s reserve operated primarily as a strategic force—a force management tool that was rarely activated. For example, from 1945 to 1989, reservists were called to active duty as part of a mobilization by the federal government only four times, an average of less than once per decade. Since 1990, reservists have been mobilized by the federal government six times, an average of nearly once every 3 years. Additionally, since September 11, 2001, reserve forces have been used extensively to support the Global War on Terrorism. In fact, about 500,000 reservists have been mobilized, primarily for contingency operations in Afghanistan and Iraq. As a result of the change in use, reserve units are becoming more integrated into military operations, calling for a new relational model between the active and reserve components, which changes the nature of reserve service to an operational role for the reserve components and requires more frequent mobilizations as well as incorporation into the total force. 
To attract and retain sufficient numbers and quality of guard and reserve personnel, DOD provides reserve personnel with a mix of cash, noncash, and deferred compensation based on duty status. When in part-time drilling status, reservists receive cash compensation or basic pay on a prorated basis as well as other cash incentives, such as retention bonuses or special pays for proficiency in an area. Part-time reservists are also entitled to noncash benefits, such as unlimited access to commissaries, a premium-based health care benefit for reservists and their dependents, and educational benefits. Lastly, part-time reservists who become retirement eligible qualify for retirement benefits including a pension, health care for life, and access to installation-based benefits such as exchanges. The key difference in deferred benefits between active duty servicemembers and part-time reservists is when they become eligible to receive these benefits. For example, a part-time reservist becomes eligible for a retirement annuity at age 60; in contrast, active duty servicemembers become eligible for retirement benefits at any age after completing a minimum of 20 years of service. Reservists activated for contingency operations are eligible to receive the same compensation and benefits as active duty personnel including regular military compensation—basic pay, housing and subsistence allowances, and the federal tax advantage—depending on their pay grade and years of service—and special and incentive pays. In addition, mobilized reservists are eligible for full health care benefits for themselves and their dependents. Table 1 illustrates this mix and compares it to the compensation provided to active duty servicemembers. In addition, appendix III contains more details about the compensation available for reserve personnel. 
The total cost to the federal government to compensate both part-time and full-time National Guard and reserve personnel increased significantly, about 47 percent, from fiscal year 2000 to fiscal year 2006. The cost increased from about $13.9 billion in fiscal year 2000 to about $20.5 billion in fiscal year 2006, as shown in figure 2. This cost includes (1) cash compensation, such as basic pay and other allowances; (2) noncash compensation, such as education assistance and health care; and (3) deferred compensation, that is, benefits that promise future compensation like retirement pay and health care. However, this cost does not include all compensation, such as accrual costs for veterans’ benefits. Over this same time period, the per capita cost to the federal government for part-time drilling reservists almost doubled, from about $10,100 in fiscal year 2000 to about $19,100 in fiscal year 2006, as shown in figure 3. This per capita cost is an average of what it cost the government to compensate servicemembers; it is not what the servicemembers “receive in their paycheck.” Servicemembers’ individual cash compensation will vary significantly depending on individual pay grade and other factors such as years of service or if the servicemember has dependents. The compensation cost does not include all of the cost needed to support additional servicemembers, because it does not include those costs associated with recruiting and training personnel. This increase in per capita cost occurred during a time when the average strength of part-time drilling reservists declined by about 6 percent, from about 746,400 reservists in fiscal year 2000 to 699,800 reservists in fiscal year 2006. This decline in the average number of part-time personnel may be attributed to many factors, such as the Navy’s restructuring of its force, as part of its Active-Reserve Integration process, which reduced the number of part-time reservists.
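The growth figures above can be reproduced with simple arithmetic. A minimal sketch, using the report's rounded dollar figures:

```python
# Check of the reserve compensation growth figures cited in the text
# (dollar values are the report's rounded figures).
total_fy2000 = 13.9e9   # total reserve compensation cost, fiscal year 2000
total_fy2006 = 20.5e9   # total reserve compensation cost, fiscal year 2006
growth = (total_fy2006 - total_fy2000) / total_fy2000
print(f"Total cost growth: {growth:.0%}")  # about 47 percent

per_capita_fy2000 = 10_100  # per capita cost, part-time drilling reservists
per_capita_fy2006 = 19_100
ratio = per_capita_fy2006 / per_capita_fy2000
print(f"Per capita growth: {ratio:.2f}x")  # about 1.89x, i.e., almost doubled
```

Both computed values match the report's characterizations ("about 47 percent" and "almost doubled").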
Moreover, Army National Guard and Army Reserve officials attributed the decline in average strength to their recruiting difficulties. In addition to part-time reservists, about 9 percent of reservists work full-time, and their per capita cost to the federal government also increased, as shown in figure 4, from about $90,100 in fiscal year 2000 to about $115,200 in fiscal year 2006, about 28 percent. Although full-time reservists are eligible to receive the same compensation as active duty servicemembers, the per capita cost for compensation presented here is less than the per capita cost for an active duty servicemember. This is because some costs were not associated with the full-time reservists, such as accrual costs for veterans’ benefits or costs for installation-based benefits, such as exchanges and family support programs. This increase is similar to the trends in active duty per capita compensation cost (see app. I for similar data on active duty servicemembers’ compensation cost). Unlike their part-time counterparts, the full-time reservists’ average strength increased by about 9 percent during this time period, increasing from about 64,500 in fiscal year 2000 to 70,300 in fiscal year 2006. The growth in reserve compensation overall is primarily attributed to increases in deferred compensation, although cash and noncash compensation also increased. Deferred compensation represented a growing share of overall reserve compensation costs, increasing from 12 percent in fiscal year 2000 to 28 percent in fiscal year 2006. Specifically, deferred compensation more than tripled, increasing from $1.7 billion in fiscal year 2000 to $5.8 billion in fiscal year 2006. This increase was largely due to the increase in new health care benefits for the Medicare-eligible population, known as TRICARE for Life. Further, DOD estimates that TRICARE for Life represented 48 percent of the increase in DOD’s spending on health care from fiscal years 2000 through 2005.
Retirement pay accrual also contributed to the growth in deferred compensation, and its increase was a result of across-the-board increases in the basic pay rate. Additionally, in fiscal year 2004 Congress enhanced disability retirement benefits to allow concurrent receipt—simultaneous payment—of DOD retirement pay and Department of Veterans Affairs disability benefits. Prior to this enhancement, retirees had to decide whether to receive DOD’s full, but generally taxable, retirement pay or receive the nontaxable veteran’s disability pay, which would reduce or offset dollar-for-dollar DOD’s retirement pay. As a result of this expansion of benefits, the Treasury Department, through general revenues, was required to cover the additional military retirement cost of providing concurrent receipt, which was about $251 million in fiscal year 2006. Noncash benefits also increased—about 29 percent—primarily due to increased costs for full-time reservists’ health care benefits and expanded health care benefits for part-time reservists and their families. Since fiscal year 2000, noncash benefits have represented about 11 percent of overall compensation costs. Cash compensation, including basic pay and enlistment and reenlistment bonuses, increased about 19 percent between fiscal years 2000 and 2006. This increase is largely a result of across-the-board increases in basic pay. In addition, the reserve components also experienced significant growth in the reserve incentive program. Specifically, the costs of reenlistment bonuses increased more than 1,000 percent, rising from about $36 million in fiscal year 2000 to almost half a billion dollars in fiscal year 2006. Cash compensation decreased from 76 percent of overall compensation costs in fiscal year 2000 to 61 percent of cost in fiscal year 2006. Table 2 provides a detailed list of the components of reserve compensation and how they have changed from fiscal year 2000 to fiscal year 2006.
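The deferred compensation and bonus growth rates above check out arithmetically. In the sketch below, $0.5 billion stands in for "almost half a billion dollars" and is an approximation for illustration, not an exact report figure:

```python
# Deferred compensation more than tripled (report figures).
deferred_fy2000 = 1.7e9
deferred_fy2006 = 5.8e9
print(f"Deferred compensation grew {deferred_fy2006 / deferred_fy2000:.1f}x")  # about 3.4x

# Reenlistment bonus growth; 0.5e9 approximates "almost half a billion
# dollars" (an assumption, not an exact report figure).
bonus_fy2000 = 36e6
bonus_fy2006 = 0.5e9
pct = (bonus_fy2006 - bonus_fy2000) / bonus_fy2000 * 100
print(f"Bonus cost growth: about {pct:.0f} percent")  # well over 1,000 percent
```

Under that assumption the bonus growth comes to roughly 1,300 percent, consistent with the report's "more than 1,000 percent."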
In addition to calculating reserve compensation cost, we also updated our previous work on active duty compensation costs and found those costs have also increased, rising from about $131.7 billion to $173.2 billion (32 percent) from fiscal year 2000 to fiscal year 2006. Cash compensation accounted for 48 percent of total active duty compensation costs, while noncash and deferred compensation accounted for 21 and 31 percent, respectively. In fiscal year 2006, it cost the federal government more than $126,000, on average, to provide annual compensation to active duty servicemembers. See appendix I for more information on active duty compensation cost. When taken together, active and reserve compensation costs have grown markedly since 2000, and these costs may not be sustainable within the context of DOD’s total budgetary needs and the nation’s increasing fiscal imbalance. Total military compensation for the active and reserve components increased from about $147 billion in fiscal year 2000 to $195 billion in fiscal year 2006—about 33 percent. Much of these increases in compensation costs are not directly driven by ongoing operations in Iraq and Afghanistan and, as a result, it is not anticipated that the costs will significantly recede after the operations in Iraq and Afghanistan subside. While some of the costs are directly related to the ongoing operations—such as the pay for mobilized reservists and enlistment and reenlistment bonuses—most of the significant increases were made to basic pay and deferred compensation, such as retirement pay and health care for retirees, which will not recede after ongoing operations are ended. An example of a recently expanded noncash compensation benefit that will not recede after the ongoing military operations are completed is the premium-based health care benefit for reservists and their dependents known as TRICARE Reserve Select. 
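The active duty and combined growth rates cited above can be verified the same way; a sketch using the report's figures:

```python
# Active duty compensation growth, fiscal year 2000 to 2006 (report figures).
active_fy2000, active_fy2006 = 131.7e9, 173.2e9
active_growth = (active_fy2006 - active_fy2000) / active_fy2000
print(f"Active duty growth: {active_growth:.0%}")  # about 32 percent

# Combined active and reserve compensation growth.
total_fy2000, total_fy2006 = 147e9, 195e9
total_growth = (total_fy2006 - total_fy2000) / total_fy2000
print(f"Combined growth: {total_growth:.0%}")  # about 33 percent

# The reported shares of active duty cost should sum to 100 percent.
shares = {"cash": 48, "noncash": 21, "deferred": 31}
assert sum(shares.values()) == 100
```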
This benefit will provide a continuation of health coverage as National Guard and reserve personnel transition on and off of active duty. DOD officials anticipate that TRICARE Reserve Select will result in significant future growth in the cost of noncash compensation. According to DOD estimates, the cost for this new health care benefit will increase dramatically in fiscal year 2008 to about $381 million, and will continue to increase to about $874 million, about $1,100 per capita, by fiscal year 2013, as shown in figure 5. DOD estimates that it may be a few years before a significant number of reservists enroll in the program. However, if enrollment numbers prove higher than DOD estimated, the cost for TRICARE Reserve Select may be higher than currently projected. In addition, DOD predicts that the cost for health care will consume more than 12 percent of its total budget by fiscal year 2015, compared to 7.5 percent in fiscal year 2005. As a result, service officials have commented that the only way to control personnel costs may be to reduce the number of personnel. Moreover, total compensation costs for reservists will likely increase after contingency operations subside. According to DOD officials, after contingency operations end, the number of drills executed is expected to increase. Since fiscal year 2001, the number of executed drills and training decreased for part-time reservists. This decrease is, in part, due to the increased number of reservists called to active duty, which has left fewer reservists and units available to do their required drills. Although this may not have an impact on the per capita costs of reserve compensation, it would drive up overall costs to compensate more part-time reservists. DOD does not know the extent to which its mix of cash, noncash, and deferred compensation is meeting its human capital goals of recruiting and retaining personnel. 
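The fiscal year 2013 TRICARE Reserve Select projection implies a covered population that can be back-computed from the two figures given. The implied population below is an inference for illustration, not a number from the report:

```python
# Back-of-the-envelope check of the fiscal year 2013 TRICARE Reserve
# Select projection. The implied population is inferred, not reported.
projected_cost = 874e6   # projected fiscal year 2013 cost
per_capita_cost = 1_100  # projected fiscal year 2013 per capita cost
implied_population = projected_cost / per_capita_cost
print(f"Implied covered population: about {implied_population:,.0f}")  # roughly 795,000
```

That implied population of roughly 795,000 is broadly consistent with the Selected Reserve strength of about 826,000 cited earlier, which may suggest the per capita figure is averaged over the eligible population rather than over enrollees.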
DOD’s and Congress’ piecemeal approach to reserve compensation has created a mix of compensation that has shifted toward more deferred compensation, even though this may not be an efficient use of resources. In addition, DOD is unable to gauge the efficiency and effectiveness of the mix of reserve compensation and its compensation tools because it lacks a compensation strategy and performance measures to assess its mix of compensation. DOD and Congress have reacted to the current environment to address recruiting and retention problems by adding compensation. However, these efforts have been done in a piecemeal fashion that has shifted the mix of reserve compensation toward more deferred benefits, even though this may not be the most efficient allocation of compensation to enable DOD to meet its recruiting and retention human capital goals. Significant increases in the frequency and length of mobilizations to Iraq and Afghanistan have led to reservists being separated from their families for longer periods and potentially experiencing interruptions in their civilian careers. In fact, a recent memorandum from the Secretary of Defense indicates that reservists should expect to be mobilized on a regular cycle—with a goal of 1 year mobilized followed by 5 years nonmobilized. This change in utilization of reservists and the components’ recent recruiting and retention challenges have corresponded with Congress and DOD adding various benefits and types of pay to address recruiting problems in some reserve components and to take care of servicemembers over their lifetime. For example, the cost of the reserve incentive program, which primarily provides discretionary cash bonuses for enlistment and reenlistment, increased more than 1,000 percent from fiscal year 2000 to fiscal year 2006. According to service officials, this increase was to address potential recruiting shortfalls. 
The resulting complex accumulation of pays and benefits has shifted the mix of reserve compensation toward deferred compensation—that is, the promise of future compensation like retirement pay and health care. Figure 6 shows an increase in deferred compensation from 12 percent of total reserve compensation in fiscal year 2000 to 28 percent in fiscal year 2006. This shift to deferred compensation has also been observed for active duty compensation costs. See appendix I for more information on the active component’s compensation costs. Deferred compensation affects the current cost of compensation because funds must be set aside today to provide these benefits in the future, over the reservist’s lifetime. While DOD and Congress have added pays and benefits over the past 6 years, it is questionable whether there was consideration of the appropriateness of the changes, including how the changes compared to compensation in the civilian sector, what the efficiency and return of these changes would be in terms of meeting the department’s human capital goals of recruiting and retention, or whether the compensation changes were affordable and sustainable over the long term. DOD defines efficiency of its compensation system as paying no higher or lower than necessary to fulfill the basic objective of attracting, retaining, and motivating the kinds and numbers of servicemembers needed. However, this increase in deferred compensation is not necessarily the most efficient allocation, nor does it provide the best return on the compensation investment. In fact, DOD does not know the most efficient allocation of compensation needed to meet its recruiting and retention goals because it has not evaluated reserve compensation to determine the appropriate mix of compensation to attract and retain sufficient numbers of qualified personnel. 
Although the efficiency of noncash and deferred compensation is difficult to assess because the value servicemembers place on them is highly individualized, studies indicate cash compensation is not only preferred to noncash and deferred compensation, but it is also a more efficient recruiting and retention tool for active duty servicemembers. In our 2005 report on active duty compensation, we stated that it is generally accepted that some deferred benefits, such as retirement, are not valued as highly by servicemembers as current cash compensation. Cash pay today is a far more efficient tool than future cash or benefits for the recruiting and retention of active duty personnel. For example, a study assessing the military draw down in the early 1990s found that when active duty servicemembers were offered a choice of lump-sum cash payments or annuities, a vast majority selected the lump-sum payment, even though it had considerably less net present value. This preference for cash compensation has a profound impact on the efficiency of DOD’s compensation system, especially considering that fewer than one in four part-time reservists will receive these costly deferred benefits. More specifically, about 24 percent of those who join the guard and reserve will ultimately earn nondisability retirement pay and health care for life. Typically, deferred and noncash compensation is offered across the board, which limits the department’s flexibility to offer incentives, target personnel, or turn on and off compensation as it is needed to recruit and retain. Moreover, these changes may not be sustainable over the long term. Some of the noncash and deferred compensation that have been added in response to the department’s recruiting and retention problems are inflexible benefits and long-term costs that the department will find difficult to stop providing, such as health care for reservists. 
Concerns about the most efficient and effective allocation of compensation to meet recruiting and retention goals are increasingly important given the recent recruiting and retention challenges the services have faced. In November 2005, we reported that the reserve components were having recruiting and retention challenges, specifically filling certain occupation specialties such as military police. The Congressional Research Service also reported that the reserve components missed recruiting goals by 12 to 20 percent in fiscal year 2005. Although the Marine Corps Reserve and Air Force Reserve met their 2006 recruiting goals, the other reserve components missed their goals. In addition, officials expressed concern that the services will have difficulty meeting future recruiting goals. The Commission on the National Guard and Reserves reported that polling data of young people suggest that the future for recruitment remains problematic as the propensity of youth to join the military declined from 15 percent in 2005 to 10 percent in 2006. In addition to a decline in propensity to join the military, according to DOD, fewer soldiers leaving active duty are transitioning to the reserves. Moreover, the quality of recruits is declining. At a time when the nation faces an increasing fiscal imbalance, until DOD assesses what the appropriate compensation mix should be so that it uses its compensation resources in the most efficient manner possible, DOD may be unable to sustain these costs and effectively balance the department’s needs for new equipment and personnel while recruiting and retaining the future reserve force. DOD is unable to gauge the efficiency of the mix of reserve compensation and its compensation tools because it has not established a compensation strategy or performance measures. We have previously found that programs, such as compensation systems, need performance measures and goals to guide decision makers and program policy. 
Moreover, DOD’s Personnel and Readiness strategic plan states the importance of DOD identifying requirements, tailoring compensation and other programs to achieve objectives, and continuously reviewing personnel management. In addition, we have reported that it is necessary for an agency to monitor and evaluate its progress toward its human capital goals and the contribution that human capital outcomes have made toward achieving program results. As has been observed: “the relationships between the individual components of compensation and their systemic interrelationships as a coherent structure remain largely implicit rather than explicit. Virtually every aspect of military activity has explicit doctrines, principles, and practices embodied in field manuals, technical manuals, and various joint publications. Military compensation is noteworthy in its lack of such an explicit intellectual foundation.” Moreover, DOD does not have performance measures to gauge the efficiency of its compensation system or the various compensation tools. Performance measures are used to evaluate how closely a program’s achievements are aligned with program objectives and to assess whether a program is achieving its intended outcome. DOD and Congress have generally increased all types of compensation—adding more benefits while increasing bonuses—making it impossible to determine the relative value of each of these initiatives. Without these measures, DOD does not know which of its compensation tools—cash, noncash, or deferred—works best for recruiting and retaining personnel, and it does not know the most effective, efficient mix of compensation. Determining the return on investment for compensation and the impact of compensation on recruiting and retention is not an easy task and should be approached with caution. DOD and service officials often point to meeting end strength or recruiting and retention goals as evidence that compensation is appropriate or working. 
Although end strength is an important indicator, we do not believe it is sufficient alone. Meeting recruiting and retention goals does not indicate if the compensation system is efficient or yielding the best return on the department’s investment. There are numerous other factors, such as the economy, ongoing contingency operations, and DOD’s own recruiting and advertising program, that also influence the department’s ability to recruit and retain servicemembers. As a result, DOD does not know if the additions to the compensation system—which are becoming increasingly costly, rising 47 percent from fiscal year 2000 to 2006—are appropriate to ensure the reserve components recruit and retain a high-quality workforce in sufficient numbers and that the federal government has the best return on investment. In DOD’s response to this report, the department emphasized that it has not sought some of the increases in deferred and noncash compensation that Congress has recently given to servicemembers. Also, in our discussions with DOD officials, they told us that the department has focused on cash compensation in recent years and, in some cases, has opposed increases in deferred compensation. For example, the Secretary of Defense stated, in May 12, 2004, testimony before the Senate Appropriations Defense Subcommittee that, in recent years, Congress has often added entitlement-like changes, beyond DOD’s recommendations, which concentrated on those who have already served. The Secretary of Defense’s statement pointed out the fiscal effects of these decisions by stating that entitlements such as TRICARE for life are increasing substantially the permanent costs of running the department with only modest effect on recruiting and retaining personnel. Nevertheless, DOD has not formally assessed the appropriate mix of compensation and has not developed a written policy or document that specifies the department’s overarching strategy for compensation. 
Until DOD establishes a strategy for determining the best mix of cash, noncash, and deferred compensation and develops performance measures to evaluate the efficiency of compensation tools, DOD and Congress will be unable to make informed decisions about which compensation tools will provide the best return on investment, be sustainable over the long term, and be effective in recruiting and retaining the future reserve force. Decision makers in Congress and DOD do not have adequate transparency over total costs for providing reserve compensation—including the allocation of costs to cash, noncash, and deferred compensation—and the cost of mobilized reservists. Good business practices require adequate transparency over investments of resources, especially in times of fiscal constraint. However, today there is no single source where decision makers can go to see all the costs of reserve compensation. In addition, the cost of mobilized reservists is not transparent. The lack of transparency stems in part from the fact that about a quarter of the costs of reserve compensation fall outside the military personnel appropriation for DOD. In fact, costs are located within three federal agencies—DOD, Department of Veterans Affairs, and Department of the Treasury—depending on the type of compensation and the duty status of the reservists—active reserve or mobilized, as shown in figure 7. Furthermore, within DOD, compensation costs are found in four different budgets—the reserve components’ military personnel, active components’ military personnel, active components’ operation and maintenance, and the Defense Health Program. Most of the cash costs—such as basic pay, allowances, and special pays and incentives—are located in either the reserve or active military personnel budgets, depending on whether the reservist is mobilized. In addition, the reserve military personnel budgets combine some cash costs. 
For example, pays and allowances include such costs as retired pay accrual, basic allowance for subsistence, basic allowance for housing, and special and incentive pay as authorized. Furthermore, some noncash costs are located in the active operation and maintenance budget and active and reserve military personnel budgets. Some of these noncash costs, such as those for commissary and morale, welfare, and recreation facility use, are not broken out by active and reserve costs because use of these facilities is open to both components. Moreover, deferred costs for health care for the Medicare-eligible retirees and their dependents are found in the Defense Health Program budget, while some of the costs for concurrent receipt of disability retirement from DOD and Veterans Affairs are found in the Treasury budget. Furthermore, we had to calculate some costs for reserve compensation because they were not captured in any budget documents. To do this, DOD provided, at our request, the accrual costs for future retirees and their dependents. Similarly, we estimated the tax expenditure for the federal government from the nontaxable compensation provided to servicemembers. We estimated that the cost for tax expenditures for full-time reservists alone was $436 million in fiscal year 2006. In addition, the Department of Veterans Affairs does not calculate the accrual cost for veterans’ benefits for reservists, and we did not attempt to calculate these costs either, because reservists are likely to be eligible for the majority of these benefits based upon active duty service. In appendix I, we present the accrual costs for active duty veterans’ benefits that we calculated using data from the 1999 President’s Budget. Comparable information for the reserve components was not available. As a result, these costs are unknown. This lack of information makes it difficult for decision makers to see the full costs of all the compensation pays and benefits provided to reservists. 
Transparency over compensation costs is further limited when reservists are mobilized because mobilized reservists are paid from active duty budgets. Moreover, compensation costs for mobilized reservists are difficult to determine within the active components’ budgets, in part, because they have been paid out of the supplemental funding the active components receive for the global war on terrorism. The absence of information about the compensation costs of mobilized reservists further dilutes decision makers’ ability to see the full picture of the costs of reserve compensation to the federal government. In addition, as mobilizations are expected to become a regular part of reservists’ careers, these costs will become a part of doing business for the reserves, which increases the importance of being able to identify them. DOD is taking measures to address some of these problems. For example, DOD required the services to include detailed cost estimates of reserves called to active duty in the fiscal year 2007 and 2008 supplemental submissions. In addition, DOD is working on a system to consolidate personnel and pay systems for all active and reserve components, known as the Defense Integrated Military Human Resources System. This consolidation may improve transparency of DOD costs by integrating all human resource information for active, guard, and reserve personnel of all the services. However, as we reported in 2005 and 2006, this task is proving to be difficult to complete. We found that the services have unique requirements that are limiting the flexibility to consolidate to a single solution. Furthermore, service officials told us that this system is unlikely to improve transparency over budgeted costs. In 2005, we recommended that DOD compile the total costs to provide military compensation and communicate these costs to decision makers within the administration and Congress. 
Despite our recommendation, DOD has not compiled in one readily accessible place the total costs for active or reserve personnel compensation, including mobilized reservists, and the allocation of these costs among cash, noncash, and deferred compensation. Such a compilation could enable decision makers to accurately assess these costs and to manage the total force, as well as to make fact-based human capital adjustments efficiently and effectively. Some steps have been taken to improve transparency, and recognition of the effect of rising compensation costs appears to be growing. For example, the Office of Management and Budget appears to have recognized the need for greater transparency over compensation costs. For the first time, in its Analytical Perspectives for fiscal year 2008, the Office of Management and Budget described the total cost of DOD active duty compensation and its allocation to cash, noncash, and deferred compensation. The Analytical Perspectives document also describes significant growth in per capita compensation in recent years. However, this analysis is submitted separately and is part of a more than 400-page document that accompanies the budget but is not part of the military budget submission. In addition, in its February 2007 report on federal budget options, the Congressional Budget Office discussed the option of consolidating military personnel costs in a single appropriation. The report stated that the consolidation of compensation costs would not only provide more complete information about how much money is being allocated in support of military personnel, but it would also give DOD managers a greater incentive to use resources wisely. Until total costs for reserve compensation are compiled in a transparent and easily accessible manner, decision makers will be unable to determine the affordability and efficiency of the reserve compensation system. 
Knowing these costs is especially important given the growing fiscal challenges the country faces. DOD and Congress have reacted to the dramatic shift from a strategic to an operational reserve by adding compensation without adequate consideration of how the additions compare with civilian sector compensation; whether they are appropriate, affordable, and sustainable over the long term; or their return on investment in terms of recruiting and retention. Looking forward, DOD officials are concerned about their ability to manage personnel costs, because so much of the cost lies in entitlements—items that managers have little to no control over, such as retirement pay and health care. As a result, it is highly questionable whether the increasingly costly compensation system is affordable, sustainable, and fiscally sound over the long term. This challenge is especially acute given the nation’s increasingly constrained fiscal environment and DOD’s need to balance its personnel costs with its desire for new equipment and infrastructure. Without assessing what the appropriate compensation mix should be, DOD will be unable to ensure that it uses its compensation resources most efficiently. Moreover, until DOD establishes a compensation strategy on which to base changes in compensation and performance measures to gauge the efficiency of changes to the compensation system, DOD will be unable to use its compensation resources in the most effective and efficient manner, which ultimately could negatively affect DOD’s ability to recruit and retain a highly qualified force in sufficient numbers. In addition to the lack of an underpinning compensation strategy, the lack of transparency over compensation costs makes it difficult to make fact-based decisions about the efficiency and effectiveness of adjustments to the compensation system and, in broader terms, adjustments to the total force. 
A complete picture of total compensation cost for reserve personnel includes the costs for those reservists who are mobilized as well as the costs for cash, noncash, and deferred compensation. Without an inclusive display of all the reserve compensation costs, DOD will not be able to determine the magnitude of funding or the potential for current investments and operations to turn into long-term financial commitments, which raises real questions about the affordability and sustainability of the rate of growth in defense spending. Understanding the total cost of military compensation can provide DOD and Congress with important information as they make assessments on compensation matters, and it also allows decision makers to make informed trade-offs among competing demands such as force structure, equipment acquisition, and infrastructure. Moreover, as DOD embraces the change in the use of reservists to an operational force mobilized more regularly, the traditional use of reservists as “weekend warriors” becomes less realistic. In today’s environment, reservists will likely be activated regularly during their careers, and the associated compensation costs are likely to be significant. Taken together, the lack of a compensation strategy, performance measures, and transparency limits decision makers’ ability to make fact-based decisions about the appropriateness of the mix and level of compensation provided to reservists. To improve the appropriateness of the reserve compensation system and to gain transparency over total reserve compensation costs, we recommend that the Secretary of Defense take the following actions: Establish a clear compensation strategy that includes performance measures to evaluate the efficiency of compensation in meeting recruiting and retention goals, and use the performance measures to monitor the performance of compensation and assess what mix of compensation will be most efficient in the future. 
Compile the total costs to provide reserve compensation for part-time, full-time, and mobilized reservists and communicate these costs as well as the allocation of these costs among cash, noncash, and deferred compensation to decision makers within the administration and Congress—perhaps as an annual exhibit as part of the President’s budget submission to Congress. As future changes are considered to pay and benefits for National Guard and reserve personnel as well as veterans, Congress should consider the long-term affordability and sustainability of these changes, including the long-term implications for the deficit and military readiness. We provided the Department of Veterans Affairs and DOD a draft of this report for review and comment. The Department of Veterans Affairs agreed with the statements in the report as they pertain to the department and had no formal comments on the report. DOD’s comments are reprinted in this report as appendix IV. DOD partially concurred with our recommendations, but had several technical comments, which we have incorporated where appropriate. DOD partially concurred with our first recommendation to establish a clear compensation strategy and use performance measures to monitor and assess the mix of compensation. DOD noted that the department has consistently communicated its approach to Congress in Congressional testimony and that DOD has sponsored efforts, such as the Defense Advisory Committee on Military Compensation, to assess its overarching compensation strategy. DOD also pointed out that it has generally not sought increases in deferred and noncash compensation, and stated during congressional testimony the department’s preference for cash compensation. We believe that DOD’s argument that Congress has mandated changes to compensation that it did not seek further illustrates why the department needs to develop an explicit compensation strategy and performance measures. 
As we point out in this report, a compensation strategy could be used to underpin the department’s compensation decisions, and performance measures could be used to track their effectiveness. Furthermore, the department would be in a better position to make business case arguments for or against changes to its compensation system, and provide fact-based evidence regarding the efficiency of the allocation of cash, noncash, or deferred compensation. DOD also partially concurred with our second recommendation to compile total costs to provide reserve compensation for both drilling and mobilized reservists and communicate those costs to decision makers within the administration and Congress. In its response to this report, DOD stated that this recommendation may be more appropriate for the Office of Management and Budget since the costs extend across multiple federal departments. We made a similar recommendation to the department in our July 2005 report on active duty compensation. Since our 2005 report, the Office of Management and Budget published a compilation of active duty compensation costs and the allocation of cost to cash, noncash, and deferred compensation in its fiscal year 2008 Analytical Perspectives. In addition, the department noted that it has discussed with the Office of Management and Budget the possibility of expanding the information to include Guard and reserve compensation costs. Such actions represent steps in the right direction. However, placing the information in the 400-plus page Analytical Perspectives document that accompanies the budget may not be as effective as an annual budget exhibit included as part of the military budget request. While we believe OMB has taken a step in the right direction, we continue to believe that DOD is in the best position to exercise ownership over total compensation costs and, accordingly, should compile and present total compensation costs as part of its budget submission. 
As we stated in our report, lack of transparency over compensation costs is, in part, due to the fact that DOD lacks a single source to illustrate total compensation costs for drilling, full-time, and mobilized reservists. We continue to believe that compilation of costs in a single source is an important first step in gaining transparency over total reserve compensation costs. This type of compilation would provide decision makers with a resource to make fact-based decisions about future changes to compensation. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (404)679-1900 or pendletonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other staff members who made key contributions to this report are listed in appendix V. We originally reported total active duty compensation costs to the federal government in July 2005. This appendix updates the total compensation costs in that report. As shown in table 1, adjusted for inflation, the total cost for providing active duty compensation increased from about $131.7 billion to $173.2 billion (32 percent) from fiscal year 2000 to fiscal year 2006. Cash benefits accounted for 48 percent of total compensation costs, while noncash and deferred benefits accounted for 21 and 31 percent, respectively. In fiscal year 2006, it cost the federal government more than $126,000, on average, to provide annual compensation to active duty servicemembers. Three things are important to remember about our estimate. 
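The growth and per capita figures cited above follow directly from the table 1 totals; as a quick arithmetic check (dollar figures in billions of fiscal year 2006 dollars, taken from the text):

```python
# Total active duty compensation costs in billions of fiscal year 2006 dollars
# (from table 1 of this appendix).
cost_fy2000 = 131.7
cost_fy2006 = 173.2

# Percentage increase from fiscal year 2000 to fiscal year 2006.
growth_pct = (cost_fy2006 - cost_fy2000) / cost_fy2000 * 100
print(round(growth_pct))  # 32

# Allocation of fiscal year 2006 costs among the three compensation types,
# as reported in the text; the shares account for all of total cost.
shares = {"cash": 0.48, "noncash": 0.21, "deferred": 0.31}
assert abs(sum(shares.values()) - 1.0) < 1e-9
```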
First, it is an average of what it cost the government to compensate servicemembers, not what the servicemembers “receive in their paycheck.” Second, agencies other than Department of Defense (DOD), such as the Department of Veterans Affairs, Department of Education, and Department of Labor, provide compensation to servicemembers, so our estimate includes their appropriated costs. Third, the estimate does not include the cost of adding servicemembers, because it does not include costs for acquiring and training personnel. To calculate the cost to the federal government of compensating active duty servicemembers, we interviewed officials from DOD, Department of Veterans Affairs, Department of Labor, Department of Education, Office of Management and Budget, and the Congressional Budget Office. We analyzed and compiled data for fiscal years 2000-2006 from the Army, Air Force, Marine Corps, and Navy’s military personnel and operations and maintenance budget justification books. We also reviewed and compiled data from the Department of Veterans Affairs benefits and health care budget justification books. To estimate the total federal tax expenditure that results from the tax-exempt housing and subsistence allowances military personnel receive, we used the National Bureau of Economic Research’s TAXSIM Model to simulate tax liabilities under different scenarios. To estimate health care accrual costs, we used official DOD estimates of accrual health care costs for all retirees and their dependents. In addition, DOD’s Office of Health Affairs provided us the estimated cost of health care for active duty servicemembers and their dependents for fiscal years 2004-2006. 
To calculate the costs of future veterans’ benefits for current active duty servicemembers, including the costs for health care, compensation, pension, and other types of benefits, we used notional costs as a percentage of basic pay for accruing and actuarially funding Department of Veterans Affairs benefits in the DOD budget. Lastly, we used deflators to adjust the budget appropriations into current fiscal year 2006 dollars. For more detailed information on our methodology, see appendix II. To determine how Guard and reserve servicemembers have been compensated, we analyzed relevant regulations and legislation since 2000, identified changes in compensation policy, and compiled a list of pays and benefits for which reservists are currently eligible. We then used budgetary data to assign costs to the various pays and benefits of the reserve compensation system. This included compiling data for fiscal years 2000-2006 from the Army National Guard, Army Reserve, Air National Guard, Air Force Reserve, Marine Corps Reserve, and Navy Reserve’s military personnel and operation and maintenance budget justification books. Within the operation and maintenance justification books, we reviewed the budgets of the defense health program; the defense commissary agency; the morale, welfare, and recreation activities (OP-34 exhibit); and DOD dependent education activity. We also reviewed data from the Department of Veterans Affairs benefits and health care budget justification books. In addition, we interviewed DOD officials in Washington, D.C., from the offices of (1) the Assistant Secretary of Defense for Reserve Affairs; (2) the Comptroller within the Office of Secretary of Defense; (3) each of the national guard and reserve components, excluding the Coast Guard; (4) the Actuary; (5) Health Affairs; and (6) the Office of Program Analysis and Evaluation. 
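The accrual approach described above treats the current year’s cost of future veterans’ benefits as a notional percentage of basic pay. A minimal sketch of that calculation follows; the rate and the basic pay total are hypothetical placeholders, since the actual percentages were unofficial Office of Management and Budget estimates not reproduced here:

```python
# Hypothetical notional accrual rate, expressed as a fraction of basic pay.
# The actual rates were based on table 12-2 of the 1999 President's Budget.
notional_rate = 0.18

# Hypothetical aggregate basic pay for active duty servicemembers, in dollars.
total_basic_pay = 50_000_000_000

# Accrued cost of future veterans' benefits attributed to the current year.
veterans_accrual = notional_rate * total_basic_pay
print(f"${veterans_accrual:,.0f}")  # $9,000,000,000
```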
We also interviewed officials from the Department of Veterans Affairs in Washington, D.C., the Office of Management and Budget in Washington, D.C., and the Congressional Budget Office in Washington D.C. All of the associated costs for the reserves could not be found in the budgetary exhibits. In some instances, we requested data from the appropriate federal agency. For example, the Office of Health Affairs provided (1) per capita TRICARE cost estimates for full-time administration and support personnel for fiscal years 2004 through 2006; (2) cost estimates for TRICARE Reserve Select (TRS) for fiscal year 2005 and fiscal year 2006; and (3) projected cost estimates for TRS for fiscal year 2008 to fiscal year 2013. To calculate the health care cost for full-time reservists for fiscal years 2000 through 2003, we relied on our previous health care cost estimates for active duty personnel. The Office of the Actuary provided the costs for the retired health care accrual. Although the Department of Veterans Affairs provided assistance, we determined that the portion of reserve deferred or acrual cost associated with most veterans’ programs could not be identified without creating an accrual model. In other instances, we found it necessary to estimate the cost to the federal government. For example, we used the data from the Office of the Actuary’s Valuation Report to calculate the Department of Treasury’s contribution to disability compensation accrual. We also estimated total federal tax expenditures that resulted from tax-exempt housing and subsistence allowances received by military personnel in 2005 and 2006. To do this we estimated the taxes owed by an active duty servicemember for every combination of years of service, rank, and family size, with and without the tax exemption for housing and subsistence. The number and pays of servicemembers were provided by DOD’s Selected Military Compensation Tables. Only military income was used to calculate the taxes owed. 
We calculated taxes owed for each group using the National Bureau of Economic Research’s TAXSIM model, which simulates tax liabilities under different scenarios. We calculated the tax expenditure as the difference between the taxes owed without the tax exemptions and with the tax exemptions. We determined the average tax expenditure for active duty servicemembers by computing an average based on the number of servicemembers in each category. We applied these results to the active duty compensation cost. Next, to estimate the percentage difference in tax expenditure between active duty and full-time reservists, we computed the ratio of average tax expenditure in 2006 based on the distribution of years of service and ranks for full-time reservists and active duty servicemembers. When computing the taxes owed by reservists, we assumed that the family size of full-time reservists was the same as active duty servicemembers of identical rank and years of service. Data for reservists were taken from the Official Guard and Reserve Manpower Strength and Statistics. We applied these results to the compensation cost of full-time reservists. Additionally, to estimate future health care costs for the current reserve population when they retire, we used official estimates of health care accrual costs for servicemembers older than 65 (Medicare eligible) and younger than 65 (non-Medicare eligible). DOD’s Office of the Actuary provided the per capita normal costs for postretirement medical benefits, that is, the present value of the current year's attributed portion of future benefits for active personnel and their eligible dependents. The 2000-2002 per capita normal costs were provided by DOD’s Office of the Actuary based on data from a report prepared by Milliman USA Consultants and Actuaries. Per capita normal costs for 2003 and 2004 were based on data from Milliman’s spreadsheets. 
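The tax expenditure method described above reduces to differencing simulated liabilities with and without the exemptions and weighting by the number of servicemembers in each category. A simplified illustration of that logic (the counts and liabilities below are hypothetical placeholders, not TAXSIM results):

```python
# Each tuple is one (years of service, rank, family size) category:
# (number of servicemembers, tax owed without exemptions, tax owed with exemptions).
# All figures are hypothetical.
categories = [
    (1000, 5200.0, 3900.0),
    (500, 8100.0, 6400.0),
    (250, 12300.0, 10100.0),
]

# Tax expenditure per category is the difference between the two liabilities,
# multiplied by the number of servicemembers in that category.
total_expenditure = sum(n * (no_exempt - exempt) for n, no_exempt, exempt in categories)

# Average tax expenditure per servicemember, weighted by category size.
total_members = sum(n for n, _, _ in categories)
average_expenditure = total_expenditure / total_members
print(round(average_expenditure, 2))  # 1542.86
```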
The 2005 and 2006 per capita normal costs come from the DOD Office of the Actuary’s valuations as of September 30, 2004, and September 30, 2005, respectively. Finally, when calculating aggregate costs for the various types of compensation, we used military personnel deflators from the National Defense Budget Estimates for fiscal year 2007, published by the Office of the Under Secretary of Defense (Comptroller), to adjust the budget appropriations into current fiscal year 2006 dollars. To aid our analysis, we classified the types of compensation into three categories: cash, noncash, and deferred. In addition, we classified the reserve service population into two categories: full-time and part-time. Our analysis produced per capita costs for each category of the population. For the full-time per capita cost, we used as the denominator the average strength identified in the military personnel budget justification books for the administration and support population. The average strength of Pay Groups A (reservists assigned to units), B (reservists designated as Individual Mobilization Augmentees), F (reservists completing initial entry training), and P, as reported in the military personnel budget justification books, was adjusted by subtracting the average number of mobilized Selected Reservists from each fiscal year to approximate the actual number of part-time drilling or “active” reservists. This “normalized” strength was used as the denominator for the part-time per capita cost. The Office of the Assistant Secretary of Defense for Reserve Affairs provided assistance with querying the Contingency Tracking System, managed by DOD’s Defense Manpower Data Center, to identify the monthly number of reservists serving on active duty orders for named contingencies by reserve component. 
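The “normalized” strength adjustment and per capita division described above can be expressed as a simple calculation. In the sketch below, the strengths, mobilization count, and cost total are hypothetical placeholders, not the budgeted values:

```python
# Hypothetical average strengths by pay group, as would appear in the
# military personnel budget justification books (Pay Groups A, B, F, and P).
avg_strength = {"A": 180_000, "B": 9_000, "F": 15_000, "P": 1_000}

# Hypothetical average number of mobilized Selected Reserve members in the year.
avg_mobilized = 30_000

# "Normalized" strength approximates the actual number of part-time
# drilling ("active") reservists.
normalized_strength = sum(avg_strength.values()) - avg_mobilized
print(normalized_strength)  # 175000

# Part-time per capita cost: hypothetical total part-time compensation cost
# divided by the normalized strength.
part_time_cost = 4_100_000_000
per_capita = part_time_cost / normalized_strength
print(round(per_capita))  # 23429
```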
To assess the reliability of Contingency Tracking System data, we interviewed knowledgeable officials about the system and related internal controls, and we reviewed our prior work on the reliability of Contingency Tracking System data. To assess the reliability of the analysis (monthly deployment totals) produced by the Defense Manpower Data Center from the Contingency Tracking System, we reviewed the SAS program that generated the results. We determined the data we received were reliable for their intended purpose in this engagement. To assess the extent to which DOD’s mix of cash, noncash, and deferred compensation helped DOD meet its human capital goals, we reviewed the requirements for establishing program objectives and outcome measures in federal government standards, such as the Government Performance and Results Act of 1993, as well as in GAO guidance and strategies for human capital management. We reviewed applicable DOD directives, policy, and guidance and interviewed DOD officials from each of the reserve components, as well as representatives from the Office of the Secretary of Defense and Reserve Affairs in Washington, D.C., to assess whether DOD had established any strategies for compensation or outcome measures to determine the efficiency of its compensation tools. We interviewed DOD recruiting and retention officials to determine the extent to which compensation is used to attract and retain reserve personnel. These interviews were conducted with officials in the U.S. Army Accessions Command at Ft. Knox, Kentucky; U.S. Army Retention Command at Ft. McPherson, Georgia; and the Air National Guard Office of Recruiting and Retention in Arlington, Virginia. We also reviewed reports done by DOD, the Congressional Research Service, the Commission on the National Guard and Reserves, the private sector, and others on recruiting and retention of reserve forces, as well as commonly accepted economic theory. 
To assess the extent to which DOD's approach provided transparency over total costs to the federal government, and to determine whether DOD followed those directives, policy, and guidance, we reviewed how compensation costs are presented to decision makers by analyzing the budget justification books and comparing them to applicable directives, policy, and guidance, such as the DOD Financial Management Regulation. We also interviewed knowledgeable officials at the National Guard Bureau (for the Army and Air National Guard), the Air Force Reserve Budget Division, the Army Reserve (Office of the Comptroller), the Marine Corps Reserve, and Navy Reserve budget offices in Washington, D.C., to learn more about transparency issues. We compared DOD directives and guidance to standards such as the Government Performance and Results Act and commonly accepted economic theory. To calculate the cost to the federal government of compensating active duty servicemembers, we interviewed officials from DOD, including the Office of the Secretary of Defense; the Under Secretary for Personnel and Readiness' office of compensation; the Office of the Comptroller within the Office of the Secretary of Defense and the services; the Office of the Actuary; and Health Affairs, all in Arlington, Virginia. In addition, we interviewed officials from the Department of Veterans Affairs, the Department of Labor, the Department of Education, the Office of Management and Budget, and the Congressional Budget Office, all in Washington, D.C. We examined and compiled data for fiscal years 2000-2006 from the Army, Air Force, Marine Corps, and Navy military personnel and operations and maintenance budget justification books. Within the operations and maintenance justification books, we reviewed the Defense Health Program, the Defense Commissary Agency, the morale, welfare, and recreation (OP-34 exhibit), and DOD dependent education activity budgets. 
We also reviewed and compiled data from the Department of Veterans Affairs benefits and health care budget justification books. We used deflators to adjust the budget appropriations into current fiscal year 2006 dollars. To calculate the costs of future veterans' benefits for current active duty servicemembers, including the costs for health care, compensation, pension, and other types of benefits, we used notional costs, expressed as a percentage of basic pay, of accruing and actuarially funding Veterans Affairs benefits in the DOD budget. The notional cost percentages we used were unofficial Office of Management and Budget estimates. These estimates were based on the most recent official percentages shown in table 12-2 of the 1999 President's Budget. Lastly, not all active duty compensation costs are presented in budgets. As a result, we estimated the total federal tax expenditure, and our methods are described as part of the reserve compensation costs on page 39. We requested that DOD's Office of the Actuary provide health care accrual costs as described in appendix I. We also requested that DOD's Office of Health Affairs provide health care cost estimates for active duty servicemembers and their dependents for fiscal years 2004-2006. We relied on our previously calculated active duty health care costs for fiscal years 2000 through 2003. We conducted our review from October 2006 through May 2007 in accordance with generally accepted government auditing standards.

[Appendix table: eligibility of reserve component members for compensation by duty status, comparing inactive duty training, active duty for training of 30 days or less, active duty for training of 31 days or more, and active duty other than training. Rows cover basic pay; housing and subsistence allowances; family separation allowance; tax advantages for service in a combat zone or qualified hazardous duty area; special and incentive pays; enlistment and reenlistment bonuses; health and dental care, including TRICARE; commissary, exchange, and morale, welfare, and recreation privileges; travel; legal assistance; Servicemembers' Group Life Insurance and related coverage; leave; education benefits, including the Montgomery GI Bill; clothing allowances; retirement; disability coverage; and veterans' benefits.]

Special and incentive pays are intended to compensate members for more hazardous conditions than usually experienced in peacetime and provide incentives for certain career fields that would otherwise experience manpower shortfalls. Bonuses are intended to provide the services with a flexible tool for targeting particular skills and addressing critical manpower shortfalls. There are a variety of programs offering varying levels of coverage depending on duty status and enrollment. The John Warner National Defense Authorization Act for Fiscal Year 2007 authorized the enhancement of TRICARE Reserve Select by making all Selected Reserve members and dependents eligible for the program at 2 percent of the premium. The change to the program is scheduled to take effect by October 2007. The insurance programs listed are supervised by the Department of Veterans Affairs. Enrollment in the programs requires the payment of a premium. The insurance may be bought in increments of $50,000, up to a maximum of $400,000. Children are insured at $10,000 at no additional cost. Up to $100,000 in coverage can be purchased for a spouse. Traumatic injury insurance provides immediate financial assistance to traumatically injured servicemembers so their families can travel to be with them during an often extensive recovery and rehabilitation process. 
Payments range from $25,000 to $100,000, depending on the type and severity of injury. Unlike the Montgomery GI Bill-Active Duty, for reservists meeting the eligibility requirement, the benefit is automatic and the members are not required to make a payment. The Montgomery GI Bill-Reserve Component benefit amount is smaller than the Montgomery GI Bill-Active Duty benefit. The majority of reservists have prior service in the active component. Since fees for active duty applications are lower than reserve applications, reservists with active duty prior service usually apply based on their prior active duty service. Both DOD and Veterans Affairs provide disability compensation. Since 2005, disabled veterans have been eligible to receive disability compensation from Veterans Affairs and disability retirement from DOD.

The Department of Defense (DOD) made comments on the presentation of the data in our report and raised a number of technical concerns. Our response to DOD's technical comments follows.

1. DOD commented that we did not adequately describe the impact of the increase in funding related to the Global War on Terrorism. In its comments, DOD stated that in fiscal year 2006 more than $15.6 billion of the $173.2 billion in compensation costs were supplemental funding for the Global War on Terrorism. While it is true that our estimates include supplemental funding, we do not believe that the inclusion of this funding changes our findings or conclusions. Costs paid from supplemental funding represent real costs to the federal government that we believe are appropriate to include when calculating how much the federal government spends on compensating military servicemembers. However, we understand DOD's concern, and footnotes in the report that explain our approach are sufficient. Furthermore, we believe that DOD's comment illustrates the importance of providing greater transparency over mobilized reservists' costs. 
As we have previously testified, with Global War on Terrorism costs likely to continue for the foreseeable future, it is becoming increasingly important that DOD move those costs into the baseline budget as the level of effort becomes better known and is more predictable. Greater transparency over costs would provide administration and congressional decision makers more information to make fact-based decisions and weigh competing priorities for the nation's resources.

2. DOD did not disagree with our overall finding that active duty compensation costs to the government have increased since fiscal year 2000. However, DOD stated that our reliance on end strength numbers for active duty personnel distorts active duty per capita cost calculations because that number does not include mobilized reservists. We noted in the report that mobilized reservists are paid out of active duty cash compensation costs and that our active duty per capita cost estimates do not take these mobilized reservists into account, and we acknowledge in the report that the per capita costs to provide compensation would be lower if these mobilized reservists were taken into consideration. DOD's concern further highlights the need for the department to establish greater transparency over the costs of reservists. As we state in this report, accounting for mobilized reservists is problematic, given that they count against reserve end strength numbers but are paid out of active duty accounts.

3. DOD raised concerns about the adjustments we made in our data to account for inflation and felt that the deflators, or price indices, we chose understated the real growth in compensation costs between fiscal years 2000 and 2006. We are aware that the price indices we used make our growth estimates conservative and that other indices would show similar or greater growth. 
We recognize that in examining the growth of military compensation over time, the division of this growth between real growth and growth due to inflation depends on the price index or deflator used to adjust for inflation. For example, when dividing total growth in compensation between real growth and growth due to inflation, a higher rate of inflation will produce a lower real growth rate, and vice versa. We used the deflators for military pay that are contained in the National Defense Budget Estimates for fiscal year 2007, published by the Office of the Under Secretary of Defense (Comptroller), because they represent the official DOD indices for military pay budget matters. This office produces several different deflators or price indices that DOD uses officially to adjust dollar amounts for inflation for different budgetary purposes, such as procurement or operations and maintenance. We recognize, as DOD suggested in its comments, that we could have used the Employment Cost Index or the Consumer Price Index (for Urban Wage Earners and Clerical Workers) to adjust for inflation. Although DOD suggested the Employment Cost Index for wages and salaries would have been a more appropriate price index, we would have used the Employment Cost Index specific to total compensation for the private sector, because the military compensation number we calculated included more than wages and salaries. For example, in addition to wages and salaries, we also included such things as allowances for housing and subsistence and retirement pay accrual. As we previously stated in our report, we used the military pay deflator and found that reserve compensation costs grew at a real rate of 47 percent from fiscal year 2000 to fiscal year 2006. When we redid our calculations using the Employment Cost Index, we found that reserve compensation costs grew 48 percent from fiscal year 2000 to fiscal year 2006. 
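The relationship between the assumed rate of inflation and the computed real growth rate can be illustrated with a small calculation. The index values below are hypothetical, chosen only to show that a steeper assumed price index yields a lower computed real growth rate; they are not the official DOD deflators or the published Employment Cost Index or Consumer Price Index series.

```python
# Hypothetical illustration of real vs. nominal compensation growth.
# Index values are made up; they are not official deflators.

nominal_2000 = 10.0   # billions of dollars, fiscal year 2000 (hypothetical)
nominal_2006 = 17.0   # billions of dollars, fiscal year 2006 (hypothetical)

def real_growth(nominal_start, nominal_end, index_start, index_end):
    """Percent real growth after deflating both years to a common price level."""
    real_start = nominal_start / index_start
    real_end = nominal_end / index_end
    return (real_end / real_start - 1) * 100

# A lower assumed inflation path (smaller index rise) yields higher real growth...
low_inflation = real_growth(nominal_2000, nominal_2006, 1.00, 1.10)
# ...and a higher assumed inflation path yields lower real growth.
high_inflation = real_growth(nominal_2000, nominal_2006, 1.00, 1.20)

assert low_inflation > high_inflation
```

This is why, as noted above, the same nominal budget data produce somewhat different real growth rates (47, 48, or 55 percent) depending on whether the military pay deflator, the Employment Cost Index, or the Consumer Price Index is used.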
When we redid our calculations using the Consumer Price Index, we found that reserve compensation costs grew 55 percent from fiscal year 2000 to fiscal year 2006. However, we note that DOD does not use this index to prepare its military personnel budgets.

4. DOD commented that the calculation of the part-time reservists should have included pay groups for initial entry reservists in the average strength. We agree and changed our data to include the pay groups for the initial entry reservists in the average strength.

5. DOD commented that the department made a conscious decision to improve the transparency of compensation costs of mobilized reservists starting in the fiscal year 2007 supplemental submission. We agree that the services have taken the first step of displaying these costs as part of the supplemental request; however, we believe the department would benefit from greater transparency over these costs, including presenting them as part of a complete picture of compensation costs in the military personnel justification books.

6. The sentence in question does not discuss whether or not reserve retirement pay has changed or whether DOD sought the TRICARE Reserve Select program; it simply provides examples of deferred compensation. However, we altered the sentence for clarity to reflect that we were talking about health care benefits for retirees.

7. We adjusted the language for clarification to reflect the fact that not living in close proximity to military bases does not affect eligibility.

8. We adjusted the language as suggested.

9. We changed the sentence to reflect the fact that we are referring to mobilized personnel only.

10. We specify in the introduction of the report that we limited our scope to part-time drilling reservists and full-time support personnel serving in the Active Guard and Reserve program of the Selected Reserve. We excluded reserve and guard members who were mobilized from our cost estimates. 
We defined our scope based on cost information presented by DOD in the budget justification books. These books present the cost of part-time and Administration and Support personnel. 11. Our Results in Brief now show that DOD has testified that adding deferred compensation is not its preference. 12. We excluded mobilized reservists from our cost estimates because their costs are not presented in the reserve justification books. However, we believe that the total costs of the Guard and reserve should include the cost for mobilized reservists and that the department should take steps to provide greater transparency over all compensation costs for decision makers to make fact-based decisions. 13. We added language to clarify that mobilized reservists are paid out of supplemental funding. 14. We addressed the comment by adding a footnote to the section. 15. We reordered the presentation of the reserve components as suggested. 16. Agency officials told us that ongoing operations had been part of the reason for the increase in full-time reservists during the course of the review. However, we deleted the sentence as requested in the department's formal comments. 17. We defined our use of the term full-time reservists to mean Active Guard and Reserve in the introduction of the report. 18. This statement is referring to reserve compensation costs as they are currently presented to decision makers in the justification books. However, we added "reserve" into the sentence for further clarification. 19. The report did not state that DOD has sought increases in deferred compensation, so no change was needed. 20. The sentence refers to the cost of the program to the government and not to bonuses received by individuals. 21. We added a statement to acknowledge that DOD has not sought some recent additions to deferred compensation, specifically TRICARE for life. 
However, DOD has not formally assessed the appropriate mix of compensation and has not developed a written policy or document that specifies the department's overarching strategy for compensation. 22. While it is true that DOD has sponsored a study assessing the use of compensation under a reserve continuum of service concept, we continue to believe that DOD has not developed any performance measures to regularly and systematically assess all types of compensation. The study points out the effectiveness of targeted compensation. This is an example of the foundation for the compensation strategy we are recommending DOD formalize. 23. The focus of our reports in 2005 and 2007 was on the difficulties with rolling out the Defense Integrated Military Human Resources System rather than on any particular service system. 24. DOD provided separate enclosures, in addition to its agency comments, that provided technical comments on tables in our report. We made those changes as suggested. In addition to the contact named above, David Moser, Assistant Director; Lori Atkinson, Ben Bolitzer, Renee Brown, Linda Keefer, Susan Langley, Julia Matta, Erin Noel, Charles Perdue, Rebecca Shea, Kathryn Smith, and Sonja Ware made key contributions to this report. Military Personnel: DOD Needs Action Plan to Address Enlisted Personnel Recruitment and Retention Challenges. GAO-06-134. Washington, D.C.: November 17, 2005. Defense Health Care: Health Insurance Stipend Program Expected to Cost More Than TRICARE But Could Improve Continuity of Care for Dependents of Activated Reserve Component Members. GAO-06-128R. Washington, D.C.: October 19, 2005. Military Personnel: DOD Needs to Improve the Transparency and Reassess the Reasonableness, Appropriateness, Affordability, and Sustainability of Its Military Compensation System. GAO-05-798. Washington, D.C.: July 19, 2005. Military Personnel: Preliminary Observations on Recruiting and Retention Issues within the U.S. Armed Forces. GAO-05-419T. 
Washington, D.C.: March 16, 2005. DOD Systems Modernization: Management of Integrated Military Human Capital Program Needs Additional Improvements. GAO-05-189. Washington, D.C.: February 11, 2005. Military Personnel: A Strategic Approach Is Needed to Address Long-term Guard and Reserve Force Availability. GAO-05-285T. Washington, D.C.: February 2, 2005. 21st Century Challenges: Reexamining the Base of the Federal Government. GAO-05-325SP. Washington, D.C.: February 2005. Military Personnel: DOD Needs More Data Before It Can Determine if Costly Changes to the Reserve Retirement System Are Warranted. GAO-04-1005. Washington, D.C.: September 15, 2004. Military Personnel: Survivor Benefits for Servicemembers and Federal, State, and City Government Employees. GAO-04-814. Washington, D.C.: July 15, 2004. Military Personnel: Active Duty Compensation and Its Tax Treatment. GAO-04-721R. Washington, D.C.: May 7, 2004. Military Personnel: Observations Related to Reserve Compensation, Selective Reenlistment Bonuses, and Mail Delivery to Deployed Troops. GAO-04-582T. Washington, D.C.: March 24, 2004. Military Personnel: Information on Selected National Guard Management Issues. GAO-04-258. Washington, D.C.: December 2, 2003. Military Personnel: DOD Needs More Effective Controls to Better Assess the Progress of the Selective Reenlistment Bonus Program. GAO-04-86. Washington, D.C.: November 13, 2003. Military Personnel: DOD Needs to Assess Certain Factors in Determining Whether Hazardous Duty Pay Is Warranted for Duty in the Polar Regions. GAO-03-554. Washington, D.C.: April 29, 2003. Military and Veterans' Benefits: Observations on the Concurrent Receipt of Military Retirement and VA Disability Compensation. GAO-03-575T. Washington, D.C.: March 27, 2003. Military Personnel: Preliminary Observations Related to Income, Benefits, and Employer Support for Reservists During Mobilizations. GAO-03-573T. Washington, D.C.: March 19, 2003. 
Military Personnel: Management and Oversight of Selective Reenlistment Bonus Program Needs Improvement. GAO-03-149. Washington, D.C.: November 25, 2002. Military Personnel: Active Duty Benefits Reflect Changing Demographics, but Opportunities Exist to Improve. GAO-02-935. Washington, D.C.: September 18, 2002. Military Personnel: Higher Allowances Should Increase Use of Civilian Housing, but Not Retention. GAO-01-684. Washington, D.C.: May 31, 2001. Defense Health Care: Observations on Proposed Benefit Expansion and Overcoming TRICARE Obstacles. GAO/T-HEHS/NSIAD-00-129. Washington, D.C.: March 15, 2000. Unemployment Insurance: Millions in Benefits Overpaid to Military Reservists. GAO/HEHS-96-101. Washington, D.C.: August 5, 1996. The Congress Should Act to Establish Military Compensation Principles. GAO/FPCD-79-11. Washington, D.C.: May 9, 1979.

The Department of Defense (DOD) has increasingly relied on reserve personnel to carry out its military operations. Congress and DOD have taken steps to enhance reserve compensation, such as improving health care benefits. Concerns exist, however, that rising compensation costs may not be sustainable in the future, especially given the nation's large and growing long-range fiscal imbalance. Under the statutory authority of the Comptroller General to conduct work on his own initiative, GAO (1) reviewed how much it has cost the federal government to compensate reserve personnel since fiscal year 2000; (2) assessed the extent to which DOD's mix of cash, noncash, and deferred compensation has helped DOD meet its human capital goals; and (3) evaluated the extent to which DOD's approach to reserve compensation provides transparency over total cost to the federal government. To address these objectives, GAO analyzed budget data and relevant legislation and also interviewed appropriate officials. GAO focused this review on part-time reservists and full-time, active guard and reserve. 
Using fiscal year (FY) 2006 constant dollars, the federal government's total cost to compensate part-time and full-time reserve personnel has increased 47 percent since FY 2000, rising from about $13.9 billion in FY 2000 to about $20.5 billion in FY 2006. Most reservists are part-time, and their per capita compensation costs nearly doubled from about $10,100 in FY 2000 to about $19,100 in FY 2006. Additionally, a small percentage of reservists work full-time, and their per capita costs increased about 28 percent from FY 2000 to FY 2006. Cash compensation, which servicemembers see in their "paycheck," has increased about 19 percent. However, much of the total growth in compensation is driven by the costs for deferred compensation. These costs tripled over this period, primarily attributed to enhanced health care benefits. Moreover, DOD officials anticipate significant continued growth in health care costs because of the expansion of health care coverage to reserve personnel in FY 2007. DOD does not know the extent to which its mix of pay and benefits meets its human capital goals in part because it lacks an established compensation strategy to identify the appropriate mix of reserve compensation to maintain its force. DOD and Congress have added pay and benefits using a piecemeal approach that has not been based on an established strategy and that has not adequately considered the appropriateness, affordability, and sustainability of the related costs. These additions have contributed to a shift in the mix of compensation toward more deferred benefits--that is, future compensation such as retirement pay and health care for life. Deferred benefits increased from 12 percent of total reserve compensation in FY 2000 to 28 percent of total compensation in FY 2006. 
This increase in deferred compensation may not be the most efficient allocation given that fewer than one in four of those who join the reserve will ultimately earn nondisability retirement pay and health care for life. Moreover, DOD does not know the efficiency and effectiveness of these changes in meeting its recruiting and retention goals because it does not have performance measures. Without performance measures, DOD cannot determine the return on its compensation investment or make fact-based choices on how its compensation resources should be allocated. DOD's approach to reserve compensation does not provide decision makers in Congress and DOD with adequate transparency over the total cost for reservists--including the allocation of costs to cash, noncash, and deferred compensation, as well as the cost for mobilized reservists. Although sound business practices require adequate transparency over investments of resources, these costs are currently found in multiple budgets within three federal departments. Until total reserve compensation costs are compiled in a transparent manner--and decisions are based on established compensation strategies--decision makers will be unable to determine the affordability, cost effectiveness, and ultimately the sustainability of the reserve compensation system. Increased transparency is especially important given the growing fiscal challenges the country faces.
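As a quick arithmetic check, the headline growth figures in this summary follow directly from the constant-dollar amounts cited; the sketch below simply recomputes them, using no inputs beyond the figures in the text.

```python
# FY 2006 constant dollars, from the figures cited above.
total_fy00, total_fy06 = 13.9e9, 20.5e9
total_growth = total_fy06 / total_fy00 - 1.0                # ~0.47, i.e., 47 percent

# Part-time reservists' per capita compensation costs.
per_capita_fy00, per_capita_fy06 = 10_100, 19_100
part_time_growth = per_capita_fy06 / per_capita_fy00 - 1.0  # ~0.89, "nearly doubled"

# Deferred benefits' share of total reserve compensation rose 16 percentage points.
deferred_share_shift = 0.28 - 0.12
```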
In 2007, GAO reported on issues associated with the proliferation of biosafety laboratories in the United States. In 2009, we noted that while proliferation of these laboratories was taking place in the federal, academic, and private sectors across the United States, the federal oversight of these laboratories was fragmented—there was not a single federal agency to provide oversight. As a result, numerous federal agencies could be involved in separate and independent inspections of these entities and their associated laboratories. The various agencies that have a role in the oversight of select agent entities and can conduct inspections are shown in figure 1. CDC and APHIS have regulatory authority to assess compliance with biosafety, biosecurity, and biocontainment requirements. Under the current SAR, entity registration must be renewed every 3 years, and CDC or APHIS may conduct an on-site inspection before the award of a new certificate of registration or the renewal of an existing registration. These inspections generally cover all aspects of the SAR. As a matter of policy, to ensure that the entity is compliant with the SAR, CDC or APHIS inspects the premises and records of applicants, including a review of all required plans, before issuing the initial certificate of registration. In addition, CDC or APHIS may conduct inspections to (1) respond to concerns about an entity’s compliance, (2) verify corrections of deficiencies identified through inspections or accomplishment of Performance Improvement Plan goals; or when (3) modifications are made to the entity’s registration, (4) a new building or laboratory is added, (5) a higher-risk agent or toxin is added, (6) a change is made in security infrastructure or policy and procedures, (7) a theft, loss, or release incident occurs, or (8) a violation is reported. Any entity where select agents are possessed, used, or transferred must allow CDC and APHIS to inspect, with or without prior notification. 
In addition to CDC or APHIS inspections, the SAR requires each registered entity to conduct annual self-inspections under the direction of the entity’s RO. DOT has the primary authority to regulate the safe and secure transport of all hazardous materials shipped intrastate, interstate, and in foreign commerce. Infectious substances, which include select agents, are regulated as hazardous materials by DOT. DOT regulates select agents in commercial transportation to, from, and within the United States, and its oversight extends to all parts of the hazardous materials transportation system, including classification of materials, packaging, handling, moving, loading, and unloading of hazardous materials shipments in commerce. PHMSA is the component of DOT responsible for this oversight. As its authority is limited to transportation, its focus is on the shipping and packaging aspects of biosafety and biosecurity requirements. Consequently, entities that are active in the transfer of select agents may have their shipping and handling facilities inspected by PHMSA. DHS and DOD own or fund research at select agent entities, and these entities may undergo additional inspections from these agencies as part of the conditions of funding or based on the safety and security policies of the parent agency. For example, entities that receive funds from DHS to conduct laboratory work involving select agents are subject to on-site compliance reviews and inspections by the DHS Regulatory Compliance Office. In addition, select agent laboratories operated by DOD are subject to inspections by the service IGs and other commands. DHS established a regulatory compliance program to facilitate department-wide implementation of and compliance with DHS policies for biosafety, select agent security, and the care and use of animals in research. DHS’s select agent research is subject to regulatory oversight by CDC and APHIS. 
In addition, entities that receive funding from DHS to conduct laboratory work involving select agents are subject to on-site compliance reviews and inspections based on DHS Management Directives. According to DHS, it “conducts significant additional oversight because of unique sensitivities related to biodefense research, as distinct from conventional public health research, and a desire to ensure complete transparency for senior management of the department about all ongoing biodefense efforts.” DHS also has responsibility for ensuring biosecurity compliance of DHS-funded research under biodefense weapon treaties. Through its long-standing Chemical and Biological Defense Program, DOD supports research on detection, identification, and characterization of biological threats and the development of countermeasures against those threats. DOD research activities take place at numerous facilities, including military-owned entities as well as entities in academia and private industry supported by contracts. DOD entities that are registered with CDC or APHIS are required to follow SAR requirements as well as service-specific requirements derived from DOD requirements. DOD-related select agent entities can therefore be subject to inspections from CDC, APHIS, the service IGs, and other commands. Specifically, the Department of the Army Inspector General (DAIG) conducts inspections of five Army and contractor-owned entities, the Navy Medical IG (MEDIG) conducts inspections of two Navy laboratories located in the United States and three overseas, and the Air Force Materiel Command IG conducts inspections of one Air Force select agent facility. 
In addition to DAIG inspections, Army facilities are also subject to command/program executive office (PEO) reviews by (1) the Army Materiel Command Surety Management Review Team, (2) the Army Test and Evaluation Command (ATEC) Surety Management Review Team, (3) the Army Medical Command (MEDCOM) Surety Management Review Team, and (4) the Joint Program Executive Office for Chemical and Biological Defense (JPEO-CBD) Surety Management Review Team. The DAIG and command/PEO teams each have a 2-year inspection cycle and stagger the inspections so that each entity is inspected once per year. Other federal agencies, such as NIH and USDA's Agricultural Research Service, have their own internal offices that may perform inspections in addition to those performed by CDC or APHIS as part of the SAP. In many cases, these agencies have internal regulations or policies that are more prescriptive than the CDC or APHIS regulations, according to the Trans-Federal Task Force. In addition, as the agency responsible for the general oversight of workplace safety in the United States, OSHA has oversight authority for the safety and health of workers in all workplaces that fall under its jurisdiction, including individuals who work with hazardous biological agents or toxins in high- and maximum-containment research facilities. This includes jurisdiction over the safety and health of workers employed by privately owned entities and, under certain circumstances, federal high-containment facilities. Inspection activities are generally conducted in three phases: (1) preparation, (2) execution (the actual inspection), and (3) closeout (postinspection activities). In order to prepare for an inspection, entities conduct a variety of activities, such as responding to requests for documents related to the SAR; reviewing and updating guidance, records, and plans; and checking security and medical certifications of inspectors. 
In the execution phase, inspection activities include, for example, tours of the facility, document reviews, inventory audits, interviews of laboratory staff, equipment tests, and examinations of physical security and shipping and receiving of select agents. The closeout phase includes activities such as discussing the inspection findings and report, developing corrective action plans, and providing verification of corrective actions. (For a detailed list of inspection activities included in the survey, see app. II.) Inspections consist of an extensive review of laboratory safety and security. CDC and APHIS use specific checklists, which they developed from the SAR, OSHA regulations, NIH guidelines for recombinant DNA research, and the Biosafety in Microbiological and Biomedical Laboratories (BMBL) manual, to guide their inspections. The BMBL provides guidance for standards of practice for laboratory principles, practices, and procedures. Other inspecting agencies also use the same or similar checklists for their inspections. DOT inspections, for example, are derived from PHMSA’s regulations for hazardous materials shipping, packaging, testing, certification, safety and security, and record keeping. The scope of a DOD inspection covers a wide range of functional areas, such as security, safety and occupational health, surety management, emergency response, occupational medical requirements, and transportation. The number of federal agencies involved in inspections of select agent entities increases the potential for overlap and duplication. Further adding to the potential for overlap is the increase in the number of inspections. These inspections have increased for a variety of reasons, such as heightened security concern in response to the events of September 2001, as well as an increase in select agent research and the number of agencies funding the research. 
For example, in addition to announced inspections that generally take place with registration certification and renewal, the number of unannounced CDC inspections substantially increased from fewer than 5 in 2006–07 to nearly 80 in 2012, according to estimates from CDC. And while some federal agencies have increased the use of joint inspections, in which two agencies are on-site for an inspection at the same time, it is unclear whether this will address all of the negative aspects of duplication if compliance is still assessed separately. About 15 percent of the 374 entities that were registered to work with select agents between fiscal years 2009 and 2011 were subject to inspection overlap. This means that in a 2-year period, these entities were inspected by more than one federal agency for biosafety, biosecurity, and biocontainment compliance. In addition, specific inspection activities were often duplicative, according to our survey of the 55 entities that were subject to overlapping inspections. While the percentage of entities affected by inspection overlap was relatively small, the entities affected tended to be larger ones that work with a greater variety of agents and with more laboratories, principal investigators, and laboratory staff. For example, entities with five or more laboratories are more likely to be subject to overlap than entities with fewer than five laboratories, controlling for biosafety level (BSL) and the number of agents. Similarly, entities working with five or more select agents are more likely to be subject to overlap than entities working with fewer than five select agents, controlling for the number of laboratories and BSL. As a result, the overlap affected roughly a third of all laboratories, principal investigators, and lab staff. Moreover, DOD-owned select agent entities are also subject to inspections from internal inspecting entities. These inspections can overlap with inspections from CDC, APHIS, and the service IGs. 
Concerning duplication, the entities often prepare the same documents for inspectors to review; conduct the same facility tours, inventory inspections, and personnel interviews; and go through the same exit conference and corrective action plan processes, according to survey results. Inspections conducted by DOT, however, were generally not duplicative of CDC and APHIS inspections because specific inspection activities tended to differ. According to surveyed entities, DOT inspections tended to focus narrowly on transportation issues—such as (1) checking hazardous material and transportation security plans and (2) verifying the labeling, testing, and assembly of United Nations (UN) certified packagings—rather than general biosafety and biosecurity compliance. DHS and DOD’s DAIG inspections tended to be more duplicative with those of CDC and APHIS, in that numerous activities were the same across these inspections, according to survey results. Fifty-five, or 15 percent, of entities registered between fiscal years 2009 and 2011 were subject to overlapping inspections. Although the overlap appears to affect only a small portion of registered entities, these entities are home to 645 laboratories—roughly a third of all laboratories involved in select agent work (see fig. 2 for details). On the basis of logistic regression analysis, we found that the likelihood that an entity was subject to overlap in federal inspections depends on the number of laboratories, the highest BSL of its laboratories, and the number of select agents. Specifically, entities with five or more laboratories are more likely to be subject to overlap than entities with fewer than five, controlling for BSL and the number of agents. In addition, entities with at least one BSL-4 lab are more likely to be subject to overlap than entities without a BSL-4 lab, controlling for the number of laboratories and the number of select agents. 
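The overlap-likelihood analysis described above can be illustrated with a minimal logistic regression. Everything below is synthetic: the entity counts, indicator thresholds, and coefficients are invented for illustration and are not GAO's actual data or model.

```python
import math
import random

def fit_logistic(X, y, lr=0.1, epochs=300):
    """Plain stochastic-gradient logistic regression (no external libraries)."""
    w = [0.0] * (len(X[0]) + 1)  # intercept plus one weight per predictor
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = yi - p
            w[0] += lr * err
            for j, xj in enumerate(xi):
                w[j + 1] += lr * err * xj
    return w

random.seed(0)
X, y = [], []
for _ in range(400):
    # Indicator predictors: >= 5 labs, any BSL-4 lab, >= 5 select agents.
    labs5 = 1.0 if random.random() < 0.4 else 0.0
    bsl4 = 1.0 if random.random() < 0.1 else 0.0
    agents5 = 1.0 if random.random() < 0.3 else 0.0
    true_logit = -2.5 + 1.5 * labs5 + 1.2 * bsl4 + 1.0 * agents5
    X.append([labs5, bsl4, agents5])
    y.append(1.0 if random.random() < 1.0 / (1.0 + math.exp(-true_logit)) else 0.0)

weights = fit_logistic(X, y)
# Positive fitted coefficients indicate the factor raises the odds of overlap.
```

A positive coefficient on an indicator means entities with that characteristic have higher odds of overlapping inspections, holding the other predictors constant, which is the sense of "controlling for" used in the text.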
In addition, entities working with five or more select agents are more likely to be subject to overlap than entities working with fewer than five select agents, controlling for the number of laboratories and BSL. Finally, entities subject to overlap have, on average, more staff and more principal investigators (see fig. 3). Overlap in inspections occurred most frequently between CDC or APHIS and (1) DOT (34 overlapping inspections), (2) DAIG (13 overlapping inspections), and (3) DHS (8 overlapping inspections). Considering only CDC inspections, overlap occurred with 29 DOT inspections, 13 DAIG inspections, and 3 DHS inspections. However, overlapping inspections do not necessarily result in duplication. While the overlap in inspections occurs most frequently between CDC or APHIS and DOT, these inspections are not necessarily duplicative because the specific inspection activities tend to differ. The activities are likely to differ because the purpose of DOT inspections is narrowly focused on issues and areas related to the transport of select agents, whereas CDC and APHIS inspections are more broadly focused. While there is less frequent overlap between CDC or APHIS inspections and those of DHS or DOD, these inspections tend to be more duplicative, in that the inspection activities are similar, according to survey results. According to DOT inspection data, DOT's PHMSA conducted 83 inspections of select agent entities between fiscal years 2009 and 2011. DOT coordinates its inspections with CDC and APHIS under a working Memorandum of Understanding (MOU), which spells out the framework and responsibilities for the exchange and protection of information on transfer of select agents. Under the MOU, CDC and APHIS annually provide DOT with a list of registered entities that have transferred select agents during the 12 preceding months. 
DOT, in turn, notifies CDC and APHIS of any anticipated inspection of an entity 30 days before the inspection and also provides them with the inspection results. According to DOT officials, they try to coordinate joint inspections whenever possible. After CDC provides a list of scheduled inspections, DOT informs CDC about which inspections it will join. According to DOT officials, where possible, they coordinate joint inspections with CDC because (1) they learn things from each other, (2) with both agencies there at the same time, the inspection provides a more encompassing view of the process, and (3) it brings greater sophistication to inspections. Such coordination appears effective, given survey results of entities that experienced DOT inspections that overlapped with CDC or APHIS inspections. These entities reported little duplication in specific activities in the preparation, execution, and closeout phases of inspections (see table 2). Among the activities that tended to be duplicative were verifying the medical, security, or other credentials of inspectors in the preparation phase; participating in interviews with inspectors in the execution phase; and holding exit conferences with the inspecting agency in the closeout phase. Among the activities that tended to be DOT-specific were checking hazardous materials and transportation security plans and verifying the labeling, testing, and assembly of UN-certified packagings. DHS identified 42 government, university, private, and not-for-profit entities that were receiving DHS funding for work involving select agents. DHS's Regulatory Compliance Office conducted or participated in 19 on-site inspections of 13 of these entities between fiscal years 2009 and 2011. Of those 19 inspections, 8 were joint inspections, including: 5 joint inspections with CDC; 1 joint inspection with APHIS; and 2 joint inspections with both CDC and APHIS. 
According to DHS officials, there are numerous reasons for DHS to visit a select agent entity that has also been inspected by CDC or APHIS. These reasons include, for example, CDC or APHIS placing the entity on a Performance Improvement Plan or making substantial recommendations to correct regulatory noncompliance. If it appears that the compliance issues identified by CDC or APHIS could affect the DHS program, DHS may also make a site visit to understand the effect on DHS research and to assist the entity in mitigating the effect through appropriate corrective action. According to DHS officials, their inspections are fundamentally different from CDC and APHIS inspections. DHS describes its compliance inspections as broader in some ways than those of CDC or APHIS, because they are designed to ensure that DHS-sponsored research activities not only comply with select agent requirements, but with other relevant regulations and guidelines as well. According to DHS officials, CDC and APHIS, as regulatory agencies, conduct more comprehensive inspections at the institutional level. DHS compliance inspections, however, go beyond compliance with select agent regulations to include general biosafety, animal care and use, research protocols and procedures, institutional review and oversight (for example, Institutional Biosafety Committees), and adherence to best practices. Nevertheless, survey responses from entities that experienced DHS inspections that overlapped with CDC or APHIS inspections reported some duplication in specific activities in the preparation, execution, and closeout phases of inspections (see table 3). Among the activities that tended to be duplicative were verifying medical, security, or other credentials of inspectors and arranging staff availability in the preparation phase; participating in interviews with inspectors in the execution phase; and holding exit conferences and developing and implementing corrective action plans in the closeout phase. 
According to DOD inspection data, the DAIG conducted 16 biosurety inspections of its Army and contractor-owned entities between fiscal years 2009 and 2011, 5 of which were conducted jointly with CDC. Of the 16 DAIG inspections, 13 overlapped with an inspection from another federal agency. The Navy IG conducted one inspection and the Air Force IG conducted two inspections between 2009 and 2011. These inspections did not overlap with an inspection from another federal agency. With respect to specific activities in the preparation, execution, and closeout phases of inspections, entities that experienced DAIG inspections that overlapped with CDC or APHIS inspections also reported substantial duplication in inspection activities (see table 4). Among the inspection activities that tended to be duplicative were arranging staff availability in the preparation phase, holding entry meetings and escorting inspectors in the execution phase, and holding exit conferences in the closeout phase. However, according to DAIG officials, as required by Army directives and regulations, its biosurety inspections are more comprehensive than CDC and APHIS inspections, covering biosafety, biosecurity, biocontainment, personnel reliability program (PRP), transportation, occupational medical requirements, and emergency response exercises. While DAIG officials acknowledge a level of overlap with CDC and APHIS inspections in terms of verifying compliance with standards, DOD and the Department of the Army have developed specific requirements to implement those standards. In addition, until recently, CDC and APHIS inspections did not look at facility or department-specific requirements such as PRP. DOD-related select agent entities are also subject to inspections and reviews from internal organizations, which can overlap with inspections from CDC, APHIS, and the service IGs. 
For example, in addition to service IG inspections, DOD entities undergo Biosurety Management Reviews (SMR) and Biosurety Staff Assistance Visits (SAV). The SMRs are meant to verify that the entity is managing its biosurety program according to standards. They allow the command to see how the surety program is being managed and where there may be deficiencies. According to Army lab officials, however, these inspections tend to be identical to service IG biological surety inspections. SAVs are an opportunity for the command to assist in fixing deficiencies or other areas found lacking, and usually take place before another major inspection. SAVs are not biosurety inspections, nor are they required under DOD regulations. However, SAVs tend to be treated similarly to service IG inspections, according to Army lab officials. While SMRs and SAVs may be handled like an inspection, with written reports of perceived deficiencies to which the entity makes a formal response and corrections, DOD officials noted that entities may be self-imposing requirements or practices not required by regulation or the inspecting agency. As a result, these internal reviews can represent an additional area of overlap for DOD-owned and DOD-operated entities. For example, as shown in table 5, an Army entity underwent eight inspections or reviews, five of which were for compliance with select agent regulations, between fiscal years 2009 and 2011. According to an official of this particular Army entity, many of the major command reviews and visits use up time and resources fixing issues that are not value-added or that are not examined in other major inspections. In many cases, inspectors focus on minor issues, leaving the laboratory open to larger deficiencies in the major inspection. 
According to the DAIG Chief of Technical Inspections, a major cause of overlapping internal inspections is that there is no single entity within the DOD overseeing or coordinating inspections and that each service validates compliance very differently. The Army requires each major command to have its own internal biosurety team and, because the entities feel they should be prepared for higher-level reviews, the teams conduct inspections in preparation for higher-level inspections. For Army labs, the DAIG inspects an entity every 2 years, an internal command surety team, such as ATEC, conducts an SMR every 2 years (alternating with the DAIG inspection), and SAV reviews may be conducted before any inspection (DOD or otherwise). In an example of an extreme case, an Army select agent laboratory that conducts recombinant DNA research for DHS-funded projects and frequently transfers select agent materials to collaborators could theoretically be inspected by CDC/APHIS, DAIG, NIH, DHS, and DOT all within the same year. This could significantly hinder critical research productivity at the inspected laboratory because of the time dedicated to inspections, according to the report of the Working Group on Strengthening the Biosecurity of the United States. This is somewhat in contrast with inspections of the two Navy and one Air Force entities. For example, while the Navy can perform SAVs prior to inspections, they are not required like Army’s SMRs. In addition, rather than conducting its own inspection, the Air Force accepts the CDC inspection results and adds its own PRP review. This limited coordination among the inspection agencies was noted in the Working Group report. The costs of overlapping federal inspections and effects on lab operations are difficult to quantify because (1) agencies and entities generally do not track the costs or effects of inspections and (2) some costs are not quantifiable. 
Nevertheless, the costs and effects on lab operations are significant when considering (1) the cost of inspections to federal agencies, (2) both the quantifiable and nonquantifiable costs to the entity, and (3) surveyed entities’ perceptions of the negative effects of overlapping inspections on lab operations. Although we could not quantify the portion of federal and entity costs directly attributable to overlap, we could quantify the costs of inspections in general. For example, we estimate that for fiscal years 2010 through August 2011, individual agencies’ total inspection costs ranged from approximately $22,400 to over $900,000, according to agency data on the hours spent on inspection activities, inspector compensation (salaries and benefits) per labor hour, and travel. The approximate overall federal cost for fiscal year 2010 and 2011 inspections was over $2.1 million. On average, the entity costs per inspection were nearly $15,000 and 380 hours in staff time, according to our survey. The quantifiable cost of an inspection to a select agent entity depends on the number of laboratories and select agents, the complexity of the entity’s mission, its location, and whether it has a history of problems or violations of select agent regulations. Larger entities incur higher inspection costs and are more likely to experience overlap; the costs of overlap are therefore most likely higher as well. Entities also reported moderate to significant nonquantifiable costs of inspections, such as loss of productivity and delays in research. While inspections can help entities correct deficiencies, improve inventory management and accountability, and justify the need for resources to improve operations, most surveyed entities reported that overlapping inspections have negative effects on lab operations. 
According to surveyed entities, overlapping inspections negatively affected lab productivity, staff morale, available time to complete research, and the research schedule. And according to at least one-fifth of surveyed entities, overlapping inspections negatively affected the physical viability of inventory, staff retention, and competitiveness for research funds. Because many of these entities are federal laboratories or are funded through federal grants, these quantifiable and nonquantifiable costs are passed on to the federal government. Obtaining an accurate and complete picture of the costs of multiple inspections is difficult because entities generally do not track the costs of inspections and some of those costs are nonquantifiable. Nonetheless, federal agencies do incur quantifiable costs, including salaries, travel, and training of inspectors, and must purchase inspection equipment and pay staff to engage in inspection activities as opposed to research or other routine activities. Entities may also incur nonquantifiable costs of multiple inspections, such as loss of productivity, delays, and decreased time available to complete research. In addition, because many of these entities are federally owned or funded, some portion of this cost is passed on to the federal government. These costs are affected by the number of inspectors, the time spent on an inspection, and the size of the entity being inspected. The larger the entity—in terms of laboratories, staff, and select agent research—the greater the cost of inspections. Given that inspections cost more for larger entities and overlap occurs more often for larger entities, the cost of overlap is greater than it would be if it were evenly distributed across entities of various sizes. 
The approximate direct federal cost for fiscal years 2010 and 2011 inspections was over $2.1 million, ranging from approximately $22,400 at DOT to over $900,000 at CDC, according to agency data on the hours spent on inspection activities, inspector compensation (salaries and benefits) per labor hour, and travel. Specifically, APHIS’s total labor, travel, and other costs for inspections were $265,792; DOD’s costs were $697,744; DOT’s costs in 2010 were approximately $22,444; and CDC’s costs were $903,475 (see table 6 for agency inspection costs). DHS estimates its inspection costs at about $250,000 for the 2-year period. Although we did not estimate indirect costs, the federal government also incurs costs from inspections because many select agent entities are either federally owned or funded. Consequently, the costs to entities, described in the sections below, also accrue to the federal government. The cost of inspections to entities is also difficult to accurately determine because entities generally do not track inspection costs. However, according to focus group participants and entities we surveyed, entities do incur quantifiable and nonquantifiable costs with each inspection, some of which can be significant. Focus group participants and surveyed entities reported quantifiable costs, such as purchasing inspection equipment and salaries for staff involved in preparing for, carrying out, and responding to an inspection. Entities also reported less-easily measured, nonquantifiable costs, such as loss of productivity, delays, and decreased time to complete research, as well as loss of specimen viability from repeated thawing and freezing. Staff time and lab resources are required for inspections, and that burden is increased with overlapping inspections. 
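The roughly $2.1 million total can be reproduced from the agency figures reported above. The following sketch simply sums the reported amounts (note that the DOT figure covers only fiscal year 2010, and the DHS figure is that agency's own estimate):

```python
# Approximate direct federal inspection costs for fiscal years 2010-2011,
# as reported in agency data (labor, travel, and other inspection costs).
agency_costs = {
    "APHIS": 265_792,
    "DOD": 697_744,
    "DOT": 22_444,   # fiscal year 2010 only
    "CDC": 903_475,
    "DHS": 250_000,  # DHS's own estimate for the 2-year period
}

total = sum(agency_costs.values())
low, high = min(agency_costs.values()), max(agency_costs.values())
print(f"Total: ${total:,}")             # → Total: $2,139,455
print(f"Range: ${low:,} to ${high:,}")  # → Range: $22,444 to $903,475
```

The sum of the reported figures, about $2.14 million, is consistent with the "over $2.1 million" estimate, and the low and high values correspond to the DOT and CDC endpoints of the reported range.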
The actual cost of an inspection to a select agent entity will depend on the number of laboratories and select agents, the complexity of the entity’s mission, its location, and whether it has a history of problems or violations of select agent regulations. However, because overlap in inspections tends to occur more often for larger entities, the overall costs of overlap for these laboratories are likely higher as well. Entities that experienced an overlapping inspection spent, on average, 380 hours and nearly $15,000 in staff time to engage in a federal inspection, according to survey data on the number of hours spent to prepare for, carry out, and close out inspections and the hourly salaries of laboratory staff involved (see table 7 for average costs across occupational groups). See app. I for an explanation of how we calculated these costs. Much of the time spent on inspections takes place in the preparation phase, according to focus groups of lab staff. For example, staff from Army laboratories noted that there is a period of 3 to 4 months of intense preparation for DAIG inspections and months of follow-up. During the preparation phase, lab staff perform numerous activities, including updating standard operating procedures and other documents and records; verifying inspectors’ health records and clearances; turning in unused equipment; holding meetings to prepare for the inspection; checking chemicals and agents in inventory; checking lab equipment; and inspecting, cleaning and painting floors, walls, and desk space. According to some focus group participants, in the weeks preceding an inspection, research is suspended and all staff time is directed toward inspection efforts. Scheduling for inspections can also create conflicts because experiments must be scheduled around the inspection, and key staff must be available during the inspection, regardless of personal plans or schedule. 
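As a back-of-the-envelope check on these survey averages (this is an inference from the two reported figures, not the cost methodology described in app. I), the blended hourly rate implied across occupational groups is about $39:

```python
# Survey averages for entities that experienced an overlapping inspection.
avg_hours_per_inspection = 380    # staff hours to prepare for, carry out,
                                  # and close out one federal inspection
avg_cost_per_inspection = 15_000  # approximate staff-time cost, in dollars

# Blended hourly rate implied by the two reported averages.
implied_rate = avg_cost_per_inspection / avg_hours_per_inspection
print(f"Implied blended rate: ${implied_rate:.2f}/hour")  # → $39.47/hour
```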
During the inspection, lab staff are involved in activities that take time and resources. For example, staff conduct safety training for inspectors (how to wear safety suits and respirators and blood-borne pathogen and internal requirement training), issue safety equipment to the inspectors, clear inspectors through security, and provide escorts for inspectors. Safety equipment, such as personal protective equipment (PPE), gowns, and footies are provided to inspectors at the entities’ expense, and these costs increase with larger inspection parties and when there is overlap because the equipment cannot be shared or reused. Staff time for escorts can be significant in some cases. For example, Army lab inspection teams can have as many as 20 personnel and stay as long as 2 weeks. According to one Army laboratory, it spent more than 7,305 labor hours in activities related to preparing for, executing, and responding to six inspections during 2009 at a cost of $350,640. During 2010, the laboratory estimated it spent 4,082 hours on these activities for three inspections, at a cost of $195,456. In addition, when joint inspections take place, the size of the inspection team may be too large for some entities to manage. For example, according to one survey respondent, the entity requested DHS and CDC not conduct a joint inspection because the inspection teams are too large to handle at one time. According to another survey respondent, the respondent’s laboratory had to arrange for additional personnel to escort large teams of inspectors for multiple inspections. Laboratories also spend time and resources in the closeout phase of the inspection. For example, upper-level staff attend meetings to discuss inspection results, and staff time and effort are required to respond to oral and written findings that may change or be added to even after the completion of the inspection. 
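The Army laboratory's reported totals imply a labor rate of roughly $48 per hour and well over a thousand staff hours per inspection in both years (an inference from the reported figures, not rates the laboratory itself stated):

```python
# One Army laboratory's reported inspection-related labor, by year.
army_lab = {
    2009: {"inspections": 6, "hours": 7_305, "cost": 350_640},
    2010: {"inspections": 3, "hours": 4_082, "cost": 195_456},
}

for year, r in army_lab.items():
    rate = r["cost"] / r["hours"]              # implied labor rate
    per_insp = r["hours"] / r["inspections"]   # average burden per inspection
    print(f"{year}: ~${rate:.2f}/hour, ~{per_insp:,.0f} hours per inspection")
# → 2009: ~$48.00/hour, ~1,218 hours per inspection
# → 2010: ~$47.88/hour, ~1,361 hours per inspection
```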
Of particular concern was that frequent inspections keep lab staff in a constant mode of preparing for the next inspection while responding to the last one, inhibiting their ability to respond to inspection findings and conduct research. Laboratories also incur nonquantifiable costs from inspections, which can be exacerbated by overlapping inspections. While these costs tend to be small, according to most surveyed entities, for some they can be moderate to significant when it comes to loss of productivity, decreased time available to complete research, and delays (see table 8). Other costs noted by surveyed entities include stress and anxiety for lab staff, contractor and overtime costs, HVAC and equipment testing costs, and loss of focus and manufacturing time. Focus group participants noted a variety of nonquantifiable costs, such as the loss of productivity, recertifying equipment or bringing it offline for inspection, and loss of agent viability from repeated thawing and freezing of agents. Although only 5 (11 percent) of surveyed entities reported that reduced viability of inventory was a moderate to significant cost, a preliminary study at the Dugway Army Proving Ground on the viability of agent vials that had undergone multiple inspections found reductions in agent viability, with a 100 percent loss of viability in a few cases. While the costs of reduced or lost viability are difficult to determine, a strain that existed in only a single vial would be impossible to replace. Furthermore, loss of agent viability can damage research opportunities and experimental findings. Focus group participants also noted that multiple inspections reduce an entity’s competitiveness for grant money, although most surveyed entities found this a negligible cost. In addition, inspection time must be factored into grant proposals. 
And because inspections affect the research cycle, the time it takes to complete research and the overall costs of the research increase when entities experience multiple inspections. For example, some participants noted the costs of their entity’s grant proposals are higher than others’ because of the “overhead” costs they must build in for their multiple biosafety and biosecurity inspections. In addition, the time to conduct research is affected when laboratories have to suspend research for inspections. Undergoing multiple inspections exacerbates this problem. Some participants noted that the granting community knows that if laboratories are frequently inspected, the work will not get done by the desired deadlines, affecting the community’s willingness to award grants to these entities. Inspections are essential in ensuring biosafety and biosecurity requirements are met and SAR regulations are followed. Most surveyed entities reported that multiple inspections positively affect (1) actions to correct deficiencies, (2) coverage in helping to identify problems, (3) the strength of inventory management and accountability, and (4) justification for additional resources to improve operations. However, many entities also noted multiple inspections negatively affect lab operations. In particular, most surveyed entities reported that multiple inspections negatively affect (1) lab productivity, (2) staff morale, (3) the time to complete research, and (4) the research schedule. While some entities reported negative effects to (1) the physical viability of inventory, (2) staff retention, and (3) staff recruitment, a larger share of entities reported that these issues were unaffected by multiple inspections (see table 9 for effects of multiple inspections). Focus group participants also noted some positive aspects of multiple inspections, among them having an “extra set of eyes” or a layer of oversight on laboratory operations to identify areas of need. 
In addition, inspection report findings can provide funding justification for resources and staff, validate good laboratory practices, and identify needed quality assurance improvements. However, some focus group participants also noted that the amount of time spent on inspections slows down the science and that while inspections may help get the laboratory renovated, they do not actually improve the science. Other participants noted that oversight functions in laboratories, such as quality assurance, are growing faster than the research community. Finally, one survey respondent noted that multiple inspections do not allow the laboratory the time necessary to implement lessons learned in a meaningful time frame. Actions to reduce the costs and negative effects of overlapping and duplicative inspections include better coordination and greater consistency in the application of standards, according to various experts and surveyed entities. Both the HHS-USDA Trans-Federal Task Force and the Executive Order Working Group on Strengthening the Biosecurity of the United States recommended enhancing the coordination of biosafety oversight activities, including inspections. In addition, our earlier work on select agent laboratories recommended a single coordinating agency as a means to improve coordination. Accordingly, CDC and APHIS have taken steps to better coordinate inspections with other agencies, for example, by increasing the use of joint inspections, signing MOUs for sharing inspection information with other agencies, establishing an inspector training program for federal partners, and developing a common “playbook” for inspection of registered entities. Such coordination efforts are important steps in reducing overlap and duplication. However, MOUs and joint inspections may not fully address the negative effects of overlap and duplication if inspectors are still applying standards inconsistently and preparing separate reports of findings. 
Specifically, according to surveyed entities, standards must be applied more consistently between agencies and from one inspection to another. It is this inconsistent application of standards that exacerbates the negative effects, including costs, of overlapping inspections. According to surveyed entities, the most effective actions for greater consistency in the application of standards include (1) ensuring inspectors are well trained and experienced, (2) establishing a single set of inspection standards that all agencies accept, (3) providing an opportunity to discuss, clarify, and rebut inspection findings, and (4) training inspectors to one set of standards with requirements for noncompliance findings (see table 10). Well-trained inspectors, who are able to apply consistent standards, would reduce the negative effects of overlapping inspections. In particular, they might reduce overlap by allowing federal agencies to accept each other’s inspection results. Such a result is facilitated by joint training, according to DOD officials. Without a consistent standard, however, highly trained inspectors would not be effective, according to some surveyed entities. In addition, according to most surveyed entities, a 3-year federal inspection cycle for select agent entities was reasonable. Focus group participants also suggested options for minimizing the potential for overlapping or duplicative inspections and the associated burden, such as having a single inspecting agency whose findings are accepted by all other agencies and improving the knowledge and skills of inspectors. In addition to scaled responses, surveyed entities provided written suggestions for reducing the negative effects of multiple inspections. While several entities expressed support for a single inspecting agency or joint inspections, others felt that addressing overlap alone would be insufficient when the same agency applies different standards in each inspection. 
In support of inspections being conducted by a single agency, entities noted that at least the inconsistent application of standards across agencies and frustration over trying to comply with two different agency regulations would be minimized. Surveyed entities highlighted the inherent conflict that arises when agencies come to different conclusions about an entity’s compliance. For example, one noted that because there is no agency “in complete control,” entities cannot determine which agency is correct when there is conflict in inspection reports. And a DOD entity wondered about the implication for CDC—which approved the SAP registration—if DAIG recommends the shutdown of a lab. While some thought an independent agency or an inspection czar would be a useful neutral party, others expressed concern that such an entity would just add another level of bureaucracy or require unnecessary new legislation. Because of entities’ familiarity with CDC, one noted that they would not want to have a single agency unless it was CDC. However, another noted it was uncertain how useful a single agency with authority to audit would be because there would be a period of mixed messages while each agency has its say. And so far, the entity had not seen government agencies “play well together as a team.” In addition, while some thought joint inspections might address the negative effects of overlap, others thought they would exacerbate the problem because agency coordination—in terms of consistent interpretations and applications of standards—was still a concern. In support of joint inspections, some felt they facilitated agency coordination and reduced duplication. Others noted that joint inspections are a better use of time and resources, reducing disruption and burden. In addition, joint inspections could help bring more uniformity and clarity for the inspection agencies and entities. 
For example, according to DAIG officials, they have taken steps to address these issues in joint inspections with CDC and APHIS. During joint inspections, the agencies discuss and resolve differences of interpretation so there is one “inspector” face to the inspected facility, and except for major findings, DAIG does not duplicate the findings of the other agency. However, noting recent inspections by two different agencies, one entity reported that having them at the same time would have made the inspection longer and more grueling. Joint inspections also require the entity to provide escorts and enough PPE for everyone entering the facility, a large expense when considering how many sets of PPE and escorts are needed for some joint inspections. In addition, (1) combined inspections may still result in multiple reports the entity must respond to and (2) having two teams do what a single team can do is still an additional burden on the taxpayer. Nonetheless, the burden that results from inconsistent application of standards could be minimized if agencies (1) conducting joint inspections issued a single report of findings or (2) instead of conducting a separate or joint inspection, accepted each other’s inspection findings in lieu of conducting their own inspections. Regardless of whether a single agency is responsible or joint inspections are conducted, numerous respondents focused on the need for consistent application of standards by well-trained inspectors. Respondents’ concerns focused on the inconsistent application of standards that can occur, not only between agencies, but from one inspection to another. While the problem may be exacerbated by overlapping inspections, the burden results from inconsistency, not overlap per se. 
For example, one entity noted that the biggest problem with multiple inspections by different agencies is that each has its own agenda, so establishing inspection standards that are accepted by all agencies would have the greatest effect on reducing the negative effects of multiple inspections. Another noted that a common set of requirements and interpretations would significantly reduce variation in compliance assessments. Reflecting on the additional burden to Army laboratories, one entity noted, “Ideally the set of standards that would be accepted would be the same federal standard that all organizations, not only DOD organizations, would be held to. This would not only reduce the burden of multiple inspections but also level the playing field between DOD and academia which would increase collaboration between laboratories.” Some noted that this might facilitate sufficient trust for agencies to accept each other’s inspection results. If all inspections were based on the same standard, with a checklist to keep them consistent, and all inspectors were trained together and required to use the published standard and checklist, the need for multiple inspections would be decreased or eliminated. Despite the support for greater consistency, some cautioned against the application of a “checklist mentality.” For example, one surveyed entity noted that too many inspectors for their select agent program had been rigid and unbending, sticking to a checklist rather than common sense. Training can support efforts to consistently apply standards, while still allowing the flexibility for inspectors to have essential discussions with entities to establish rapport, exchange information, and understand why certain procedures and policies are in place in any given facility. 
Surveyed entities offered a variety of other suggestions, such as (1) limiting the number of inspectors and days for inspections; (2) streamlining inspections, including the number of inspectors on the team, and coordinating areas of focus within the team to lessen the burden on the laboratories; (3) having an advisory group of ROs to give feedback to the inspection agency, regulatory agency, or GAO; (4) dividing inspection elements among the inspecting agencies, on the basis of their strengths; (5) staffing inspection teams for consistency and historical knowledge; (6) training all inspectors together to help ensure consistent inspections; and (7) giving entities credit for moving in the right direction. While none of the written comments suggested reducing the inspection cycle, almost half of surveyed entities noted that a 3-year federal inspection cycle for select agent laboratories was reasonable. The agencies have taken some steps to address these concerns. For example, after we had initiated our work, CDC and APHIS convened an Interagency Working Group that includes representatives from DHS and DOD, as well as other federal agencies. So that effective oversight can be achieved with minimal disruption, the working group is developing procedures and policies to better coordinate the inspections of entities that are federally owned or funded, improve information sharing between agencies, and implement other activities. For example, the working group has initiated a joint inspection program through which it has, so far, conducted 24 joint inspections. In addition, it has MOUs with DHS and DOD to share inspection data and an inspector-training program to provide the knowledge, skills, and experience to federal agencies to enable them to conduct “internal” inspections of registered entities they own or fund or to conduct joint inspections with CDC or APHIS. 
Inspections are important for safety and compliance and can help improve laboratory procedures, infrastructure, and security. However, the value of inspections may be diminished when federal agencies are (1) expending resources to conduct the same or similar work and (2) burdening entities with overlapping or duplicative inspections. While one could argue that more-frequent inspections might be necessary to better ensure safety—in particular for larger entities—there is no apparent value-added when specific inspection activities are duplicative and occur, in some cases, before entities have had time to respond to findings from a previous inspection. In addition, most surveyed entities reported a federal inspection schedule of once every 3 years was reasonable; more frequent—especially duplicative—inspections waste federal dollars and can negatively affect lab operations. This effect on lab operations also wastes federal dollars because many of the laboratories are federally funded through grants or appropriations. To improve interagency coordination and reduce the potential for overlap and unnecessary duplication, a single coordinating agency could be helpful. Such a coordinating agency need not be responsible for conducting all inspections. Rather, where agencies can demonstrate that they meet SAR standards for their inspections, their inspections could be used in lieu of other agency inspections. Currently, the primary agencies with regulatory authority, CDC and APHIS, have taken steps toward better coordination through the interagency working group. However, CDC and APHIS have not been officially charged with overseeing or coordinating all inspection efforts and the other federal agencies may still inspect an entity regardless of how recently another agency has conducted an inspection. Moreover, consistent application of inspection standards— both across and within the agencies—is needed to reduce the negative effects of multiple inspections on lab operations. 
Further, cross-agency training efforts could facilitate consistent learning and application of such standards. Agencies could then better target their inspection time and resources, and entities could better prepare for and respond to federal inspections. In order to eliminate overlapping and potentially duplicative inspections, as well as reduce the burden of such overlap and duplication on select agent entities, we recommend that CDC and APHIS, as the primary agencies with regulatory authority, work with DHS and DOD to (1) coordinate inspections of select agent entities and (2) where possible, use mechanisms such as (a) joint inspections with a single report of findings, (b) acceptance of each other’s inspection results rather than independent inspections, and (c) cross-agency training opportunities to ensure consistent application of biosafety, biosecurity, and biocontainment inspection standards. We provided a draft of this report to HHS, USDA, DOT, DHS, and DOD for review and comment. DOT did not provide any comments. In its written comments, reproduced in appendix III, HHS agreed with our recommendations. HHS noted that DOT, DHS, and DOD have different authorities than the federal select agent program (SAP), as we had outlined in the background section of this report. HHS stated that it already has a number of activities underway related to reducing overlap and duplication in inspections. Specifically, through the working group for Optimizing the Security of Biological Select Agents and Toxins in the United States, HHS’s Federal Select Agent Program has developed procedures and policies to better coordinate inspections of federally owned or funded entities and to improve information sharing between departments and agencies. HHS initiated three programs to accomplish these goals: (1) a joint inspection program, (2) inspection information sharing MOUs, and (3) an inspector training program. 
HHS has so far conducted 24 joint inspections, signed inspection information sharing MOUs with five agencies, and trained five individuals from DHS and four individuals from DOD. HHS also plans another training session for spring 2013 to educate federal agencies about recent revisions to select agent regulations. HHS also provided technical comments, which we incorporated as appropriate. In its written comments, reproduced in appendix IV, USDA agreed with our recommendations. Although USDA noted that it did not believe its activities were overlapping with DOD and DHS because of differences in inspection authorities and agency missions, it noted that it had a number of activities underway to reduce overlap and duplication in inspections. Along with CDC—USDA’s partner in overseeing the select agent program—USDA noted that it is conducting joint inspections to minimize the number of select agent inspections, signing information sharing MOUs with several agencies to share inspection data, and conducting inspector training programs to provide knowledge, skills, and experience to federal partners. USDA also provided technical comments, which we incorporated as appropriate. In its written comments, reproduced in appendix V, DHS agreed with our recommendations. DHS notes that its Regulatory Compliance Office collaborates with officials and inspectors at CDC and APHIS to plan joint inspections, share information, and consult on inspection findings in order to reduce the burden on inspected institutions. Specifically, DHS signed an MOU with CDC and APHIS that outlines how the parties will coordinate joint inspections in order to reduce the burden on entities and to facilitate coordination of oversight efforts between the agencies. In line with our recommendations, DHS now accepts the results of inspections conducted jointly with CDC or APHIS instead of conducting independent inspections or generating a DHS-specific report of findings. 
DHS has also taken advantage of cross-agency training to ensure consistent application of biosafety, biosecurity, and biocontainment inspection standards. In its written comments, reproduced in appendix VI, DOD agreed with our recommendations. DOD noted that certain DOD regulations require compliance inspections for areas, such as the personnel reliability program (PRP), that are in addition to the SAR. We recognize some requirements are agency-specific and do not suggest that they be eliminated. We do recommend, however, that the burden on entities be reduced through coordination with other SAR inspection activities. In line with our recommendations, we are encouraged that DOD has several initiatives to coordinate with CDC and APHIS to reduce overlap and duplication of inspections. For example, the Army and Navy have signed information sharing memorandums of agreement and understanding with CDC and APHIS to coordinate inspections of select agent entities, through which they conducted the first CDC, DOD, and Army joint inspection in spring 2011. Although the Air Force does not have an MOU with CDC or APHIS for its one facility, it accepts the CDC inspection results and performs its own PRP inspection. During joint inspections, DOD noted that it develops an inspection plan with CDC, APHIS, and the entity, and coordinates with the agencies to minimize entries into inspected areas by inspecting an area together. While two separate reports are written, the DAIG noted that it generally does not replicate CDC findings of noncompliance with SAR standards. Findings are discussed to ensure a common understanding of the standards and what was observed, and to delineate between what is required by the SAR, DOD, and DAIG. While DOD noted that it is unclear whether it would or could accept a CDC or APHIS inspection report in lieu of its own inspection, these joint efforts are still new, and the issue will evolve with further collaboration.
Additionally, DOD is revising its biological and chemical agent security policies in order to harmonize them with the recently revised SAR. DOD also noted other steps it has taken to coordinate oversight efforts through the Working Group, such as developing the charter and implementation plan for the current federal "joint" inspections process and developing tools, such as the inspector training program and playbook, to administer the joint inspection process. DOD also made some technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretaries of Health and Human Services, Agriculture, Transportation, Homeland Security, and Defense, the appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2700 or kingsburyn@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. To assess (1) the extent of overlap and potential duplication in federal agencies' inspections of entities that work with select agents, (2) the costs of overlapping federal inspections and effects on laboratory operations, and (3) actions to reduce the costs and negative effects of overlapping inspections, we interviewed agency officials and reviewed pertinent legislation, regulations, and agency documents.
Specifically, we spoke with officials from the Centers for Disease Control and Prevention (CDC) Division of Select Agents and Toxins; the Department of Agriculture's Animal and Plant Health Inspection Service (APHIS); the Department of Transportation's (DOT) Pipeline and Hazardous Materials Safety Administration (PHMSA); and the Department of Defense (DOD) military service Inspector General's offices, including the Department of the Army Inspector General (DAIG), Navy Inspector General, and Air Force Inspector General offices. We also spoke with interest groups, such as the American Biological Safety Association (ABSA), and officials at various Army, private, academic, and federal select agent entities. To assess the extent of overlap and potential duplication in federal agencies' inspections of entities that work with select agents, we (1) identified 374 entities registered to work with select agents between fiscal years 2009 and 2011, (2) analyzed inspection data to identify registered entities that had been inspected by more than one agency on separate occasions in a 2-year period (at any point between fiscal years 2009 and 2011), and (3) surveyed those entities to assess the extent of duplication in inspection activities. To operationalize overlap, we relied on our 2011 duplication report, in which we define overlap as multiple agencies or programs having similar goals, engaging in similar activities or strategies to achieve them, or targeting similar beneficiaries. CDC, APHIS, DHS, DOD, and DOT inspections are directed toward a similar goal—assessing biosafety and biosecurity compliance—and are accomplished through a similar strategy, the inspection process. When these inspections targeted the same entities, we counted the inspection as overlapping.
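The overlap test described here—flagging an entity when two different agencies inspected it within a 2-year window, with a joint inspection counted as a single inspection—can be sketched roughly as follows. The entity names, agencies, and dates below are hypothetical illustrations, not GAO's data; joint CDC/APHIS inspections are collapsed into one "SAP" record, mirroring the report's treatment of the jointly run Select Agent Program.

```python
from datetime import date

# Hypothetical inspection records: (entity, inspecting agency, date).
inspections = [
    ("Entity A", "SAP", date(2009, 5, 1)),
    ("Entity A", "DHS", date(2010, 3, 15)),  # second agency within 2 years
    ("Entity B", "SAP", date(2009, 6, 1)),
    ("Entity B", "SAP", date(2011, 5, 20)),  # same agency twice: no overlap
    ("Entity C", "DOT", date(2009, 7, 1)),
]

def entities_with_overlap(records, window_days=730):
    """Return entities inspected by two different agencies within the window."""
    by_entity = {}
    for entity, agency, when in records:
        by_entity.setdefault(entity, []).append((agency, when))
    overlapping = set()
    for entity, visits in by_entity.items():
        for i, (agency_i, date_i) in enumerate(visits):
            for agency_j, date_j in visits[i + 1:]:
                if agency_i != agency_j and abs((date_i - date_j).days) <= window_days:
                    overlapping.add(entity)
    return overlapping

print(entities_with_overlap(inspections))  # -> {'Entity A'}
```

Dividing the count of overlapping entities by the 374 registered entities yields the overlap percentage reported in this appendix.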
Specifically, we analyzed fiscal year 2009 through July 2011 inspection data from CDC, APHIS, DHS, DOD, and DOT, and identified any instances where two different agencies had inspected the same entity on separate occasions within 2 years of each other. We counted joint inspections as a single inspection in our analysis, regardless of the agencies involved. For example, joint APHIS/DHS inspections or CDC/DAIG inspections were counted as one inspection. We chose the 2-year time frame because the Select Agent Regulations (SAR) require certification renewal every 3 years and it is the policy of CDC and APHIS to inspect the entity before recertifying; this time frame therefore represents a somewhat conservative measure of overlap. Because CDC and APHIS manage the Select Agent Program (SAP) jointly and conduct joint inspections, where applicable, we collapsed these agencies in reporting on overlap to show where DOT, DHS, or DOD inspections overlapped with either a CDC or APHIS inspection. We also relied on our 2011 report for our definition of duplication—when two or more agencies or programs are engaged in the same activities or provide the same services to the same beneficiaries. The entities experiencing overlap were identified from the population of 374 entities registered with CDC or APHIS at any point in the 3-year period. Some entities register and de-register each year, so the number of entities registered at any point in the 3-year period will differ from a single point-in-time count of registered entities. Using the number of entities registered at any point in the 3-year period, the extent of overlap as we have defined it is 15 percent. To estimate the likelihood that an entity would experience overlap in federal inspections, we developed a logistic regression model using the following entity characteristics: the number of laboratories, the highest biosafety level (BSL) of laboratories within an entity, and the number of different select agents the entity works with.
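The kind of logistic regression and Wald chi-squared comparison described here can be sketched with simulated data. Everything below—the predictor distributions, coefficients, and outcome—is an illustrative assumption, not GAO's actual model or data; the fit uses plain iteratively reweighted least squares so only NumPy is required.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 374  # size of the registered-entity population in the report

# Hypothetical entity characteristics: number of labs, highest biosafety
# level (2-4), and number of distinct select agents handled.
labs = rng.poisson(5, n) + 1
bsl = rng.integers(2, 5, n)
agents = rng.poisson(3, n) + 1

# Simulated outcome: larger, higher-BSL entities more likely to see overlap.
true_logit = -4.0 + 0.3 * labs + 0.4 * bsl + 0.1 * agents
y = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

X = np.column_stack([np.ones(n), labs, bsl, agents]).astype(float)

def fit_logit(X, y, iters=25):
    """Fit logistic regression by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        w = p * (1 - p)
        # Newton step: beta += (X' W X)^-1 X' (y - p)
        beta = beta + np.linalg.solve((X * w[:, None]).T @ X, X.T @ (y - p))
    return beta

beta = fit_logit(X, y)
# Wald chi-squared per coefficient: (coefficient / standard error)^2, with
# standard errors from the inverse observed information matrix.
p = 1 / (1 + np.exp(-X @ beta))
cov = np.linalg.inv((X * (p * (1 - p))[:, None]).T @ X)
wald = beta**2 / np.diag(cov)
```

Comparing the Wald statistics of competing, correlated predictors (as the appendix describes for number of laboratories versus staff counts) indicates which one contributes more explanatory power when only one can be kept.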
The model did not include the number of staff or the number of principal investigators, both of which are strongly correlated with the number of laboratories. We chose to include the number of laboratories in the model because it was a stronger predictor of the likelihood of overlap, on the basis of the Wald chi-squared test, than these two characteristics. To assess the extent of duplication in the overlapping inspections, we surveyed the 55 entities that had been inspected by more than one agency in a 2-year period (that is, the population experiencing overlap). The web-based survey of all such entities gathered information on the (1) extent to which there was duplication in specific inspection activities related to the preparation, execution, and close-out phases of the two inspections the entities had undergone, (2) costs and operational effects of inspections, (3) solutions for mitigating the negative effects of multiple inspections, and (4) positive and negative aspects of joint inspections (see app. II for a copy of the survey). The survey was sent to the responsible official (RO) of each entity, and we received an 86 percent response rate, with 47 of the 55 entities responding. To gather preliminary information and develop our survey, we conducted focus groups with about 50 laboratory workers from entities that have experienced federal inspections, including responsible officials, biosafety officers, principal investigators, and technical staff from federal, academic, and private entities that work with select agents. To understand general and agency-specific inspection activities, we interviewed inspectors with CDC, APHIS, DHS, DOD service IGs, and DOT and examined inspection files. We also spoke with professional organizations, such as ABSA and the American Society for Microbiology, and experts in the safety of select agent laboratories. We took several steps to ensure the reliability of survey responses and the analytical process.
To ensure the content and wording of the survey were clear, accurate, unbiased, and nonburdensome, we solicited subject-matter expert reviews from affected agencies (CDC, APHIS, DAIG, and PHMSA), an interest group (ABSA), and an internal (GAO) survey expert. We also pretested the survey with lab staff from a variety of laboratories, including an Army lab, an academic lab, a private lab, and a large federal lab. The survey was deployed through the web, and each respondent had a unique user identification and password. To increase the response rate, follow-up e-mail and telephone calls were made to nonrespondents. Once we had reached an 80 percent response rate, we reviewed responses for (1) item nonresponse, (2) obvious errors or outliers in responses, and (3) “no” answers to question 1. We then followed up with respondents, as necessary, to get additional information and clarification. Changes identified through this process were recorded, and appropriate response cleaning was conducted in the analysis. To eliminate data-processing errors, the computer program that generated the survey results was independently verified by an internal SAS expert who was not involved in the engagement. To assess the costs of overlapping federal inspections and the effects on laboratory operations, we (1) analyzed agency budget data and (2) gathered data from focus groups and our survey of entities on the costs and effects of inspections. Specifically, to assess the costs to the federal government, we requested information on the government and contracted staff involved in inspections and their total compensation (salary and benefits), the number of hours spent on inspection activities, and the associated travel costs for inspections for fiscal years 2010 and 2011. 
DHS was unable to provide detailed cost data as requested because its inspections of select agent laboratories are provided under a contract that does not identify labor hours, hourly contract rates, salaries, and benefits. However, DHS did provide an estimate of the cost of inspections for fiscal years 2010 and 2011. This estimate was based on assumptions about typical staffing levels and costs for the most common types of inspections, multiplied by the number of inspections conducted in each fiscal year. While DHS believes these estimates include travel costs, they do not include training costs. DOT was not able to provide cost data that include travel because its system does not distinguish between inspection travel and other travel. As a result, DOT inspection amounts reflect only compensation costs. To assess the reliability of agency data on the costs of inspections, we provided the agencies with a detailed data-collection instrument with specific data requests and precalculated formulas. We reviewed the data for obvious errors, compared these data with cost data from earlier fiscal years, and followed up with officials to discuss data reliability, significant changes across fiscal years, obvious errors, or omissions. We determined these data were sufficiently reliable for our purpose of providing the approximate aggregate federal costs of inspections for the four agencies. To assess the costs to entities, we asked surveyed entities to provide labor and cost information about the most recent inspection identified in our overlap analysis (see app. II, question 6, for specific wording). Specifically, we requested data for five occupational groups that tend to be involved in federal inspections: (1) ROs and alternate responsible officials, (2) principal investigators, (3) owners or controllers, (4) laboratory staff, and (5) support staff (security, administrative, maintenance, and information technology).
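The entity-cost averaging used in this appendix (average yearly salary divided by a 2,080-hour work year, multiplied by reported labor hours, then averaged across respondents) is a simple computation; the figures below are hypothetical stand-ins for survey responses, not GAO's data:

```python
# Hypothetical survey responses for one occupational group:
# (number of staff involved, total labor hours, average yearly salary).
responses = [
    (2, 40, 90_000),
    (1, 16, 75_000),
    (3, 60, 82_000),
]

HOURS_PER_YEAR = 2080  # full-time work year used to derive an hourly rate

def inspection_cost(hours, yearly_salary):
    """Labor cost of one inspection: hourly rate times reported hours."""
    return yearly_salary / HOURS_PER_YEAR * hours

costs = [inspection_cost(hours, salary) for _, hours, salary in responses]
average_cost = sum(costs) / len(costs)
print(round(average_cost, 2))  # -> 1557.69
```

Summing such averages across the five occupational groups gives an overall per-inspection entity cost of the kind reported in the Highlights.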
We asked entities to provide the number of staff involved and the total labor hours spent by each occupational group, as well as the average salary for each group. We used these data to develop overall averages of the personnel and labor costs entities experience as a result of inspections by each of the five federal agencies in our review. For data reliability purposes, we checked for outliers or obvious errors and followed up where such issues were identified. We calculated the average cost of an inspection by dividing the average yearly salary by 2,080 to get an hourly rate and then multiplying that rate by the number of labor hours provided. We also surveyed entities about nonquantifiable costs and the operational effects of multiple inspections and analyzed comments related to costs from our focus groups of lab staff. To assess actions to reduce the costs and negative effects of overlapping inspections, we interviewed agency officials and interest groups, reviewed key reports addressing the issue, and analyzed comments related to solutions from our focus groups of lab staff. In addition, in our survey, we sought entities' opinions about solutions for overlapping inspections, including their experiences with joint inspections, which have been proposed as a solution for minimizing inspection duplication. We conducted our work from March 2011 through January 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Welcome to the GAO survey on inspections of Select Agent (SA) entities. To complete this survey, you may consult with others in your lab, as necessary.
Your responses to this web-based survey are essential in order for us to provide complete and accurate information on these issues to Congress. We will use your responses, together with the responses of other recipients, to develop aggregate statistics, observations, and findings. The identities of individual respondents will not be disclosed in our final report. For questions, contact Jason Fong at fongj@gao.gov or 202-512-xxxx. If you experience technical problems with this web questionnaire, please contact Rebecca Shea at shear@gao.gov or 202-512-xxxx. Important! JavaScript must be enabled on your browser in order to use this web questionnaire. This survey has 7 main sections covering the following key issues:

1. Inspecting Agencies
2. Inspection Activities:
   a. Preparation
   b. Execution
   c. Close-out/response
3. Quantifiable Costs of Inspections
4. Non-quantifiable Costs of Inspections
5. Effects of Multiple Inspections
6. Solutions
7. Joint Inspections

Click on "Menu" to the left of this screen to display a navigation panel that can be used to move from section to section. Click on "Summary" to the left of this screen to print a copy of the survey.
AAALAC: Association for Assessment and Accreditation of Laboratory Animal Care
APHIS: Department of Agriculture's Animal and Plant Health Inspection Service
BSC: Biosafety cabinet
BSO: Biosafety Officer
CDC: Department of Health and Human Services' Centers for Disease Control and Prevention
DAIG: Department of the Army's Inspector General
DHS: Department of Homeland Security
DOE: Department of Energy
DOT: Department of Transportation
EPA: Environmental Protection Agency
NIH: National Institutes of Health
OSHA: Occupational Safety and Health Administration
PI: Principal investigator
RO: Responsible official (CDC designated)
SA: Select Agent
UN certified packaging: packaging designed/tested in accordance with specifications of the United Nations Committee of Experts on the Transport of Dangerous Goods

Unless otherwise specified, the questions in this survey apply only to inspections for biosafety and biosecurity as required under Select Agent regulations or by those agencies that own or fund your entity's Select Agent research. Please check the following information for accuracy. If you need to make changes, please do so in the appropriate editable field(s) below.

Respondent telephone

1. GAO received inspection data from CDC, APHIS, DHS, DOT, and DAIG, and identified your entity (lab) as one that has been inspected by more than one of these agencies within the past 2 years. Specifically, these data indicate your lab was inspected by the following agencies in the past 2 years: ______ in ______ and ______ in ______. Is this correct?

1a. If the information above is not correct, please note the necessary corrections in the space below.

2. States, local governments, accrediting bodies (such as AAALAC and the Joint Commission), military organizations (such as the Medical Command and Army Materiel Command), and other federal entities such as NIH, OSHA, EPA, and DOE might also inspect SA-registered labs.
If your lab was inspected in the past 2 years by any other entities (for SA compliance or other reasons), please provide the name of the inspecting entity and inspection date.

2a. Other inspecting entity and date of inspection:
2b. Other inspecting entity and date of inspection:
2c. Other inspecting entity and date of inspection:
2d. Other inspecting entity and date of inspection:
2e. If you would like to describe any areas of overlap between these "other" inspections and the inspections your lab has received from CDC, APHIS, DAIG, DHS, or DOT, please do so in the space below.

Inspections can be seen as taking place in 3 broad phases--(1) preparation, (2) execution, and (3) close-out/response--with specific activities occurring in each phase.

3. Thinking about your lab's inspections by ______ in ______ and ______ in ______, please indicate whether you performed the following preparation activities as part of your routine activities, specifically for each inspection, or not at all. Please note: not all listed activities are required as part of routine activities or for inspections.

3a. Update and/or confirm records related to Select Agents and Toxins are current and accurate (e.g., records for inventory, training, security, biosafety, etc.) [Did not perform / Not applicable / Don't know]
3b. Prepare and send requested documents to inspecting agency (hardcopy or electronic) [Don't know]
3c. Prepare all needed documentation for inspectors to review during visit (hardcopy or electronic) [Did not perform / Not applicable / Don't know]
3d. Revise and update written safety and security plans (e.g., biosafety, incident response plans, cyber, personnel and physical security) [Don't know]
3i. Decontaminate and prepare labs (i.e., go from a hot lab to a cold lab) [Did not perform / Not applicable / Don't know]
3j. Check equipment calibration and operation (e.g., pressure monitors, autoclaves, BSCs, etc.) [Did not perform / Not applicable / Don't know]
3p. Check Hazmat training records/documents concerning transportation responsibilities are current and available (e.g., general security and safety awareness, function specific) [Don't know]
3r. Other (describe below) [Don't know]

4b. Train inspectors on the use of Personal Protective Equipment (PPE) [Did not perform / Not applicable / Don't know]
4c. Train inspectors on other issues (e.g., entry/exit, lab hazards, decontamination requirements, emergency procedures, alarm sounds, etc.) [Did not perform / Not applicable / Don't know]
4g. Participate in interviews with inspectors (to answer inspector questions and explain lab procedures and operations) [Did not perform / Not applicable / Don't know]
4h. Perform safety or security demonstrations (e.g., emergency response walk-through) [Did not perform / Not applicable / Don't know]
4i. Other (describe below)

5. Thinking about your lab's inspections by ______ in ______ and ______ in ______, please indicate whether you performed the following close-out/response activities specifically for each inspection. Please note: not all listed activities are required for all inspections.

5a. Hold close-out/exit conference with inspecting agency (where agency tells the entity the results of the inspection in broad terms) [Don't know]
5g. Bring equipment back online (e.g., HVAC, biosafety cabinets) [Did not perform / Not applicable / Don't know]
5h. Other (describe below)

6a. Responsible Official (RO) and Alternate Responsible Official (ARO)
following areas as a result of the inspection by ______ in ______?

7a. Loss of productivity
7b. Delays in funded research
7c. Dollars lost in funded research
7d. Decreased time to complete research
7e. Reduced viability of inventory
7f. Reduced competitiveness for research
7g. Other costs 1 (describe below)
7h. Other costs 2 (describe below)
7i. Other costs 3 (describe below)
Other costs of inspections #1
Other costs of inspections #2
Other costs of inspections #3

8. If you would like to provide additional information about the costs of inspections, or context for your responses, please do so in the space provided below.

9. Thinking about the overall impact of having more than one federal inspection within the past 2 years, has having multiple federal inspections positively or negatively affected your lab operations in the following areas?

9a. Competitiveness for research funding
9c. Time to complete research
9e. Physical viability of inventory (i.e., resilience of the biological sample)
9i. Strength of inventory management and
9j. Actions to correct deficiencies
9k. Justification for additional resources to
9l. Ability to have specialized inspections in which agencies focus on different areas (e.g., biosafety, security, animal husbandry, SA transport)
9m. "Coverage" in helping to identify problems (i.e., more than one set of "eyes on the problem")
9n. Other effect (describe below)
9o. Other--describe

10. If you would like to provide context for your responses or additional information about the positive or negative effects of multiple federal inspections, please do so in the space provided below.

11. In your opinion, what is a reasonable federal inspection cycle for select agent laboratories?
1. More than once per year
2. Once per year
3. Every other year
4. Every three years
5. As needed
6. Other

11a. Please explain your response.

12. In your opinion, how effective would the following actions be in reducing the negative effects of multiple federal inspections?

12a. Conducting joint inspections
12b. Establishing an inspection czar (i.e., an individual who directs and coordinates SA inspection programs and strategy)
12c. Designating a single agency with
12d. Establishing a single set of inspection standards that all agencies accept (in areas where authorities overlap)
12e. Inspecting to requirements rather than
12f. Training inspectors to one set of standards with requirements for non-compliance findings
12g. Ensuring inspectors are well
12h. Training lab staff about the different agency missions and purposes for inspecting
12i. Providing an opportunity to discuss, clarify, and rebut inspection findings
12j. Other (describe)
12k. Other--describe

13. Please explain why you think the actions above would or would not be effective at reducing the negative effects of multiple federal inspections.

14. If you would like to provide additional information about solutions for reducing the costs of multiple inspections, please do so in the space provided below.

15. Has your lab received a joint inspection (e.g., CDC/APHIS, CDC/DAIG, CDC/DHS, APHIS/DHS inspect concurrently)?
1. Yes
2. No (GO TO QUESTION 16)
3. Don't know (GO TO QUESTION 16)

15a. What are the positive aspects of joint inspections compared to single agency inspections?
15b. What are the negative aspects of joint inspections compared to single agency inspections?

16. If you would like to clarify any of your responses to this survey, or comment on any other related topic, please do so in the space below.

17. If you have completed the questions in this survey, please move the check to the "Completed" button below. (Your answers will not be used until you have checked "Completed.")
1. You may view and print your completed survey by clicking on the Summary link in the menu to the left.
2. When you are done, click on the "Exit" button below to exit the survey and send your responses to GAO.

Thank you for your help.

In addition to the contact named above, Sushil Sharma, Assistant Director; Rebecca Shea; Jason Fong; Elaine Vaurio; Jim Ashley; and Laurel Rabin made key contributions to this report.

Between 2009 and 2011, there were roughly 374 entities across the United States conducting research with select agents such as anthrax, which have the potential to threaten health and safety. Inspections are one means of ensuring safety and compliance with regulations.
However, several federal agencies--CDC, APHIS, DOT, DHS, and DOD--conduct such inspections, creating significant potential for overlap and duplication of effort. In this context, GAO was asked to assess (1) the extent of overlap and potential duplication in federal inspections of select agent entities, (2) the costs of such overlap and effects on laboratory operations, and (3) actions to reduce the costs and negative effects of any overlap. To answer these objectives, GAO analyzed agency data, surveyed entities, held focus groups with lab staff, and interviewed agency officials. About 15 percent of entities registered to work with select agents were subject to inspection overlap (multiple federal agencies inspecting within a 2-year period). Entities experiencing overlap tended to be larger ones, with more laboratories, principal investigators, and staff. Although there was overlap between Department of Transportation (DOT) inspections and those of the Centers for Disease Control and Prevention (CDC) and the Animal and Plant Health Inspection Service (APHIS), they were generally not duplicative because specific inspection activities tended to differ, according to GAO's survey of entities experiencing overlap. For example, DOT inspections tended to focus on transportation issues, such as checking hazardous materials and transportation security plans, rather than general biosafety issues. The Department of Homeland Security (DHS) and Department of Defense (DOD) inspections, however, tended to be more duplicative with those of CDC and APHIS. For example, both review the same documents, require safety and security demonstrations, conduct inventory inspections and personnel interviews, and provide corrective action plans. 
While inspections are important for safety and compliance, there is no value added when federal agencies are expending resources to conduct the same work and, in some cases, reinspecting before entities have had time to respond to findings from a previous inspection. The costs of overlapping federal inspections and effects on lab operations are difficult to quantify because agencies and entities generally do not track them and some costs are not quantifiable. Although GAO could not quantify the portion of federal and entity costs directly attributable to overlap, it could quantify the costs of inspections in general. According to agency data, the approximate overall federal cost of fiscal year 2010 and 2011 inspections was $2.1 million. On average, the entity costs per inspection were nearly $15,000, and staff time per inspection was 380 hours, according to the GAO survey. While surveyed entities reported that inspections can help correct deficiencies and improve accountability, most reported moderate to significant nonquantifiable costs of inspections due to loss of productivity and delays in research. In addition, according to surveyed entities, overlapping inspections negatively affected lab productivity, staff morale, available time to complete research, and the research schedule. Because many of these entities are federal laboratories or are funded through federal grants, these costs are passed on to the federal government. Actions to reduce the costs of overlapping and duplicative inspections include better coordination among federal agencies and greater consistency in the application of standards, according to various experts and surveyed entities. CDC has taken actions to better coordinate inspections with other agencies, for example, by increasing the use of joint inspections.
But such actions, including joint inspections, do not fully address the negative effects of multiple inspections if agencies apply inconsistent standards and develop separate reports of findings. Well-trained inspectors who apply consistent standards are also needed. Collectively, these actions would reduce the negative effects of overlap and duplication and could increase agencies' acceptance of each other's inspection results. GAO recommends that CDC and APHIS work with DHS and DOD to coordinate inspections and ensure consistent application of inspection standards. HHS, USDA, DHS, and DOD generally agreed with GAO's recommendations and noted various actions they have already taken, or plan to take, to coordinate inspection efforts.
The reports of sexual misconduct at Aberdeen Proving Grounds led the Secretary of Defense to establish the Federal Advisory Committee on Gender-Integrated Training and Related Issues and to ask the Defense Advisory Committee on Women in the Services to meet with trainees and trainers. These incidents also prompted the Secretary of the Army to establish the Senior Review Panel on Sexual Harassment. In November 1996, the Secretary of the Army established the Senior Review Panel on Sexual Harassment. The panel's mission was to make recommendations to improve the human relations environment in which soldiers live and work, with the specific goal of eradicating sexual harassment in the Army. The panel consisted of seven members, including two retired general officers recalled to active duty, two active duty general officers, a senior noncommissioned officer, and two DOD civilians. The Senior Review Panel forwarded its report and recommendations to the Secretary of the Army in July 1997. It included 40 recommendations, of which 14 dealt with training and related issues. In June 1997, in response to the sexual misconduct incidents at Aberdeen Proving Grounds, the Secretary of Defense established the Federal Advisory Committee on Gender-Integrated Training and Related Issues. Former Senator Nancy Kassebaum Baker chaired a panel of 11 that included civilians, retired officers, and a retired senior noncommissioned officer. The Secretary directed the Committee to assess the training programs and policies of the Army, Navy, Air Force, and Marine Corps and make recommendations to improve initial entry training. The Committee issued its report to the Secretary of Defense on December 16, 1997. It made 30 recommendations covering the full cycle from recruitment through basic and advanced training.
The Defense Advisory Committee on Women in the Services (DACOWITS) has been advising secretaries of Defense since George Marshall established the Committee in 1951. DACOWITS, which consists of 30 to 40 civilians, makes recommendations to the Secretary on the roles of women in the Armed Forces and on quality of life issues affecting readiness. As part of its mission, DACOWITS members conduct annual visits to selected Army, Air Force, Navy, Marine Corps, and Coast Guard installations, both here and overseas. These visits serve two purposes: (1) to provide the Secretary of Defense with insight into the thoughts and perceptions of servicemembers in the fleet and the field and (2) to determine what issues DACOWITS will concentrate on in the future. In November 1996, the former Secretary of Defense requested that DACOWITS visit training installations to meet with trainees and trainers in the training environment. In February 1997, the current Secretary of Defense endorsed the request. DACOWITS provided a report to the Secretary of Defense summarizing these visits. In its report, DACOWITS recommended continued visits to training installations, but made no recommendations on military training. The Army’s Senior Review Panel on Sexual Harassment formed four teams, one to review Army policies and three for data collection. Each field team consisted, on the average, of six military personnel and one civilian. The Chair, the Vice-Chair, or the Deputy Assistant Secretary of the Army (a member of the panel) accompanied each field team during their visits. Other panel members traveled with the teams as often as possible. Visits lasted 1 to 4 days depending on the numbers of participants in the various activities. Before the visits, the participants for the individual interviews, focus groups, and survey were selected and scheduled. Generally, the visit started with a briefing to present the purpose of the activity and a description of the team’s data collection efforts. 
Next, the team divided into smaller groups to conduct individual interviews, lead focus groups, or administer surveys. These activities ran concurrently, and team members rotated among them at different times. Visits ended with a briefing that gave commanders the opportunity to begin corrective actions. Panel members and the working group collaborated in writing the panel’s report; once a near-final draft was generated, the panel members met for a final review and agreement on the content. The Army’s Senior Review Panel on Sexual Harassment used four methods to collect data: individual interviews, focus groups, surveys, and observations. According to the senior social scientist detailed to the panel, the field teams conducted interviews and focus groups using carefully developed protocols to obtain information on the human relations environment. Members of field teams conducted individual interviews with 808 military and civilian Army leaders and personnel in Army support groups. Focus groups consisted of randomly selected, single-gender groups of 8 to 12 people organized by rank or category. Participants totaled 7,401 soldiers and 1,007 civilians. Facilitators and note takers of the same gender as the groups conducted the sessions. All data obtained through these two activities were entered into a computer for analysis. The working group, which consisted of more than 40 military and civilian personnel, developed main themes or categories and placed the perception data under those categories. Data were then analyzed by rank, by gender, and by question. The written surveys addressed leadership, cohesion, and sexual harassment. Field teams administered the surveys to 22,952 servicemembers. Separate surveys were developed for trainees, trainers, and the general Army population. The working group analyzed survey data using a standard statistical analysis software package. 
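The working group's categorize-and-tally analysis can be sketched as a simple frequency count of coded comments. The categories, ranks, and counts below are purely illustrative, not the panel's actual data:

```python
from collections import Counter

# Hypothetical coded comments as (category, rank, gender) tuples,
# standing in for the working group's database of focus group notes.
comments = [
    ("leadership", "E-4", "F"),
    ("sexual_harassment", "E-2", "F"),
    ("leadership", "E-6", "M"),
    ("leadership", "E-4", "F"),
]

# Tally comments by category, then break the tally down by gender,
# mirroring the analysis "by rank, by gender, and by question".
by_category = Counter(cat for cat, _, _ in comments)
by_gender = Counter((cat, gender) for cat, _, gender in comments)
```

A breakdown by rank would follow the same pattern, keyed on the second field of each tuple.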
Statistically projectable results appear in the report by question and, in some cases, by gender. Observations were made during visits to barracks and other facilities and through informal conversations with military and civilian personnel, family members, and others. The seven panel members, supported by the working group, collected data at 59 Army installations worldwide, selected using a stratified random sampling design. Stratification was based on the type and location of the installation. The study took 8 months to complete and obtained information from over 32,000 Army personnel. The panel’s methodology supported making conclusions and recommendations. Focus groups were used in conjunction with surveys not only to confirm the survey data but also to provide texture and perspective to the data. The focus groups were of an appropriate size and were all asked the same questions, in the same order, by trained moderators. However, the number of questions asked of many of the focus groups was significantly greater than the five or six recommended by focus group literature. For example, the set of questions for trainee focus groups consisted of 15 questions and the set for trainers consisted of 13 questions. Focus group discussions were not tape recorded because of concern that recording would inhibit the participants, but a note taker took notes, which were content-analyzed. The notes from each focus group session were destroyed after the responses were entered in the database and verified for accuracy, to ensure that participant confidentiality was maintained. Destroying the original documentation to assure confidentiality is considered an appropriate measure by social scientists. In addition, the completed survey forms were also destroyed to assure participant confidentiality. In volume two, the panel provides an extensive discussion of its methodology. 
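A stratified random sampling design of the kind the panel describes, with strata defined by installation type and location, can be sketched as follows. The installation names and strata here are hypothetical, not the panel's actual sampling frame:

```python
import random

# Hypothetical sampling frame: (name, type, location) for each installation.
installations = [
    ("Fort A", "training", "CONUS"),
    ("Fort B", "training", "overseas"),
    ("Fort C", "operational", "CONUS"),
    ("Fort D", "operational", "overseas"),
    ("Fort E", "training", "CONUS"),
    ("Fort F", "operational", "CONUS"),
]

def stratified_sample(items, per_stratum, seed=0):
    """Randomly draw up to `per_stratum` installations from each
    (type, location) stratum, so every stratum is represented."""
    rng = random.Random(seed)
    strata = {}
    for name, typ, loc in items:
        strata.setdefault((typ, loc), []).append(name)
    sample = []
    for group in strata.values():
        sample.extend(rng.sample(group, min(per_stratum, len(group))))
    return sample

selected = stratified_sample(installations, per_stratum=1)
```

Drawing within strata, rather than from the whole list at once, guarantees that each installation type and location appears in the sample.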
Volume two provides details on how participants were selected, along with copies of the focus group questions, the surveys, and the interview questions. Results of the surveys were included in the report, as were the most frequently heard responses in the focus groups. Furthermore, a sufficient amount of data is presented in volume one of the report, which outlines the panel’s conclusions and recommendations, to allow the reader to evaluate them. An area of controversy arose because the survey developers did not pretest the survey questions. The senior social scientist attached to the panel told us that tight time frames precluded the panel from pretesting the survey form. Normally, a pretest is performed to identify problem questions, problems with language interpretation, or unclear instructions, or to determine whether there are questions that respondents will refuse to answer. In this instance, the survey form contained six questions that some respondents in early administrations found inflammatory, offensive, and an invasion of privacy; as a result, some refused to complete the survey. Subsequently, those questions were eliminated and a revised form was used. The data on the six questions were not included in the database, which resulted in an accusation that the panel had eliminated important data from its analysis. The report disclosed the problem and its resolution in the methodology section. We believe that the panel acted responsibly in eliminating the offending questions to avoid a negative effect on the survey return rate. The controversy, however, demonstrates the importance of pretesting survey forms before conducting a survey. The Federal Advisory Committee on Gender-Integrated Training and Related Issues saw its role as listening to the views of trainees, trainers, supervisors, and service officials and providing the Secretary of Defense with its best judgment about what should be done to improve training. 
Small teams of Committee members visited 17 training installations and operational units to gather opinions. Most Committee members visited installations from two services. While the Committee Chairman visited installations for all four services, no Committee member or Committee staff member visited all of the installations. Once at an installation, the Committee members followed the same general schedule: reveille, breakfast with new servicemembers, meetings with command officials, and interviews and focus groups before lunch. After lunch with support personnel, the Committee members conducted additional focus groups and interviews. At the end of each visit, they met with command officials to discuss their findings. Installation visits generally lasted 1 day, although visits to basic training sites were 2-day trips. The visits to the training installations and operational units occurred in September and October 1997. The Committee had two public meetings, the first in July 1997 and the second in October 1997. At the July meeting, service representatives provided information on the services’ recruiting and training programs. At the October 1997 meeting, Committee members discussed their observations and agreed to a partial list of recommendations for the report. The Committee’s staff drafted the report based on the discussions they heard during their installation visits and the public meeting, and memorandums submitted by some of the Committee members in preparation for the October meeting. Committee members received the draft report in early December and revisions were made based on their comments. The Committee Chair discussed the report with Committee members in a series of one-on-one telephone calls to arrive at the final recommendations. The Committee’s primary means of collecting information involved focus group discussions. 
The Committee held 199 focus groups, soliciting opinions from more than 1,000 trainees, 500 trainers, 300 first-term servicemembers, and 275 supervisors at U.S. training installations and operational units. Participants were randomly selected under the supervision of the installations’ inspectors general. Trainees who participated in the focus groups were within 2 weeks of completing their training. Participants in the trainer focus groups had been trainers for at least 1 year. First-term participants were in their initial assignment and had been on the job between 6 and 18 months. Generally, the Committee met with equal numbers of females and males, although, because of the limited number of female trainers and supervisors, this was not always possible. Focus groups included about 10 to 15 people each and were gender-segregated. All of the focus groups were moderated by Committee members, and generally two Committee members, or a Committee member and a Committee staff member, attended each session. The Committee members worked from a set of questions tailored for each service and each type of focus group. Although the number of questions varied by type of focus group, the set of questions for all basic training focus groups consisted of 20 questions, some of which had multiple parts. While some focus groups were scheduled to last only 30 to 45 minutes, most focus group sessions lasted nearly an hour. Committee members also conducted over 100 interviews with service officials, including commanding officers, inspectors general, company or squadron commanders, and senior noncommissioned officers. They also met with representatives of support groups such as chaplains, equal opportunity officers, medical officers, and legal officers. The value of the information included in the Committee’s report for making conclusions and recommendations is limited because the Committee did not follow recommended focus group methodology. 
The Committee believed that a more flexible approach to the discussions would enhance the quality of the exchange between the participants and the Committee members. However, the fact that the same questions were not asked of each similar focus group, along with the number of questions, the size of the groups, and the length of the sessions, may have combined to limit full discussion. In addition, the focus groups’ discussions were not systematically recorded. As a result, the extent to which the recommendations are supported by the Committee’s work cannot be assessed. The Committee staff provided the Committee members with questions for the focus groups. However, according to the staff director, the Committee members were told that the questions were guidelines and did not have to be asked as written. Because the Committee members had the flexibility to ask any question they desired, the responses cannot reliably be compared with each other. Also, the number of questions provided to the panel members was far more than the five or six that focus group literature recommends. For example, the staff provided 20 questions, some of which had several parts, for the Committee to ask Army trainees at Fort Leonard Wood, Missouri. Fifteen trainees participated in the 1-hour focus groups at this installation. If the entire hour had been spent on the questions, only 3 minutes would have been available for each question, or about 12 seconds for each participant to respond. We do not believe that would have been enough time for a meaningful discussion of a question. Finally, even if all the questions were asked as they were written, they were not always asked in the same order each time. Social science literature suggests that the same questions asked in a different sequence may result in different responses. The absence of documentation of the comments made in the individual focus groups was the most serious methodological shortcoming. 
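The time budget implied by these numbers can be computed directly; 60 minutes spread across 20 questions leaves 3 minutes per question, and dividing that among 15 participants leaves roughly 12 seconds per person:

```python
# Time available per question and per participant in a 1-hour focus group
# at Fort Leonard Wood (20 questions, 15 trainees, per the Committee's own figures).
session_minutes = 60
questions = 20
participants = 15

minutes_per_question = session_minutes / questions                   # 3.0 minutes
seconds_per_participant = minutes_per_question * 60 / participants   # 12.0 seconds
```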
While the Committee members took notes during each focus group, these notes were not made part of the Committee’s records, nor were they summarized and included in the report. Without documentation, it is impossible to determine whether the Committee’s work supports its recommendations. Also, the lack of documentation prevented the Committee from analyzing the data to know what comments it heard or how often similar comments were made. Knowing how often a particular kind of comment was made, and the subgroup of the person who made it, are ways of putting the comments in perspective and filtering biases. The report of the Federal Advisory Committee on Gender-Integrated Training and Related Issues does not include a sufficient discussion of the Committee’s methodology and work process. For example, the report states that the Committee conducted discussion groups with randomly selected servicemembers, but it does not explain the random selection process. In addition, the use of terminology such as “randomly selected” implies a level of scientific rigor that was not achieved in this study. The report does not identify the make-up of the discussion groups, discuss what type of data analysis was or was not done, or mention any limitations of the data. Limitations that we believe should have been mentioned are that the report was based on opinions and that the results cannot be generalized to the entire military training population. Also, the report often presents opinions in a manner in which they can be misinterpreted as facts based on empirical data. For example, the report says that the Committee members observed that integrated housing is contributing to a higher rate of disciplinary problems, but, according to the Chairman, the Committee did not obtain any data to support this statement. In addition, the report contains many statements that include words like “most”, “many”, and “majority”. 
These words lead a reader to believe that the Committee counted responses to particular questions or polled the focus group participants. The Chairman said that the Committee does not have quantitative data. The report also does not explain the process the Committee used to formulate its recommendations. Although the Committee held a public meeting in October 1997 after its installation visits had been completed, the recommendations on separate barracks for male and female recruits and on the organization of gender-segregated platoons, divisions, and flights were not made until after that meeting. Furthermore, those recommendations were not discussed by the Committee as a whole, but rather in a series of calls to individual Committee members. The mission of the DACOWITS effort was to provide the Secretary of Defense with an overview of broad issues raised by trainees and trainers of both genders throughout initial entry training. A secondary purpose was to help determine what issues DACOWITS would concentrate on in the future. The Chair and the Executive Director of DACOWITS selected seven members (all were women) to visit training installations. Members were selected based on their DACOWITS experience and the quantity and quality of their previous installation reports. Typical visits were conducted by one DACOWITS member and lasted 2 days. Visits began with a briefing by the commanding officer about the school and its mission, followed by trainee, trainer, and supervisor focus groups. At the end of a visit, the DACOWITS member met with command officials to share the results of the focus groups. Reports, summarizing the most frequently heard comments from the various focus groups, were written at the conclusion of each visit. In addition, the seven members met at DACOWITS’ 1997 fall conference to discuss the results of their visits. Using the reports and the conference discussion, the 1997 DACOWITS Chair wrote the report. 
The report was released by the Secretary of Defense in January 1998. DACOWITS used focus groups as its primary means of data gathering. Overall, the members solicited the opinions of over 1,200 trainees, trainers, and supervisors in the Army, the Navy, the Marine Corps, the Air Force, and the Coast Guard in focus group discussions at 12 gender-integrated training schools at 9 installations. The schools included enlisted basic, intermediate, and advanced training, and officer advanced training. Most focus groups were gender-segregated, and trainees, trainers, and supervisors were in separate focus groups as well. DACOWITS requested trainees with at least 40 percent of training completed. Many trainees had completed their training and were awaiting graduation. The groups averaged 20 participants, and sessions lasted about 60 minutes, although some were shorter. Before meeting with the Committee members, focus group participants viewed an 18-minute video that explained the mission of DACOWITS and highlighted some of the gender equality issues that DACOWITS had worked on in the past, such as sexual harassment, discrimination, child care, and the combat exclusion policy. The video set the stage for the two open-ended questions that all the participants were asked: (1) “How is it going?” and (2) “If you had five minutes to speak with the Secretary of Defense, what would you tell him?” According to the former Chair, DACOWITS uses these questions during all installation visits. Training installation visits took place between July and November 1997. At the conclusion of each visit, a DACOWITS member completed a standardized installation visit report summarizing the most frequently heard comments from the focus groups. The comments included in these reports were entered into a computer and sorted by frequency across the services as well as by individual service. Issues were included in the report to the Secretary based on frequency. 
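The frequency sorting described above can be sketched as a tally of issues across all services and by individual service. The issues and counts below are hypothetical, not drawn from the actual installation visit reports:

```python
from collections import Counter

# Hypothetical comments from installation visit reports, tagged by service.
report_comments = [
    ("physical training standards", "Army"),
    ("trainer workload", "Navy"),
    ("physical training standards", "Navy"),
    ("physical training standards", "Air Force"),
    ("trainer workload", "Army"),
]

# Frequency across all services combined, and by individual service,
# mirroring how issues were selected for the report to the Secretary.
overall = Counter(issue for issue, _ in report_comments)
by_service = Counter(report_comments)

most_common_issue, count = overall.most_common(1)[0]
```

Issues would then be ranked by `overall` counts to decide which ones merit inclusion.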
The individual installation visit reports support the opinions and perceptions that appear in the report to the Secretary of Defense. Some focus groups may have been too large or may not have had enough time to allow ample participation by most of the participants. The literature suggests that focus groups should be no larger than 12 participants. During the DACOWITS visits to the training schools, some groups were as large as 20 participants. Groups larger than 12 usually do not allow sufficient opportunity for everyone to actively participate in the discussion and are more difficult to manage. Also, the majority of the sessions were about an hour long, and some ran for only 45 minutes, about half the time recommended by focus group literature. DACOWITS used two questions to generate discussion. However, time may still have been a problem, since the questions were very open-ended and could be taken in virtually any direction by a participant. This would likely increase the amount of time needed, as each participant not only answered the discussion questions but also reacted and responded to the issues raised by others. DACOWITS did not document the individual focus groups as recommended by focus group literature. Instead, DACOWITS members prepared installation visit reports that summarized the opinions they heard most frequently. While the installation reports document the work performed and the issues surfaced during the training installation visits, they do not capture enough information about the discussions in each focus group to be fully useful. For example, they do not provide enough information on the rank or gender of the groups that raised an issue, which would help put the comments into perspective. As we stated earlier, all of the DACOWITS members making training installation visits were women. 
Some focus group literature suggests that the gender of the moderator and the gender of the focus group should be the same, particularly when the issues being discussed are sensitive or have a direct bearing on the opposite sex. Also, some focus group literature suggests that men are more likely to tell a woman moderator what they think will impress or please her rather than what they actually think. The use of female moderators for male focus groups, in conjunction with the impression of women’s advocacy that the video is likely to have conveyed, may have made some males hesitant to raise issues or perceptions that might be construed as anti-female. Because DACOWITS did not document each of its focus groups, it is impossible to determine whether the use of women moderators with all-male focus groups had an effect on the responses of the male participants. The DACOWITS report provides some methodological information for the reader but omits some key information. First, the report does not provide any details on how the Committee members documented the focus groups. Second, the report does not clearly explain the process used by DACOWITS to determine what issues would be included in the report. Third, while the report provides some detail about the make-up of the focus groups, it does not describe how the focus group participants were selected. It should be noted, however, that, as recommended by focus group literature, the report clearly states its two major limitations: (1) the opinion and perception information included in the report has not been independently validated or confirmed and (2) the Committee did not visit any gender-segregated training facilities. Also, in accordance with the limitations of the methodology, the DACOWITS report made no conclusions or recommendations on military training. 
We provided a draft of this report to DOD, the Chairman of the Federal Advisory Committee on Gender-Integrated Training and Related Issues, and the former Chair and Military Director of DACOWITS for comment. We discussed our report with Department of the Army officials, who concurred with our observations on the Army’s Senior Review Panel on Sexual Harassment. We also discussed the draft report with the Executive Director of the Federal Advisory Committee on Gender-Integrated Training and Related Issues who suggested some clarifications to the report, which we considered and made as appropriate. In addition, we discussed the draft with the military director of DACOWITS, who stated that DACOWITS does not aim to meet the standards of academic research but instead uses focus groups to collect opinions and identify issues for further study. Finally, we discussed the draft with the former Chair of DACOWITS who suggested some technical corrections which we made as appropriate. We reviewed the reports from the Army’s Senior Review Panel on Sexual Harassment, the Federal Advisory Committee on Gender-Integrated Training and Related Issues, and DACOWITS. We reviewed literature on the conduct and use of focus groups, since that was a common methodology across the three studies. We focused on the methodological information provided in the reports, including any limitations on the use of the information. We reviewed supporting documents to determine if the evidence collected supports making conclusions and recommendations. We did not evaluate the validity of specific conclusions and recommendations made by any of the studies. We met with the Chairman and Executive Director of the Federal Advisory Committee on Gender-Integrated Training and Related Issues, the former Chair and Military Director of DACOWITS, and with the senior social scientist of the Army’s Senior Review Panel on Sexual Harassment to thoroughly explore the approach and methodology used in these efforts. 
Our review was requested by the former Ranking Minority Member of the House National Security Committee and Mr. Meehan. We are addressing the report to the current Ranking Minority Member of the House National Security Committee, Mr. Skelton, as a courtesy. We are addressing this letter to Senator Robb because it is related to other work on gender issues in the military that we have undertaken at his request. We conducted our review in February and March 1998 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretaries of Defense, the Army, the Air Force, and the Navy; the Chairman of the Joint Chiefs of Staff; and the Commandant of the Marine Corps. We will make copies available to any other interested parties. The major contributors to this report were Carol R. Schuster, William E. Beusse, Carole F. Coffey, George M. Delgado, and Kathleen M. Joyce. If you or your staff have any questions concerning this report, please call me on (202) 512-5140. Focus groups are carefully planned small group discussions involving people with similar characteristics who are knowledgeable about an issue but do not know each other well. The views expressed in focus groups are not necessarily representative of a population and statistical estimates cannot be derived from the results. Furthermore, focus groups cannot be used to determine the extent of a problem. Focus groups produce qualitative data that provide insights into attitudes, perceptions, and opinions of the participants. They are most often used before, during, or after quantitative research procedures such as surveys. For example, focus groups can be used before a survey is undertaken to help a research team learn about the target audience or determine the appropriateness of the questionnaire. Focus groups are often used with surveys to confirm findings and to obtain greater breadth and depth of information. 
Finally, focus groups are often used as a follow-up to surveys to help interpret responses. On occasion, focus groups are used alone when opinions and perceptions are more important than how many people hold such views. The size of the focus group is an important, but often overlooked, element of a successful group discussion. The literature on focus groups suggests that an appropriate size for a focus group is 6 to 12 people. A focus group with fewer than six participants sometimes has problems with productivity because the group has fewer experiences to share. Also, small groups can be more easily affected by people who know each other, by uncooperative participants, or by participants who view themselves as experts on the topic. Groups that have more than 12 people usually do not allow people sufficient opportunity to actively participate in the discussion, making the groups difficult to manage. The composition of the focus group is also important. Participants should share some similar characteristics but be diverse enough to allow for differences of opinions. The topic of discussion and the information to be obtained dictate the types of characteristics shared. However, generally participants should be similar in age, occupation, education, and social class. Focus groups with distinct differences among participants such as trainees and trainers or junior and senior enlisted personnel do not work well because of limited understanding of other lifestyles and situations. Furthermore, some participants may be inhibited and defer to those they believe to be better educated or more experienced or of a higher social class. Sometimes, the gender of participants can affect the outcome of a focus group and some social scientists recommend against mixing genders because men and women tend to perform for each other. When the opinions of disparate groups are needed, focus group literature recommends holding separate groups for each distinct group. 
Focus group discussions are conducted informally and guided by trained moderators who encourage participants to share their thoughts and experiences. Trained, experienced moderators are critical to the success of a focus group. An unqualified moderator can easily undermine the reliability and validity of focus group findings. Successful moderators are good listeners who can make people feel relaxed and eager to talk. Moderators must control a group without being obvious and must be aware of time. Since the literature suggests that focus groups should be scheduled for 90 minutes and run no more than 120 minutes, moderators need to keep the discussion on track and move the participants from one topic to the next. Moderators should be aware of the influence that they have on the type and amount of data obtained. Moderators must be aware of their own biases that might affect the validity of the data and take care not to provide cues to participants about desirable responses and answers. If dealing with sensitive subjects where views could vary according to factors such as gender or race, it is recommended that the moderator be similar in gender or race to the participants. Finally, moderators must have sufficient knowledge of the topic to put comments in perspective and follow up on critical areas of concern. Questions are the heart of the focus group discussion. The literature on focus groups suggests five or six questions for a discussion group. The questions need to be carefully thought out and phrased to elicit the maximum amount of information in the limited time available. Questions should not suggest potential answers, and yes-or-no questions should be avoided. Questions should be asked in the same order in every focus group and should be sequenced from most important to least important to ensure that the most necessary information is obtained from the participants if time runs out. 
The sequence is important because the questions may interact with one another to form the stimulus that generates the responses. If the questions are asked in a different order at each focus group, the stimulus is changed and the responses will differ. The results of the focus groups’ discussions should be documented on a session-by-session basis. Focus group literature agrees that the best way to do this is by tape recording supplemented with written notes. However, if tape recording is not feasible or would inhibit the participants, note taking can be sufficient, provided the notes are complete enough to be analyzed. A systematic analysis of focus group data is also important. The analysis can be either qualitative or quantitative, but it must be systematic and verifiable. It must be systematic in that it follows a documented step-by-step process, and verifiable in that it permits others to arrive at similar conclusions using the available documents and the raw results. Social scientists have noted that novice researchers tend to see selectively only those parts of the discussion that confirm their particular point of view. Often, a researcher will go into the discussion with certain hunches about how participants might feel. As a result, the researcher tends to look for evidence to support these hunches and to overlook data that present different points of view. A systematic and verifiable process helps researchers filter out bias and present the data as objectively as possible. Once data are collected and analyzed, the data should be reported and, if appropriate, conclusions and recommendations made. A report should clearly state what the purpose of the study was, what its scope was, how the data were collected and analyzed, and what, if any, significant limitations exist on the data or the use of the data. 
For example, studies that used focus groups as the primary method of data collection should clearly state that the data being reported are opinion or perception. If the opinions have been substantiated by other types of data, this should be clearly stated in the report. The report should also include the results of the focus groups, and the results should be clearly stated so that a reader can come to the same conclusions as the report writers. The following is excerpted from Army press reports that accompanied the Senior Review Panel’s report on Sexual Harassment in the Army, as well as from the report’s executive summary: Sexual harassment exists throughout the Army, crossing gender, rank, and racial lines; gender discrimination is more common than sexual harassment. Army leaders are the critical factor in creating, maintaining, and enforcing an environment of respect and dignity in the Army; too many leaders have failed to gain the trust of their soldiers. The Army lacks institutional commitment to the Equal Opportunity program, and soldiers distrust the equal opportunity program. Trainees believe the overwhelming majority of drill sergeants and instructors perform competently and well, but “respect” as an Army core value is not well institutionalized in the Initial Entry Training process. Recommendations of the panel were broad-based and covered a wide variety of Army processes, including leader development, equal opportunity policy and procedures, initial entry training soldierization, unit and institutional training, command climate, and oversight. “The panel studied the full training cycle including recruiting, basic training, and advanced skills training. Its recommendations covered the training cadre, housing of recruits, fitness programs and follow-on advanced training. Among the several recommendations made for recruiting, the panel proposed better preparing recruits mentally and physically for basic training. 
It also recommended ways to improve the training cadre. It recommended that physical training requirements be toughened and made more uniform throughout the services. The panel also suggested that the emphasis on discipline be carried over from basic to advanced training. The panel recommended that values training be incorporated into all initial entry training programs and that training receive more resources. “During visits to training installations, the panel concluded that men and women should be housed in separate barracks and train separately at the operational unit level — the Army platoon, the Navy division and the Air Force flight. In the Marine Corps, men and women live, eat, and train separately. The panel recommended that gender-integrated training continue for field training, technical training and classroom work.” “The scope of DACOWITS’ training installation visits included all elements of initial entry training, including basic training, advanced individual training, and officer advanced training. The majority of issues raised by trainees, trainers, and supervisors of trainers were similar across all of the Armed Forces. “The issues most frequently raised by women and men, and trainees and trainers alike, were the artificial gender relationships imposed at training installations; the persistence of gender-discriminatory behaviors at many locations; the relationship between trainer attitudes and gender climates; the undervaluation of trainers, especially women trainers; the need for greater gender integration to train field- and fleet-ready servicemembers; the need to increase physical training opportunities and standards; the need to improve screening of new recruits and to harmonize recruiting quality and practices; the underresourcing of training schools; and the need to improve support services for women trainees.” The first copy of each GAO report and testimony is free. Additional copies are $2 each. 
Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000, by using fax number (202) 512-6061, or by TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.

GAO reviewed three studies on gender-related issues affecting initial entry training in the Department of Defense (DOD), focusing on: (1) how the groups conducted their work; (2) how well the work supported making conclusions and recommendations; (3) the availability of documentation supporting the report; and (4) the extent to which the final report described the study methodology and disclosed limitations. 
GAO noted that: (1) the Army's Senior Review Panel on Sexual Harassment used four methods to collect data: individual interviews, focus groups, surveys, and observations; (2) during its 8 months of work, the panel visited 59 installations worldwide, conducted interviews with 808 military and civilian Army personnel, ran focus groups with over 8,000 soldiers and civilians, and surveyed 22,952 individuals; (3) the use of multiple methods of data gathering, the rigor with which the various methods were conducted, and the publication of the data in the report provide ample support for making conclusions and recommendations; (4) the Federal Advisory Committee (FAC) on Gender-Integrated Training and Related Issues used focus groups as its primary method of data gathering; (5) although FAC conducted over 300 focus groups and individual interviews, their value for making conclusions and recommendations is limited because the Committee did not: (a) systematically collect the same information from all groups; (b) document the information generated in each of the interviews and focus groups; or (c) explain how what was heard in the interviews and focus groups led to its conclusions and recommendations; (6) in addition, the length of the focus group sessions, the number of participants, and the number of questions addressed may not have provided adequate time for full participation of the respondents on all issues; (7) given these limitations, the extent to which the Committee's work supports its conclusions and recommendations cannot be determined; (8) the Defense Advisory Committee on Women in the Services (DACOWITS) also used focus groups of trainees, trainers, and supervisors in the Army, Air Force, Navy, Marine Corps, and Coast Guard to identify what issues concerned women and men at training installations; (9) members of the DACOWITS held focus group discussions at 12 schools at 9 installations in the United States and prepared a summary report of the results at each 
installation; (10) the DACOWITS Chair used these summary reports to prepare a report to the Secretary of Defense that accurately reflected the opinions and perceptions cited in the individual installation reports; (11) the DACOWITS focus groups (a) were larger than recommended in the literature, (b) were sometimes not long enough to allow meaningful participation, and (c) were not recorded or documented on a group-by-group basis; (12) the DACOWITS report summarized the opinion and perception data obtained from the focus groups; and (13) it made no conclusions or recommendations on military training based on that information.
EPA defines brownfields as abandoned, idled, or underused industrial or commercial sites where expansion or redevelopment is complicated by real or perceived environmental contamination. Usually, the contamination is less extensive than at sites on EPA’s priority list for cleanup. We have reported that liability and other concerns have deterred many potential developers from using brownfields and that, instead, they use uncontaminated sites in suburban areas referred to as greenfields. The Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) authorizes EPA to clean up hazardous waste sites and to compel parties responsible for contamination to perform or pay for the cleanups. Developers’ avoidance of brownfields has contributed to a loss of employment opportunities for city residents, a loss of tax revenues for city governments, and an increase in urban sprawl. To encourage more redevelopment of brownfields and promote cleanups, the Congress has considered several legislative proposals, both as separate bills and as part of legislation to reauthorize the Superfund program, that would help address liability concerns and provide economic incentives. In November 1993, EPA introduced the Brownfield Economic Redevelopment Initiative, for which the Outreach and Special Projects Staff—referred to in this report as the brownfield program office—has primary responsibility. This office reports to the Assistant Administrator for Solid Waste and Emergency Response, who also manages the Superfund program. The brownfield initiative is a commitment by EPA to help communities revitalize brownfields, both environmentally and economically, and mitigate potential health risks. 
EPA has begun four major efforts to implement this initiative: (1) providing grants for brownfield pilot projects for site assessment and cleanup planning; (2) clarifying liability and other issues associated with cleaning up sites to return them to productive use; (3) building partnerships and outreach for brownfield redevelopment among federal agencies, state and local governments, and communities; and (4) fostering local job development and training initiatives related to brownfield activities. EPA funds assessment pilot projects through cooperative agreements with state, local, and tribal governments, which use the funds to assess, identify, characterize, and plan cleanup activities at contaminated sites targeted for redevelopment. In general, an individual recipient can get a one-time grant of up to $200,000. EPA began funding these pilot projects in September 1993. At the time of our review, the agency had used $21 million to fund 121 projects in 41 states, including 45 projects in fiscal year 1997. EPA plans to fund an additional 100 projects in fiscal year 1998. EPA provided financial support to state, local, and tribal governments to help them create revolving loan funds that would provide low-interest loans to public and private entities for site cleanups. Any site that had been formally assessed before October 1, 1995, to characterize the nature and extent of contamination could be eligible for a loan. EPA allocated $8.4 million in fiscal year 1997 funds to 24 state, local, and tribal governments to begin these revolving funds. Because the Congress directed EPA not to use any of its fiscal year 1998 funds on this program activity, EPA is planning to reallocate $35 million it intended to use on revolving loan funds to some of the remaining program categories. EPA provides state and tribal governments with funds to enhance and develop voluntary cleanup programs that states often use to clean up brownfields. 
States have used fiscal year 1997 funds for such activities as (1) completing regulations for voluntary cleanup programs, (2) purchasing equipment to support program administration, (3) paying the salaries of agency staff to develop program procedures, (4) helping states and tribes to build their own capacity to oversee cleanups, and (5) promoting greater community involvement. In fiscal year 1997, EPA allocated $9.4 million for programs to assist 42 states and two tribal governments; in fiscal year 1998, the agency plans to allocate $15 million for programs to assist all 50 states and more tribal governments. EPA uses funds from the program category for targeted site assessments to pay either its contractors or the states through cooperative agreements to identify the extent of contamination at those sites where the work can be performed faster and more cheaply than if done by the local governments. EPA regions reported that they used $2.3 million in fiscal year 1997 to fund 27 targeted site assessments and that they plan to fund an additional 30 assessments with the $3 million budgeted for fiscal year 1998. EPA enters into environmental job training grants and agreements with educational institutions and professional organizations for (1) environmental curriculum development that incorporates brownfields, (2) community outreach and information dissemination on brownfields, and (3) job training in hazardous waste cleanup and employment assistance at cleanup sites. Since fiscal year 1993, EPA has allocated a small portion of general funds to make five job training awards, three of them in fiscal year 1997. The agency plans to make an additional 10 awards with a $2.8 million increase in Superfund resources that was made available for job training in fiscal year 1998. 
Through an interagency agreement, the remaining $3 million that had been budgeted for environmental job training has been allocated to the National Institute for Environmental Health Sciences to provide training on such issues as workers’ safety. EPA funds outreach to constituents affected by brownfields; technical assistance to state, local, and tribal governments on brownfield redevelopment; and brownfield-related research. Typically, the agency awards grants and agreements to educational, governmental, research, and community organizations to, among other things, disseminate information and conduct research on issues related to site redevelopment and potential health risks from contaminants. During fiscal year 1997, EPA funded 11 such grants and agreements. The decrease in funding from $6.3 million in fiscal year 1997 to $4.5 million in fiscal year 1998 reflects EPA’s plan to fund more agency personnel to manage the increased number of assessment pilots. EPA’s Office of Policy, Planning, and Evaluation awards agreements and contracts to research and community organizations to provide analytical tools and products for urban development and brownfield activities. For example, EPA awarded a $45,000 cooperative agreement in fiscal year 1997 for a 2-day conference and workshop that included some discussion of brownfield issues specifically affecting developers and lenders. The office awarded four agreements totaling $183,000 and five contracts totaling $422,000 in fiscal year 1997. The office plans about the same level of activity for fiscal year 1998. In fiscal year 1997, EPA assigned approximately 33 employees in headquarters and field offices to manage brownfield activities at a cost of $2.5 million; in fiscal year 1998, the agency plans to almost double the amount of funds and increase staff to 57 employees. 
The managers within the Outreach and Special Projects Staff—referred to in this report as the brownfield program managers—explained that EPA needs more staff to manage the increasing number of grants, cooperative agreements, and pilot projects to state, local, and tribal governments. Although recipients do not have to compete against each other for funds from either EPA’s outreach, technical assistance, and research program category or its job training program category, the agency has established criteria and set up an approval process to award funds for brownfield activities. EPA’s Outreach and Special Projects Staff awarded funds to nonprofit organizations if their unsolicited proposals addressed one of the following four broad criteria: increase community involvement in brownfields; promote the redevelopment of brownfields; provide for site assessment and cleanup; and promote the principle of sustainable development—that future economic well-being depends on the ability to sustain a healthy environment and productive, renewable natural resources. The managers said they often rejected proposals that did not meet at least one of the four criteria, but they could not document the number and type of rejected proposals. According to the brownfield program managers, they used the following process to approve the 24 awards we reviewed. If a proposal met at least one of the four criteria, it went through an internal EPA and, under some circumstances, an external review process. The brownfield program managers first checked their computerized tracking system of all federally funded outreach, technical assistance, research, and job training activities to ensure that the proposal would not duplicate ongoing awards. They then sent various proposals to other EPA offices, such as the Office of Research and Development and the Office of Policy, Planning, and Evaluation, that had conducted similar activities for their concurrence. 
They also sent proposals to the Office of General Counsel (OGC) to determine if the action complied with existing law, although they were not required to do this to approve an award. Furthermore, they sent certain proposals to other federal agencies, such as the Department of Housing and Urban Development, that had conducted similar activities for review of the proposals’ technical and scientific merit. The brownfield program managers explained that EPA did not use a process whereby organizations had to compete for outreach, technical assistance, research, and job training funds as it used to make funding awards in some of the other brownfield program categories, such as assessment pilot projects. This is because, generally, the organization submitting an outreach or job training proposal serves a unique group of constituents that is affected by brownfields or has unique brownfield expertise. EPA guidance allows the agency to use unique qualifications as a justification for a noncompetitive award. The EPA brownfield program managers maintained that going through the expense of widely publicizing available funding and conducting a competitive process to screen hundreds of applications is not cost-effective, especially given the small amounts of the awards. For example, these managers explained that one recipient, the International City/County Management Association, represents city and county managers nationwide whose jurisdictions are directly affected by brownfields. EPA believes this association could more quickly poll its members to determine what brownfield assistance they need EPA to provide and more quickly disseminate information to them about successful brownfield redevelopment efforts than EPA could. According to the program managers, EPA also provided a cooperative agreement to the Institute for Responsible Management because its director has years of experience in brownfields. 
They explained that because of this experience, the director can help the pilot communities organize themselves and focus on brownfield cleanup and redevelopment options. The director can also provide research, information, and troubleshooting to these groups as well as document the lessons learned and success stories so other communities can benefit from them. EPA has used the same approval process for the five grants it had awarded for job training at the time of our review. For example, the agency provided funds in fiscal year 1997 to the Hazardous Materials Training and Research Institute at Eastern Iowa Community College District to conduct workshops for community college faculty on how to build environmental curricula for job training, especially relating to cleaning up contaminated sites. According to the program managers, this award was made because of the Institute’s success in developing training programs through awards from EPA’s Office of Research and Development. They said that EPA is now working on a strategic plan for its training activities and will use it to determine whether to fund future job training proposals. The 24 awards made since 1993 that we reviewed with brownfield-related activities totaled $9.6 million. These funds came from the allotment to the brownfield program office, the Superfund Trust Fund, and general funds from either the brownfield or other EPA program offices. Recipients used these awards to provide outreach, technical assistance, research, and job training to support both brownfields specifically and Superfund or other programs more generally. 
We determined that about $3.7 million of these funds were for the following more specific brownfield activities, although some portion of the activities provided by the remaining funds could also indirectly benefit brownfields: issue reports or other documents on redevelopment activities; sponsor forums, conferences, or other meetings to disseminate research regarding brownfield issues and policies; conduct or sponsor workshops on brownfield issues or policies and on developing environmental curricula for job training related to hazardous waste cleanup; conduct research on brownfield and redevelopment issues, such as insurance coverage for entities conducting cleanups; and establish or develop programs to identify barriers to brownfield development. Recipients also used the awards to perform other activities, including the development of educational materials or tools and databases on redevelopment case studies. Appendix II provides a more detailed description of the activities funded under each of the 24 grants and agreements. In conducting these activities, recipients have spent most of the awarded funds on (1) their own personnel costs, including fringe benefits; (2) indirect costs, such as overhead; and (3) contractual services, such as any consultants used. They also have spent smaller portions of their funds on expenses for travel to enable their staff and participants to attend conferences and forums; equipment, such as copying machines; and supplies. Our review of the files for each of the 24 awards and our interviews with various members of the Outreach and Special Projects Staff responsible for managing some of the individual awards showed that project officers were monitoring recipients’ activities. 
This monitoring consisted primarily of project officers’ making periodic telephone calls to recipients to discuss the status of funded activities, attending some of the functions sponsored by the recipients, meeting with recipients at EPA headquarters, and reviewing quarterly and final reports that the recipients were required to submit to EPA. In these reports, recipients give detailed descriptions of the activities that were accomplished under their awards, and, in some cases, describe the status of the overall budget, if EPA had made this a specific reporting requirement. While EPA’s Office of Administration and Resources Management encouraged project officers to conduct both on-site visits to recipients and more formal semiannual or annual project reviews, the files for our sample of 24 awards did not document that project officers were conducting these activities. Although the brownfield program managers stated that project officers were meeting informally with recipients, the project officers for two of the three recipients we audited had not visited them. The brownfield program managers explained that because of the relatively small monetary value of these awards, ranging from $20,000 to $2.7 million with a median of $168,000, the formal on-site visits were not cost-effective and that more formal reviews were not necessary because the project officers’ other monitoring activities were adequate. Once a grant or agreement has been completed, each project officer is also responsible for conducting a final closeout review to determine whether the recipient has completed all technical work and met all requirements before EPA makes or denies the final payment to the recipient and recovers any unused funds. We determined that 2 of the 11 completed awards in our sample were due to be closed out—closeout must occur within 180 days of a completed grant or agreement—and EPA had conducted both closeouts. 
For example, to close out a cooperative agreement issued to the Northeast-Midwest Institute, whose project period ended on March 31, 1997, the recipient submitted its final financial status report on April 25, 1997, certifying that it had spent the funds. The project officer for this cooperative agreement reviewed this report along with the recipient’s final quarterly report to close out the agreement on May 5, 1997. Project officers are not required to conduct a detailed financial audit of the recipients’ expenditures as part of the closeout review. The brownfield program managers stated that an audit would not be cost-effective because the awards have relatively small monetary values. Instead, EPA’s regulations require recipients to maintain supporting financial records of all expenditures, such as receipts and invoices, on-site for 3 years after completion of a grant or agreement. During that period, recipients can be subject to an audit either by EPA’s OIG or a single audit under provisions in OMB Circular A-133, entitled “Audits of States, Local Governments, and Non-Profit Organizations.” According to this guidance, a recipient that spends at least $300,000 in federal funds in 1 year shall have a single or program-specific audit conducted for that year. The federal agency that has provided the most funds to the recipient for that year is responsible for coordinating that audit. The grant and agreement files we reviewed contained information that verified such single audits were being conducted; however, EPA’s awards were not sampled during these audits because of their relatively low monetary amounts. In addition, EPA OIG staff stated that they were unlikely to audit these grants and agreements unless they received information of wrongdoing. In our detailed on-site audit of the financial records for three recipients, we determined that, overall, they were spending the funds in accordance with OMB’s guidance. 
During our review of agency files for the 24 awards, we noted that EPA’s OGC had cautioned the program offices that initiated the awards that external reviewers might determine that some of the activities were not allowable under the statute EPA had used to make the awards. If so, recipients would have to return all funds, even if they had completed the agreed-to activities. EPA used section 311(c) of CERCLA as authority for awarding at least portions of 14 of the awards we reviewed. This section authorizes EPA to use grants, cooperative agreements, and contracts to conduct and support research on ways to detect hazardous substances and evaluate the risks they pose to human health. However, in internal memorandums to the program offices that initiated 9 of the 14 awards, EPA’s OGC stated that, while section 311(c) can be construed to authorize those awards, it did not explicitly authorize the proposed activities and thus warned that subsequent reviewers could question whether those activities were really health-related research and disallow the expenditures. For example, OGC raised this issue on a $1 million cooperative agreement authorized under section 311(c), whose recipient conducted meetings and training and issued publications to educate local communities on issues regarding Superfund, brownfields, and special concerns of minority communities located near hazardous waste sites. OGC has encouraged the program offices to seek explicit statutory authority from the Congress for the activities funded through the nine awards. According to the brownfield program managers and OGC representatives, because the statutory language is relatively broad, EPA has interpreted it to authorize the use of funds for these types of brownfield research activities. They also said that as a result, OGC did not disapprove the awards and the program offices went forward with them. 
Because the activities being conducted are mainly related to the Superfund program and funded with trust fund money, EPA has had to use CERCLA authority to make these awards rather than other environmental statutes, even though these other statutes more clearly provide for the types of sociological, economic, and policy research EPA has conducted with these awards. The program managers stated that although the administration, in its 1994 Superfund reauthorization proposal, did include language to clarify the authority, the bill did not pass, and the agency is considering whether to pursue clearer statutory authority through other means. We did not try to independently determine whether the nine awards were made in accordance with CERCLA because EPA’s OIG is addressing this issue as part of an ongoing review covering a broader sample of grants and agreements across numerous EPA programs and environmental statutes. We provided copies of a draft of this report to EPA for review and comment. The agency generally agreed that the report accurately describes EPA’s brownfield activities. (See app. III for a copy of EPA’s comments.) The agency asked us to clarify that it used various statutory authorities to fund the different types of brownfield activities it conducted. For example, the agency noted that it used either CERCLA section 311(c) or RCRA section 8001 to make awards for brownfield research. The authority used depended on whether the activities were to detect and assess hazardous substances and evaluate their effects on the environment or were related to more general solid and hazardous waste management activities. In response, where appropriate, we noted the statutory bases used to fund brownfield awards. The agency also noted that only portions of several of the awards we discuss in appendix II, such as its award with the University of Maryland at Baltimore for training regarding hazardous substances, are being used for brownfield activities. 
We had already noted this in several sections of the report because the scope of our work included any award that supported brownfield activities, either wholly or in part. Finally, EPA suggested several technical changes to the report, which we incorporated where appropriate. We performed our work from July 1997 through February 1998 in accordance with generally accepted government auditing standards. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to the EPA Administrator and other interested parties. We will also make copies available to others upon request. If you or your staff have any questions, please contact me at (202) 512-6111. Major contributors to this report are listed in appendix IV. The Chairman of the House Committee on Commerce asked us to review EPA’s brownfield expenditures. Specifically, we were to determine (1) what activities EPA supported with the funds it targeted for brownfields in its fiscal years 1997 and 1998 budgets; (2) what criteria and approval process EPA used to award grants and agreements within its program categories for Outreach, Technical Assistance, and Research and Job Training that included brownfield funds or activities since 1993; (3) how recipients used the funds provided by the awards in these two categories; and (4) how EPA monitors and oversees these grants and agreements. To determine the activities EPA supported with its brownfield allotments and the criteria, approval process, and monitoring applied to outreach and job training awards, we contacted EPA brownfield program managers within the Outreach and Special Projects Staff, the office with jurisdiction for brownfield activities within the Office of Solid Waste and Emergency Response. 
We also obtained program data from two of EPA’s databases, its overall grants database and its specific brownfield awards database, and funding data from the agency’s supporting budget documents. To determine how recipients used funds in the outreach and job training program categories, we reviewed EPA’s files for 24 of the 30 grants and agreements containing some brownfield funding or activities that EPA had awarded in these two categories between fiscal years 1993 through 1997. For the remaining six grants and agreements, which EPA had awarded late in fiscal year 1997, we did not receive information in time to review EPA’s files. To obtain additional information for each grant or agreement, we contacted EPA’s Grants Administration Office, which has oversight jurisdiction for grants and agreements, and EPA’s Financial Management Office. To further test how recipients had used the funds, we selected 3 of the 11 completed outreach and job training grants and agreements for a more detailed, on-site financial audit. We focused on completed, rather than ongoing, grants and agreements because they would allow us to more fully cover all our review objectives, including questions on oversight. We focused on grants and agreements from 1993 through 1996 because 1993 was the earliest year EPA had made brownfield awards, and awards from 1997 did not have a long history of recipients’ expenditures. We audited three awards with the highest total dollar value—one within the EPA program category of outreach, technical assistance, and research; the second within the EPA program category of job training; and a third that had been awarded by an EPA regional office within either category to determine if the region used criteria and oversight that differed from headquarters. 
We discussed our selection of awards with EPA’s Outreach and Special Projects Staff, and these program managers concurred with our selection of awards made to the Hazardous Materials Training and Research Institute at Eastern Iowa Community College District, the International City/County Management Association, and the Northeast-Midwest Institute. For the detailed audit of the three completed awards, we (1) interviewed the technical and financial managers with responsibilities for those awards; (2) reviewed a majority of the records, invoices, receipts, and other documentation that justified the expenditures in each of the budget categories; (3) determined the purpose of those expenditures; and (4) determined whether those expenditures had been made in accordance with Office of Management and Budget (OMB) circulars. This guidance included OMB Circular A-21 (rev. August 29, 1997) and OMB Circular A-122 (rev. August 29, 1997). Major activities covered by award Develop an organization to facilitate and help ensure full tribal participation in EPA’s decision-making process on waste management issues that will affect tribal health and the environment. Provide outreach and technical assistance to identify environmental issues to help ensure full tribal participation in the brownfield pilot application process. Promote ways to clean up and redevelop brownfield sites through such activities as conducting roundtable meetings to address various brownfield issues (e.g., barriers to redevelopment and innovative approaches for brownfield revitalization), developing and maintaining a national brownfield redevelopment database, and establishing a network of local officials to serve as technical experts on issues related to brownfield cleanup. Study and report on the relative importance of environmental hazards and regulatory requirements as barriers to brownfield redevelopment. 
Analyze the potential effects of a proposed residential capital gains tax cut and develop a workshop to market this provision for brownfield redevelopment. Study and report on policies and programs that can help reduce health risks and other problems associated with brownfield redevelopment, and identify and quantify the reduced developmental pressures on greenfields. Develop publications from a series of forums on Superfund effects on local communities. Develop a consortium and guidance manual on base closures. Produce two videos on ways governments can work together to clean brownfield sites. Develop an independent membership organization to promote environmentally and economically smart development decisions. The organization will support members by researching policies and tools on brownfield redevelopment and serve as a clearinghouse for information and peer exchange. Conduct meetings and training and issue publications to educate local governments and communities on various issues associated with contamination at hazardous waste sites, including international and other brownfield issues and local government involvement at Superfund sites. Through the creation of the Joint Center for Sustainable Communities, provide local elected officials with advice, information, and financial support on sustainable community development issues, such as brownfield redevelopment and curbing urban sprawl. Develop a model brownfield redevelopment plan for member mayors to adapt to their unique situations. Hold six consultations to increase the awareness of locating hazardous waste sites in low-income neighborhoods and communities of color, and publish the results and findings of those consultations. Conduct research, convene meetings, provide training, and issue publications on state and EPA issues related to the Superfund program, such as state requirements for cleanup programs and brownfield revitalization. 
Research and publish 20 case studies on the cleanup and reuse of brownfield sites and share the results with targeted groups of local leaders through at least two constituent meetings. To help reduce the health risks and other problems associated with brownfields, conduct a series of activities, including monitoring changes to brownfield and other cleanup legislation and publishing information on these changes to educate constituents, and publishing “how-to” booklets on environmental site cleanup, workforce development, and other issues important to brownfield cleanup and redevelopment. Conduct conferences, develop models, and issue research papers on federal barriers to brownfield redevelopment and ways to achieve smart growth while protecting public health. Conduct forums to promote public discussion on brownfield redevelopment. Form a Brownfield Oversight Community Action Team to learn about and monitor the progress of community brownfield cleanups, as well as educate communities and publicize information on associated health effects, redevelopment barriers, and other brownfield issues. Compile and disseminate information, such as lessons learned, on EPA’s brownfield pilot projects. Assist members in participating in community-based brownfield redevelopment activities through public dialogues, research, and other outreach initiatives, and in establishing and maintaining a national brownfield internet site. Sponsor a conference and workshop to make developers, lenders, and local governments aware of smart growth (i.e., environmentally and economically smart decisions), brownfield redevelopment, and other issues. Publish a catalogue of organizations that focus on issues directly related to sustainable development and identify actions to overcome its barriers. 
Conduct several workshops for community colleges on opportunities for environmental education and training, and provide on-going follow-up and technical assistance to colleges in such issues as brownfield redevelopment. Deliver training to small and minority-owned contractors that remove hazardous waste from contaminated sites, including brownfield sites. Develop a curriculum to educate law students and practicing attorneys on a variety of human health and environmental protection issues, including brownfield redevelopment. Note 1: Unless otherwise noted, funds were provided by EPA’s Outreach and Special Projects Staff. Note 2: Funding associated with brownfield activities includes funding from Superfund, EPA’s general funds, and other sources. An interagency agreement with the Department of Housing and Urban Development. Harriet Drummings, Staff Evaluator 
Pursuant to a congressional request, GAO reviewed the grants and agreements the Environmental Protection Agency (EPA) awarded since fiscal year 1993 under two program categories of brownfield expenditures, focusing on the: (1) criteria and process EPA used to award these grants and agreements; (2) uses recipients made of these funds; and (3) monitoring and oversight EPA provided for them. GAO noted that: (1) EPA primarily uses its brownfield funds to help state, local, and tribal governments build their capacities to assess, clean up, and revitalize brownfield sites; (2) EPA is using the majority of its $126 million in brownfield funds for fiscal years 1997 and 1998 to: (a) help these groups identify, assess, characterize, and develop clean-up plans for brownfield sites; (b) provide them with seed money to create revolving loan funds that they could use to award low-interest loans for cleanups; (c) support state development of programs that provide incentives for voluntary cleanup of sites, especially brownfields; and (d) provide outreach to groups affected by brownfields, technical assistance to them on clean-up and redevelopment methods, and research for them on brownfield issues; (3) EPA is using the remaining brownfield funds for support and other program activities, such as EPA's personnel costs; (4) EPA set up four broad criteria and an approval process to award funds noncompetitively to nonprofit organizations for their unsolicited proposals to provide outreach, technical assistance, research, and job training; (5) the criteria included increasing community involvement at brownfields, promoting redevelopment, providing for site assessments, and sustaining a clean environment for the future; (6) if the proposals met one of the four criteria, the managers responsible for most of the brownfield activities explained that they generally would fund the proposals if the nonprofit organization represented unique constituents affected by brownfields, such as tribes, 
or offered unique brownfield expertise or experience; (7) although EPA has used the same process and criteria to award a few job training grants and agreements, it is developing a strategic plan to use as criteria for making future awards; (8) since fiscal year 1993, award recipients have used $3.7 million from the 24 outreach and job training awards GAO reviewed to conduct brownfield-specific activities; (9) award recipients used additional funds from the 24 awards for outreach and job training activities in support of the broader Superfund program or other EPA programs, although some of these activities would also indirectly help promote brownfield cleanup and redevelopment; and (10) EPA staff responsible for managing the 24 awards GAO reviewed were monitoring the overall status of the budget for each award and the content and quality of recipients' activities through various means.
The key federal agencies involved in emergency preparedness at commercial nuclear power plants include NRC and FEMA. NRC makes radiological health and safety determinations on the overall state of emergency preparedness for a commercial nuclear power plant site—both at the plant (on-site) and in the area surrounding the plant (off-site). In addition, FEMA is responsible for providing guidance and assistance to local and state authorities and for assessing off-site radiological emergency preparedness and communicating those assessments to NRC. EPA also supports radiological emergency preparedness by developing a radiation guide that helps licensee officials and local and state authorities make decisions during a radiological incident at a commercial nuclear power plant. In 1978, a joint NRC and EPA task force issued guidance that provided a planning basis for off-site preparedness around commercial nuclear power plants and, in this guidance, the agencies established two emergency planning zones. NRC defines emergency planning zones as areas for which planning is needed to ensure that prompt and effective actions can be taken to protect the public in the event of a radiological incident at a nuclear power plant. In 1980, NRC and FEMA directed that this emergency planning zone guidance should be incorporated into emergency preparedness documents and planning. The 1978 guidance established the following two emergency planning zones: 10-Mile Plume Exposure Pathway Emergency Planning Zone. According to the guidance, the principal health risks in this zone include direct exposure to radiation and inhalation exposure from the passing radioactive plume. For the plume exposure pathway, evacuation or shelter in place should be the primary protective actions. The radius for the emergency planning zone implies a circular area, but the actual shape can depend upon the characteristics of a particular site. 
For example, local authorities around the Limerick Generating Station in Pennsylvania told us that the Pennsylvania Turnpike was used as a boundary for part of the 10-mile zone. This road established a well-known landmark as a boundary that could be referenced when communicating instructions to the public about a radiological incident. 50-Mile Ingestion Exposure Pathway Emergency Planning Zone. According to the guidance, the principal health risk in this zone is exposure from ingesting contaminated water or foodstuffs such as milk, fresh vegetables, or fish. In this pathway, health risks would come from longer term problems associated with contaminated food and water. Early actions to prevent contamination should include removing cows from pasture and putting them on stored feed. According to NRC officials, the 10-mile and 50-mile emergency planning zones established in 1978 remain adequate, as indicated by recent NRC studies examining potential consequences at two nuclear power plants, as well as public health impacts from the March 1979 incident at Three Mile Island in Pennsylvania and the 2011 Fukushima incident. According to NRC officials, the use of potassium iodide to protect against the effects of radiation is recognized as an effective supplement to evacuation for situations involving radiation releases when evacuation cannot be implemented, or if exposure were to occur as a result of evacuation. Potassium iodide’s usefulness as a protective measure is limited and only affords protection for the thyroid gland from an internal radiation exposure of radioactive iodine. It does not protect the body from radiation exposure or radiation dose. Protective action decisions depend on a number of factors, including the projected beginning and duration of the radiological release, composition and direction of the release, weather conditions, and time of day. 
According to FEMA guidance, certain weather conditions, plume direction, or an event caused by a terrorist attack may pose an undue risk to evacuation and could make sheltering in place the preferred protective action. The incident at the Three Mile Island nuclear power plant near Middletown, Pennsylvania, on March 28, 1979, was the most serious in U.S. commercial nuclear power plant operating history, even though it led to no deaths or injuries to plant workers or members of the nearby community. In the incident, equipment malfunctions, design-related problems, and worker errors led to a partial meltdown of the core of one reactor and small off-site releases of radioactivity. Following the March 1979 incident, the White House transferred the federal lead role for off-site emergency planning and preparedness activities from NRC to FEMA and, in 1980, NRC and FEMA entered into a memorandum of understanding to establish a framework of cooperation in matters of planning of radiological emergency preparedness. This memorandum created a joint NRC and FEMA Steering Committee to implement and maintain these efforts. FEMA was directed to coordinate all federal planning for the off-site impact of radiological incidents and take the lead for assessing local and state authorities’ radiological emergency response plans, make findings and determinations on the adequacy and capability of implementing off-site emergency plans, and communicate those findings and determinations to NRC. NRC agreed to review those FEMA findings and, in conjunction with NRC findings for licensees’ emergency plans, determine the overall state of emergency preparedness. NRC uses these overall determinations to make radiological health and safety decisions when it issues licenses to nuclear power plants and during continuous monitoring of the overall state of radiological preparedness. 
To manage its new responsibility for off-site emergency planning and preparedness in areas around commercial nuclear power plants, FEMA established the REP program. The REP program coordinates FEMA’s effort to provide policies and guidance to local and state authorities to ensure that they have adequate capabilities to respond to and recover from a radiological incident at a commercial nuclear power plant. The REP program has two funding sources: (1) a flat fee paid by licensees that is the same for each power plant and (2) variable fees paid by licensees to cover the cost of REP program activities associated with the exercises that each nuclear power plant and relevant local and state authorities must conduct every 2 years to demonstrate the capabilities in their radiological emergency response plans. Local and state participation in off-site radiological emergency preparedness is voluntary, but participation in the program necessitates that local and state authorities adhere to the program’s requirements set forth in federal regulations and guidance. FEMA officials told us that all local and state governments that have a 10- or 50-mile commercial nuclear power plant emergency planning zone within their boundaries participate in the REP program. If local and state authorities opted not to participate in the REP program, licensees would have to demonstrate sufficient capabilities to fulfill off-site emergency response responsibilities. In the aftermath of the Fukushima incident, NRC established the Near-Term Task Force in March 2011. The goal of the task force was to review NRC processes and regulations to determine whether additional improvements were needed to NRC’s regulatory system and to make recommendations for these improvements. The task force review resulted in 12 recommendations to NRC, including 3 related to strengthening radiological emergency preparedness. 
For example, the task force observed gaps in public awareness in the United States following the incident at Fukushima. It recommended that, as part of a follow-on review, NRC should pursue emergency preparedness topics related to decision making, radiation monitoring, and public education, particularly to increase education and outreach in the vicinity of each nuclear power plant in the areas of radiation, radiation safety, and the appropriate use of potassium iodide. With regard to the size of the emergency planning zones, NRC officials told us that the task force considered the existing planning structure, including the 10-mile plume exposure pathway and 50-mile ingestion exposure pathway emergency planning zones, and found no basis for recommending a change. NRC and FEMA are responsible for guiding licensees and local and state authorities in radiological emergency preparedness. Specifically, NRC and FEMA’s regulations and guidance establish the framework for on-site and off-site radiological emergency preparedness. Licensees manage preparedness on-site, and local and state authorities manage preparedness off-site. NRC and FEMA’s regulations for radiological emergency preparedness, originally issued in 1980 and 1983, respectively, are based upon 1980 guidance developed by a joint NRC and FEMA Steering Committee. The regulations include 16 planning standards that nuclear power plants and local and state authorities are to address in their radiological emergency response plans. 
The standards require such actions as assigning responsibilities for the licensee and local and state authorities within the 10-mile emergency planning zone; establishing procedures on when and how the licensee is to notify local and state authorities and the public; developing a range of protective actions for emergency workers and the public within this emergency planning zone, such as evacuations or recommendations that the public remain indoors; and providing and maintaining adequate emergency facilities and equipment to support the emergency response. (App. I lists the 16 planning standards.) NRC and FEMA have published four supplements that support and expand on the 1980 guidance, including a 2011 supplement that provides guidance to (1) licensees on how to develop site-specific procedures for protective action recommendations and (2) local and state authorities on how to prepare for protective actions. NRC and FEMA officials told us that they are in the early stages of developing new guidance to update the planning standards and associated guidance originally developed in 1980—an effort that they expect will take 4 to 5 years. FEMA has issued guidance for local and state authorities that includes a 2012 update to the REP program manual, the principal source of policy and guidance relating to off-site radiological emergency preparedness. The manual interprets the planning standards established in 1980 and provides additional detail to local and state authorities on what FEMA expects them to include in their radiological emergency response plans. In addition, the manual provides criteria that FEMA uses to evaluate the ability of local and state governments to implement their emergency plans. Numerous FEMA and local officials we spoke with said that counties that participate in REP planning are much better prepared for other nonradiological emergencies, like hurricanes, because of the planning and exercises required by the REP program. 
Licensees are responsible for managing on-site radiological emergency preparedness and developing and maintaining radiological emergency response plans that define specific actions and activities that the nuclear power plant must take to prepare for and respond to a potential incident at the plant. For example, according to the NRC regulations, the licensee must have met certain requirements, including the following: defined on-site emergency response staff responsibilities and maintained adequate staffing in key areas at all times, developed a standard emergency classification and action level scheme to help local and state authorities in determining initial off-site response measures, established notification procedures for the licensee to communicate emergency information to off-site local and state emergency organizations, and provided adequate methods and systems for assessing and monitoring actual or potential off-site consequences from a radiological incident. Under the NRC regulations, the licensee is also responsible for recommending protective actions during a radiological incident, generally to be implemented by the local and state authorities responsible for off-site radiological emergency preparedness (for example, recommending which communities should evacuate and which should shelter in place), to minimize or avoid exposure to a radiological release. The licensee is to make these protective action recommendations based on specific plant conditions during the emergency and the potential for, or actual amounts of, radiation being released into the atmosphere. For example, representatives of the licensee at the San Onofre Nuclear Generating Station in San Clemente, California, told us that they consider wind direction, evacuation impediments, and information about how long the radiological release would occur and its projected dose when making protective action recommendations. 
FEMA regulations and guidance apply where local and state authorities take responsibility for managing off-site radiological emergency preparedness efforts for the public near nuclear power plants. Specifically, local and state authorities develop radiological emergency response plans for their jurisdictions using the planning standards and guidance detailed in the REP program manual. These off-site plans are to define specific actions and activities that local and state emergency response organizations should take to protect the public from a potential incident at the nearby nuclear power plant. For example, according to FEMA regulations and guidance, appropriate local and state organizations are to take a number of planning actions, including the following: identifying and assigning the principal roles, such as the responsibilities for the emergency management and law enforcement personnel who lead the emergency planning, preparedness, and response functions, and the support roles for federal agencies (e.g., FEMA) and volunteer organizations (e.g., the American Red Cross); coordinating classifications for different levels of emergencies and protective action strategies that are consistent with those established by the nearby nuclear power plant; establishing and describing the methods, both primary and backup, that are to be used to communicate between all local and state governments within the emergency planning zone and with the public; and establishing and describing an emergency operations center for use in directing and controlling response functions. Differences in state laws and governing structures, as well as differences in FEMA’s regional offices, can result in off-site emergency plans that meet FEMA regulations in different ways. 
In some states, local officials lead off-site radiological emergency preparedness activities, with support from state emergency officials; in other states, radiological emergency preparedness activities are directed at a regional or state level. For example, the municipal, county, and school district jurisdictions near the Limerick Generating Station in Limerick, Pennsylvania, lead their own jurisdictions’ activities for radiological emergency preparedness with state support, whereas the jurisdictions near the San Onofre Nuclear Generating Station in San Clemente, California, work together on a regional, interjurisdictional planning committee to jointly develop plans and policies and to decide on radiological emergency preparedness. NRC and FEMA oversee licensees’ and local and state authorities’ radiological emergency preparedness, respectively, by reviewing emergency plans. The two agencies also oversee licensees and local and state authorities by assessing their respective capabilities during biennial emergency preparedness exercises. These oversight efforts are intended to provide reasonable assurance that adequate measures can and will be taken in the event of a radiological emergency. Under its responsibilities to protect the radiological health and safety of the public, NRC must find that there is reasonable assurance that adequate protective measures can and will be taken in the event of a radiological emergency before it issues an operating license for a nuclear power reactor. NRC is to base its overall finding of reasonable assurance on (1) its assessment of the adequacy of a licensee’s on-site emergency plans and (2) a review of FEMA findings about whether local and state off-site emergency plans are adequate and whether there is reasonable assurance they can be implemented. NRC officials told us that they also review license conditions for the facility. 
If a licensee changes its emergency plan in a way that is expected to result in a reduction in effectiveness, the licensee must provide the plan changes and supporting documentation to NRC for review and approval to ensure the plan continues to meet the required planning standards. For example, if an off-site fire department is identified and relied upon in the licensee’s emergency plan, but is no longer able to respond to the site because of conflicting responsibilities assigned in local emergency plans, then the licensee must identify plan changes that ensure that the original capability exists in some form. To help maintain its finding of reasonable assurance on-site, NRC has established a reactor oversight process that describes the agency’s program to inspect, measure, and assess the safety performance of commercial nuclear power plants and to respond to any decline in performance. One of the cornerstones of this process is emergency preparedness, and NRC measures the effectiveness of power plant staff in carrying out emergency plans and testing licensee emergency plans during biennial exercises. NRC’s resident inspectors, who are permanently located at the plant, as well as inspectors from its regional offices, are to ensure that the licensee is effectively implementing and reviewing emergency preparedness, according to NRC’s reactor oversight process. As part of the licensing process, NRC also requires nuclear power plants to develop studies of estimated evacuation times in order to identify potential challenges to efficient evacuation in the event of a nuclear power plant incident. These studies are to include an analysis of the time required to evacuate different portions of a nuclear power plant’s 10-mile planning zone. Licensees are to (1) use these evacuation time estimates in formulating protective action recommendations and (2) provide the estimates to local and state authorities for use in developing off-site protective action strategies. 
To account for demographic changes around commercial nuclear power plants, NRC revised regulations, effective December 2011, to require that these evacuation time estimates be updated (1) after every decennial census and (2) any time an increase in the permanent population results in an evacuation time increase of 25 percent or 30 minutes, whichever is less, for one of the potential evacuation areas. In addition, 2011 NRC guidance directs that evacuation time estimates include a consideration of shadow evacuations, defined as an evacuation of the public in areas outside an officially declared evacuation area. Specifically, the guidance states that these evacuation time estimate studies should include a shadow evacuation consideration of 20 percent of the population out to 15 miles away from the nuclear power plant. In addition, NRC guidance states that the shadow population consideration is to account for the extent to which this population’s evacuation would impede the evacuation of those under evacuation orders. As stated above, NRC considers FEMA’s reviews of each local and state authority’s emergency plan for off-site preparedness during the initial licensing process. In reviewing these plans, FEMA uses the planning standards and evaluation criteria collectively identified in its regulations and guidance to determine whether these off-site plans are adequate to protect public health and safety by providing reasonable assurance that appropriate protective measures can be taken off-site in the event of a radiological emergency. In addition, the local and state authorities must participate in an initial full-scale exercise with the licensee. FEMA approves the authorities’ plans as being adequate if the plans and the exercise provide reasonable assurance that the plans are adequate and can be implemented. FEMA then communicates this approval or disapproval to the state and to NRC for consideration in the licensing process. 
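The update trigger in the revised regulations is a simple threshold rule. As an illustration only (the function name and inputs below are hypothetical, not from NRC's regulations or any NRC tool), the "25 percent or 30 minutes, whichever is less" test can be sketched in Python:

```python
def ete_update_required(decennial_census_published: bool,
                        baseline_minutes: float,
                        new_minutes: float) -> bool:
    """Illustrative check of the evacuation time estimate (ETE) update rule.

    An update is triggered if (1) a new decennial census has been
    published, or (2) a permanent-population increase raises the
    evacuation time for an evacuation area by 25 percent or 30 minutes,
    whichever is less.
    """
    if decennial_census_published:
        return True
    increase = new_minutes - baseline_minutes
    # "Whichever is less": the binding threshold is the smaller of
    # 25 percent of the baseline time and a flat 30 minutes.
    threshold = min(0.25 * baseline_minutes, 30.0)
    return increase >= threshold
```

For a 200-minute baseline, 25 percent is 50 minutes, so the flat 30-minute criterion binds and a 35-minute increase triggers an update; for a 60-minute baseline, the 25 percent criterion (15 minutes) binds instead.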
To support the initial licensing process, the licensee and local and state authorities must conduct a full-scale exercise. According to FEMA guidance, the full-scale exercise should include all response organizations that would be involved in a response to an incident at the plant, but subsequent biennial exercises may not need to include all response organizations. Biennial exercises. Each nuclear power plant and its relevant local and state authorities must conduct an exercise every 2 years that demonstrates their abilities to implement their respective emergency plans. Local and state authorities that participate in the biennial exercises submit their emergency plans to FEMA staff before the exercises, and FEMA evaluators review the plans and assess off-site performance based on the activities identified in those plans. Annual letter of certification. To help FEMA determine whether local and state authorities’ plans and implementation activities provide reasonable assurance about off-site radiological preparedness, states that participate in the REP program annually submit a letter of certification to FEMA providing assurance that all required activities have been undertaken as appropriate by local and state authorities. Among other things, the state is asked to certify that the plans have been reviewed for accuracy and completeness and provide documentation that supports off-site planning, such as the information that authorities are required to provide annually to those in the 10-mile emergency planning zone (e.g., classifications for different levels of emergencies and protective action instructions). Staff assistance visits. FEMA assigns a representative to serve as the primary advisor for the local and state authorities near each nuclear power plant, and the FEMA representatives are to visit local and state authorities to answer questions and assist in planning and exercise preparation. 
Local authorities near the Limerick Generating Station in Limerick, Pennsylvania, told us that they are in contact with FEMA representatives several times a month, including during staff assistance visits, and feel they have a well-established relationship with FEMA. FEMA may also review off-site emergency preparedness following events such as electric grid blackouts, intentional harm, or natural disasters in the vicinity of commercial nuclear power plants, which can result in infrastructure damage that can degrade the capabilities of local and state authorities to respond to a radiological incident. For example, natural disasters that destroy roads or bridges around a plant could affect the ability of local and state authorities to effectively conduct evacuations. According to the memorandum of understanding between NRC and FEMA, FEMA is to (1) inform NRC promptly if FEMA questions the continued adequacy of off-site emergency preparedness, (2) review off-site radiological emergency preparedness if it believes that a review is necessary to determine whether off-site preparedness remains adequate, and (3) inform NRC in writing about the results of its review. NRC is to consider the information FEMA provides, in addition to its assessment of the licensee's facility, in deciding to allow the restart or continued operation of an affected operating nuclear power plant. For example, after Hurricane Katrina in 2005, the Waterford Nuclear Generating Station in Killona, Louisiana, was shut down for about 2 weeks. FEMA conducted a review of local and state authorities' ability to respond to a radiological incident and concluded that off-site radiological preparedness was adequate to justify restarting the plant. NRC and FEMA also oversee radiological emergency preparedness by reviewing the biennial exercises conducted by licensees and local and state authorities.
According to NRC and FEMA guidance, these exercises simulate incidents at nuclear power plants that require coordination between licensees, local and state authorities, and federal entities, and provide the opportunity for NRC and FEMA officials to evaluate the emergency plans in action. According to NRC guidance, NRC's inspectors are to observe these biennial exercises to evaluate the adequacy of the licensee's performance, including the operation of the alert and notification system and the individual performance of the emergency response staff. In addition, NRC inspectors are to evaluate the licensee's ability to assess and critique its own performance to identify and correct weaknesses observed during the exercise. For example, one NRC inspector told us that he observes how the licensee responds to an escalation of events, prepares and issues protective action recommendations, and makes assessments of radiological doses during an exercise. The inspector also said that he observes whether the licensee is able to identify its own performance problems and then takes the necessary corrective actions. With respect to off-site evaluation, under agency guidance, FEMA evaluators are to observe the conduct of local and state authorities and write detailed after-action reports that identify planning and performance problems, if any. FEMA is also to work with the local and state authorities to develop an improvement plan that contains information on how the authorities will improve performance or correct problems identified in the after-action report, the personnel responsible for specific actions, and an anticipated timeline for improvement or correction. Local and state authorities are expected to correct the problem or redemonstrate a capability within a specified time frame. If the local and state authorities do not address the problems, FEMA officials told us that they would notify NRC that off-site preparedness was insufficient to protect public health and safety.
Furthermore, NRC officials told us that they could require the plant to shut down until the off-site problems were addressed but that they have never required such a shutdown. FEMA officials told us that FEMA and NRC also participate in some aspects of scenario development. Emergency classification levels are considered trigger points for surrounding authorities, so that when a certain level is set, a series of actions must be performed. The classification levels are the following:

Notification of unusual event—a potential degradation of safety or indication of a security threat that involves no expected release of radiation unless further degradation occurs.

Alert—an actual or potential substantial degradation of safety at the plant or a security event that involves probable life-threatening risk to site personnel or damage to site equipment, and any release is expected to be limited to small fractions of EPA protective action guide exposure levels.

Site area emergency—an actual or likely major failure of plant protection equipment that protects the public or a security event that could lead to the likely failure of or prevents access to plant protection equipment. Any radiation releases are not expected to exceed exposure levels from the EPA protective action guides beyond the site boundary. A site area emergency may, for example, trigger precautionary evacuations of schools and parks.

General emergency—an actual or imminent substantial core degradation or melting with the potential for loss of containment of radiation, or security events that result in an actual loss of physical control of the facility, with radiation releases reasonably expected to exceed EPA protective action guide exposure levels off-site for more than the immediate area. The Three Mile Island incident is the sole general emergency ever to have occurred in the United States.
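The four classification levels above form an ordered scale that triggers off-site actions. A minimal sketch, assuming an escalation model in which each declared level also triggers the actions of every lower level; the mapped actions are hypothetical examples, not NRC-prescribed responses:

```python
from enum import IntEnum

# Illustrative ordering of the four NRC emergency classification levels.
class EmergencyClass(IntEnum):
    NOTIFICATION_OF_UNUSUAL_EVENT = 1
    ALERT = 2
    SITE_AREA_EMERGENCY = 3
    GENERAL_EMERGENCY = 4

# Hypothetical trigger actions for each level (assumptions, not guidance).
TRIGGERED_ACTIONS = {
    EmergencyClass.NOTIFICATION_OF_UNUSUAL_EVENT:
        "notify state and local authorities",
    EmergencyClass.ALERT:
        "activate emergency operations centers",
    EmergencyClass.SITE_AREA_EMERGENCY:
        "consider precautionary evacuation of schools and parks",
    EmergencyClass.GENERAL_EMERGENCY:
        "issue protective action recommendations to the public",
}

def actions_for(declared: EmergencyClass) -> list[str]:
    # Assumes escalation: a declared level triggers its own action plus
    # those of every lower level.
    return [TRIGGERED_ACTIONS[lvl] for lvl in EmergencyClass if lvl <= declared]
```

Because the levels are ordered integers, a declared site area emergency, for instance, carries the notification and alert actions with it.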
Since the Three Mile Island incident, it has been Pennsylvania state policy to apply a single protective action to the full 10-mile emergency planning zone whenever a general emergency is declared at a nuclear power plant in the state. If the situation on-site reaches a level where local and state authorities should consider taking protective actions, the licensee is to recommend the protective actions that off-site authorities should take. The authority responsible for decision making is to take the licensee's recommendation into consideration, together with other considerations, such as radiological dose assessment readings. We noted that local and state authorities near the four power plants we visited have dose assessment teams that are to take radiation readings in the area during an incident and coordinate radiation readings with licensee dose assessment teams to help assess the situation and to help with overall decision making. In some states, the local counties and cities have the primary role in making protective action decisions. For example, around the St. Lucie Nuclear Power Plant in Florida, county officials for several counties told us that they develop individual emergency plans, but they work together during an emergency to coordinate the appropriate protective action decision for the area. According to local and state authorities around the San Onofre Nuclear Generating Station in California, two counties, three cities, one U.S. Marine Corps base, the California Department of Parks and Recreation, and the power plant licensee formed an interjurisdictional planning committee to coordinate an emergency plan for the area. Members of this committee collectively decide on the protective action to take during an incident, based on the licensee's protective action recommendation.
NRC and FEMA require licensees and local and state authorities to provide emergency preparedness information annually to the public within the 10-mile emergency planning zone, and NRC has studied public awareness within the zone. A 2008 NRC study found that the public within the 10-mile zone is generally aware of emergency preparedness and likely to follow instructions, but NRC has not studied likely responses to an incident outside this zone. Without knowing reactions outside the 10-mile zone, NRC cannot be confident that its estimates of shadow evacuations outside the 10-mile zone provide a reasonable basis for planning off-site protective action strategies. NRC and FEMA’s regulations and guidance establish the framework for how licensees and local and state authorities are to inform the public about how to respond during a radiological emergency and provide educational information about radiation. Specifically, NRC regulations require that licensees annually provide basic emergency planning information to the public within the 10-mile emergency planning zone. This information may take various forms, including brochures, telephone book inserts, or calendars. According to NRC and FEMA guidance, these materials must include educational information on radiation, protective measure information such as evacuation routes and relocation centers, information relating to the special needs of the handicapped, and a contact for additional information. FEMA guidance also states that the content of these materials is generally determined through coordination between local and state authorities and the licensees. For example, St. Lucie County authorities in Florida told us that they develop the annual mailing in cooperation with neighboring Martin County authorities, so that both counties use the same materials, and then the licensee prints and distributes this mailing to households and businesses within the 10-mile emergency planning zone around the nuclear power plant in St. 
Lucie, Florida. Licensees told us that they translate these materials into non-English languages and make them available to the public, depending on the demographic makeup of their communities, as directed by FEMA guidance. NRC and FEMA updated their guidance on these public information programs in November 2011 to provide more information about protective actions in public information materials. For example, the 2011 guidance recommends that local and state authorities explain the purpose of staged evacuations, define expectations for those under an advisory, clarify expectations for those who are not at home when a protective action is ordered, and discourage parents from picking up their children from school during an event. Staged evacuation occurs when the population in one area is evacuated, while the population in another area is told to remain indoors until it is their turn to evacuate. According to NRC guidance, the success of staged evacuation depends on public compliance with sheltering in place while the population most at risk is evacuated. NRC's research on the matter has suggested that the public requires clear and direct communication both to evacuees and to those near, but not within, affected areas. FEMA guidance also instructs local and state authorities to conduct outreach to certain special needs populations within the 10-mile emergency planning zone. Specifically, FEMA's guidance instructs licensees and local and state authorities to take the following actions:

Conduct outreach to transient populations. Authorities may, for example, issue pamphlets, stickers, or signs in hotels, motels, and public parks.
Local and state authorities in communities around San Onofre Nuclear Generating Station near San Clemente, California, told us that they provide a card to campers when they enter state parks that tells them what to do and which radio stations to tune into in the event of an emergency at the nuclear power plant; are working with hotels to include emergency information in each hotel room; and have trained hotel managers and staff to make sure they are registered with the appropriate jurisdiction to receive emergency alerts.

Have a plan to identify individuals who need assistance when evacuating. Some local and state authorities told us that they accomplish this by including a card inside the annual mailing that enables residents with special needs to identify their needs and complete and mail back the card to the licensee or to their local authorities, so that the authorities can track special needs individuals during an emergency.

FEMA guidance also instructs local and state authorities to establish coordinated arrangements for dealing with rumors and unconfirmed reports to provide the public with direct access to accurate information during an incident, as well as to provide local and state authorities with information about trends in public inquiries. For example, some local and state authorities told us that they have established dedicated public information telephone numbers and assigned staff to answer questions in the event of an incident. Local and state authorities we spoke with varied in their use of social media forums to monitor and respond to rumors before or during an incident. Some local and state authorities near the four plants we visited said that they use social media to provide preparedness information, while others said they do not use social media. FEMA officials told us that they are currently studying different social media technologies and how information is disseminated to the public.
In addition, local and state authorities are required to conduct annual efforts to brief news media on emergency plans, radiation information, and their points of contact in an emergency. State of Florida authorities go beyond these requirements and told us they conduct two media briefings annually, hold an annual press conference with the Lieutenant Governor, and provide radiological emergency preparedness information sheets to the press. Local and state authorities we spoke with told us that they conduct other voluntary activities to inform the public in the 10-mile emergency planning zone. These activities include informing residents about annual siren testing required in the zone, conducting presentations to community groups and at local events, providing information to parents at schools, and posting information on websites. Authorities we spoke with said that some of these voluntary activities may also occur outside the 10-mile emergency planning zone. NRC and FEMA do not require public information efforts for radiological emergency preparedness outside the 10-mile emergency planning zone. According to NRC and FEMA guidance, for the worst incidents at commercial nuclear power plants, immediate life-threatening radiation doses would generally not occur outside the 10-mile zone and would probably not require protective actions outside the zone. In the 50-mile emergency planning zone, the principal exposure to radiation would be ingestion of contaminated food and water, and this would represent a longer term problem. According to FEMA guidance, the licensee and state authorities are to make information available to farmers and other members of the agricultural industry within the 50-mile emergency planning zone. This information is to describe recommended protective actions for agricultural industries to minimize contamination of the food supply. 
Some state and local authorities told us that they sometimes conduct public education efforts outside of the 10-mile zone that include radiological emergency preparedness information. However, some authorities also expressed concerns about the radiological awareness levels of residents and the potential for shadow evacuations. For example, Los Angeles County authorities told us that one of their greatest concerns in the event of an incident at the San Onofre Nuclear Generating Station is a rumor that results in shadow evacuations, which could clog highways as people who are not in danger choose to evacuate unnecessarily. In 2008, NRC conducted a study with the Sandia National Laboratory to examine public awareness of emergency preparedness information and likely responses within the 10-mile emergency planning zone. The laboratory administered a national telephone survey to random members of households within each of the 63 10-mile emergency planning zones. According to the study results, those surveyed were generally well-informed, with many having taken action to prepare for an emergency. Furthermore, most of those who responded to the survey reported that they believe they are likely to follow directions from local and state authorities in the event of an incident at the nuclear power plant. However, about 20 percent of those responding to the survey reported that they would evacuate even when told evacuation for them was not necessary, referred to as a shadow evacuation. NRC guidance states that a shadow evacuation can impede the evacuation of those under evacuation orders. Also, most of those who responded to the survey and who have children in school reported that they were likely to pick up their children from school in an emergency. Using the findings from this study, NRC updated guidance on protective action strategies and improving public information programs in November 2011.
This guidance recommends that local and state authorities provide more information about the purpose of staged evacuations, in addition to simply describing the different types of protective action strategies. To address the potential for shadow evacuations, NRC officials told us that they used the study results to determine the potential shadow evacuation rate outside the 10-mile emergency planning zone. Specifically, NRC instructed licensees to consider shadow evacuations of 20 percent of the public out to 15 miles from the nuclear power plant when the licensee develops estimates of evacuation times. However, the study surveyed residents inside the 10-mile emergency planning zone, a population that is given radiological emergency preparedness information every year and that is therefore more familiar with the power plant, radiation risks, protective actions, and evacuation routes than the public outside the 10-mile zone. Without this same level of information, those outside the zone may not respond to a radiological incident in the same manner as those inside the zone. Because the survey was conducted on a more educated and aware population, the 20-percent rate for shadow evacuations may not accurately capture the level of shadow evacuations that may occur outside the 10-mile zone. According to NRC and FEMA officials, their agencies have not examined public awareness outside the 10-mile emergency planning zone and therefore do not know if a 20-percent estimate of shadow evacuations is reasonable. Therefore, licensee evacuation time estimates may not accurately consider the impact of shadow evacuations. Without estimates of evacuation times based on a more solid understanding of public awareness outside the 10-mile zone, licensees, NRC, and FEMA cannot be confident about the reliability of those estimates.
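The 20-percent shadow assumption enters an evacuation time estimate as additional vehicle demand on the same road network. A rough sketch, in which the population figures and the persons-per-vehicle factor are illustrative assumptions; only the 20-percent rate out to 15 miles reflects the NRC guidance discussed above:

```python
# Hypothetical vehicle-demand input for an evacuation time estimate:
# everyone under evacuation orders inside the 10-mile zone, plus an
# assumed fraction of the 10-to-15-mile population evacuating voluntarily
# (the shadow evacuation).
def evacuating_vehicles(pop_within_10mi: int, pop_10_to_15mi: int,
                        persons_per_vehicle: float = 2.5,
                        shadow_rate: float = 0.20) -> int:
    ordered = pop_within_10mi              # public under evacuation orders
    shadow = shadow_rate * pop_10_to_15mi  # voluntary evacuees, 10-15 miles
    return round((ordered + shadow) / persons_per_vehicle)
```

With 100,000 people inside the zone and 50,000 between 10 and 15 miles, the shadow assumption adds 10,000 evacuees, i.e., 4,000 additional vehicles; if the true shadow rate outside the zone exceeds 20 percent, this input understates the traffic that would compete with the ordered evacuation.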
If shadow evacuations are not correctly estimated, planning for a radiological emergency may not sufficiently consider the impact of the public outside the 10-mile emergency planning zone. Shadow evacuations outside this zone greater than the assumed 20-percent rate would put additional traffic on roadways, possibly delaying the evacuation of the public inside the emergency planning zone and potentially increasing the risk to public health and safety. The Fukushima Daiichi incident raised questions about the U.S. government’s ability to protect its citizens if a similar incident were to occur here. NRC and FEMA have developed regulations and guidance to help licensees and local and state authorities create and test radiological emergency response plans that are intended to provide reasonable assurance that they can adequately protect public health and safety in the event of a radiological incident at a nuclear power plant. NRC regulations and guidance also direct licensees to annually provide emergency preparedness information to the public within their 10-mile emergency planning zones. Furthermore, the 2008 NRC study by Sandia National Laboratory demonstrated that the public within these planning zones is generally likely to respond to instructions from local and state authorities in the event of an incident. On the basis of this study, NRC estimated that 20 percent of the public within the zones would choose to evacuate even when told evacuation for them was not necessary (shadow evacuations). NRC then directed licensees to consider this same percentage of shadow evacuations for the public outside the planning zone when estimating evacuation times. However, communities outside the 10-mile zone generally do not receive the same level of information as those within the 10-mile zone and therefore may not be as knowledgeable about appropriate conduct during a radiological emergency as those inside the zone and may not respond in a similar manner. 
If the public outside the zone evacuates unnecessarily at a greater rate than expected, these shadow evacuations would put additional traffic on roadways, possibly delaying the evacuation of the public inside the emergency planning zone and potentially increasing the risk to public health and safety. However, because neither NRC nor FEMA has examined public awareness outside of the 10-mile emergency planning zone, they do not know how the public outside this zone will respond. Specifically, they do not know if a 20-percent estimate of shadow evacuations is reasonable. Therefore, licensee evacuation time estimates may not accurately consider the impact of shadow evacuations. Without estimates of evacuation times based on a more solid understanding of public awareness, licensees, NRC, and FEMA cannot be confident about the reliability of those estimates. If shadow evacuations are not correctly estimated, planning for a radiological emergency may not sufficiently consider the impact of the public outside the emergency planning zone.

To better inform efforts for nuclear power plant emergency preparedness and planning, we recommend that NRC Commissioners obtain information on public awareness of radiological emergency preparedness for communities outside the 10-mile emergency planning zone and the likely response of those communities in the event of a radiological incident at a nuclear facility and consider how these results may affect estimates for shadow evacuations outside the zone.

We provided a draft of this report to the NRC Commissioners and the Secretary of the Department of Homeland Security for their review and comment. DHS provided no written comments. NRC provided written comments on the draft report, which are reproduced in appendix II, and technical comments, which we incorporated into the report as appropriate.
NRC found our discussion of emergency preparedness programs at nuclear power plants to be complete, but generally disagreed with our finding on shadow evacuations. Specifically, NRC did not believe that the report accurately captured the technical basis for the NRC’s use of 20 percent as a reasonable estimate of shadow evacuations beyond 10 miles. NRC explained that it has conducted considerable research on evacuations and has confidence that shadow evacuations generally have no significant impact on traffic movement. Lastly, NRC stated that the licensee’s current emergency planning bases continue to provide reasonable assurance of protection of the public’s health and safety. We stand by our finding and the related recommendation that NRC should obtain information on public awareness and the likely responses of communities outside the 10-mile zone in the event of a radiological incident at a nuclear power plant. First, as stated in the report, NRC issued guidance in 2011 that directs licensees to consider shadow evacuations of 20 percent of the population located from 10 miles to 15 miles from a nuclear plant when estimating evacuation times. NRC told us that this shadow evacuation estimate came primarily from a telephone survey it conducted of the public within each of the 63 10-mile emergency planning zones around the country. However, residents inside the 10-mile zone are provided radiological information every year and are therefore more familiar with the power plant, radiation risks, protective actions, and evacuation routes than those outside the zone. Without this same level of information, residents outside the zone may not respond in a similar manner as those inside the zone, and the use of a 20 percent shadow evacuation estimate for the public outside the zone may therefore not be reliable. 
Second, NRC asserts that it has conducted considerable research on evacuations and has confidence that shadow evacuations generally have no significant impact on traffic movement. GAO acknowledges that NRC has conducted research on evacuations, but these studies are generally based on evacuations that have resulted from non-nuclear incidents such as hurricanes, wildfires, and chemical spills. It is unclear whether the public would behave the same for a nuclear evacuation as it would for the incidents that NRC has studied. As we state in the report, NRC's Near-Term Task Force, established after the Fukushima incident, observed gaps in public awareness in the United States. The task force recommended that, as part of a follow-on review, NRC should pursue emergency preparedness topics related to decision-making, radiation monitoring, and public education, particularly to increase education and outreach in the vicinity of each nuclear power plant in the areas of radiation, radiation safety, and the appropriate use of potassium iodide. We believe the task force's finding that there are gaps in public awareness and understanding regarding nuclear incidents supports our recommendation that NRC should obtain information on public awareness and the likely responses of communities outside the 10-mile zone in the event of a radiological incident at a nuclear power plant. Finally, with regard to NRC's confidence that shadow evacuations generally have no significant impact on traffic movement, NRC's 2011 guidance mentioned earlier states that evacuation time estimate studies should include a shadow evacuation consideration of 20 percent of the population out to 15 miles away from the nuclear power plant because the additional traffic generated has the potential to impede an evacuation of the emergency planning zone. Thus, NRC has previously acknowledged in its guidance that traffic from shadow evacuations may impede the intended evacuations.
For these reasons, we believe our recommendation to improve NRC's understanding of the effect of shadow evacuations outside of the 10-mile zone is consistent with NRC's guidance about the potential effects of shadow evacuations on evacuations within the emergency planning zone. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Chairman of the NRC, the Secretary of the Department of Homeland Security, the appropriate congressional committees, and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact Frank Rusco at (202) 512-3841 or ruscof@gao.gov or Stephen Caldwell at (202) 512-9610 or caldwells@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.

Primary responsibilities for emergency response by the nuclear facility licensee and by state and local organizations within the emergency planning zones have been assigned, the emergency responsibilities of the various supporting organizations have been specifically established, and each principal response organization has staff to respond and to augment its initial response on a continuous basis.

On-duty facility licensee responsibilities for emergency response are unambiguously defined, adequate staffing to provide initial facility accident response in key functional areas is maintained at all times, timely augmentation of response capabilities is available, and the interfaces among various on-site response activities and off-site support and response activities are specified.
Arrangements for requesting and effectively using assistance resources have been made, arrangements to accommodate state and local staff at the licensee's emergency operations facility have been made, and other organizations capable of augmenting the planned response have been identified.

A standard emergency classification and action level scheme, the bases of which include facility system and effluent parameters, is in use by the nuclear facility licensee, and state and local response plans call for reliance on information provided by facility licensees for determinations of minimum initial off-site response measures.

Procedures have been established for notification, by the licensee, of state and local response organizations and for notification of emergency personnel by all organizations; the content of initial and follow-up messages to response organizations and the public has been established; and means to provide early notification and clear instruction to the populace within the plume exposure pathway emergency planning zone have been established.

Provisions exist for prompt communications among principal response organizations, to emergency personnel, and to the public.

Information is made available to the public on a periodic basis on how they will be notified and what their initial actions should be in an emergency (e.g., listening to a local broadcast station and remaining indoors), the principal points of contact with the news media for dissemination of information during an emergency (including the physical location or locations) are established in advance, and procedures for coordinated dissemination of information to the public are established.

Adequate emergency facilities and equipment to support the emergency response are provided and maintained.

Adequate methods, systems, and equipment for assessing and monitoring actual or potential off-site consequences of a radiological emergency condition are in use.
A range of protective actions has been developed for the plume exposure pathway emergency planning zone for emergency workers and the public. Guidelines for the choice of protective actions during an emergency, consistent with federal guidance, are developed and in place, and protective actions for the ingestion exposure pathway emergency planning zone appropriate to the locale have been developed.

Means for controlling radiological exposures in an emergency are established for emergency workers. The means for controlling radiological exposures shall include exposure guidelines consistent with EPA emergency worker and lifesaving activity protective action guides.

Arrangements are made for medical services for contaminated injured individuals.

General plans for recovery and reentry are developed.

Periodic exercises are (will be) conducted to evaluate major portions of emergency response capabilities, periodic drills are (will be) conducted to develop and maintain key skills, and deficiencies identified as a result of exercises or drills are (will be) corrected.

Radiological emergency response training is provided to those who may be called on to assist in an emergency.

Responsibilities for plan development and review and for distribution of emergency plans are established, and planners are properly trained.

In addition to the individuals named above, Kimberly Gianopoulos, Assistant Director; Nathan Gottfried; Eugene Gray; David Lysy; and David Messman made key contributions to this report. Important contributions were also made by Elizabeth Beardsley, Marcia Crosse, R. Scott Fletcher, Jonathan Kucskar, Steven Putansu, Dan Royer, Carol Shulman, and Kiki Theodoropoulos.

On March 11, 2011, a tsunami severely damaged the Fukushima Daiichi nuclear power plant in Japan and led to the largest release of radiation since the 1986 Chernobyl disaster. Japanese authorities evacuated citizens within 19 miles of the plant.
GAO was asked to examine issues related to emergency preparedness at nuclear power plants. This report examines (1) federal agencies', licensees', and local and state authorities' responsibilities in radiological emergency preparedness, (2) the activities NRC and FEMA take to oversee licensee and local and state radiological emergency preparedness, and (3) NRC and FEMA requirements for informing the public on preparedness and NRC's understanding of public awareness. GAO reviewed laws, regulations, and guidance; examined emergency plans from licensees and local and state authorities; visited four nuclear power plants; and interviewed federal, local and state, and industry officials. The U.S. Nuclear Regulatory Commission (NRC) and the Federal Emergency Management Agency (FEMA) are collectively responsible for providing radiological emergency preparedness oversight and guidance to commercial nuclear power plant licensees and local and state authorities around the plants. In general, NRC is responsible for overseeing licensees' emergency preparedness at the plant (on-site), and FEMA is responsible for overseeing preparedness by local and state authorities around the plant (off-site). NRC and FEMA have also established a 10-mile emergency planning zone around nuclear power plants. Licensees are responsible for managing on-site radiological emergency preparedness and developing and maintaining plans that define activities that the nuclear power plant must take to prepare for and respond to a potential incident at the plant. Participating local and state authorities within the 10-mile zone must develop protective actions for responding to a radiological incident, including plans for evacuations and sheltering in place. A recent NRC task force considered the adequacy of the zone size and concluded that no change was currently needed but that the size will be re-evaluated as part of its lessons-learned efforts for the Fukushima incident. 
NRC and FEMA conduct activities to ensure that licensees and local and state authorities have adequate plans and capabilities to respond to a radiological incident. For example, NRC and FEMA review emergency plans developed by licensees and local and state authorities to ensure that planning standards are met. In addition, NRC and FEMA observe exercises for each plant that licensees and local and state authorities conduct every 2 years to demonstrate their ability to respond to an incident. NRC also requires licensees to develop estimates of how long it would take for those inside the 10-mile zone to evacuate under various conditions. Licensees are to provide these evacuation time estimates to local and state authorities to use when planning protective action strategies. NRC and FEMA require licensees and local and state authorities, respectively, to provide information annually on radiation and protective actions for the public only inside the 10-mile zone. Those in the 10-mile zone have been shown to be generally well informed about these emergency preparedness procedures and are likely to follow directions from local and state authorities in the event of a radiological emergency. In contrast, the agencies do not require similar information to be provided to the public outside of the 10-mile zone and have not studied public awareness in this area. Therefore, it is unknown to what extent the public in these areas is aware of these emergency preparedness procedures, and how they would respond in the event of a radiological emergency. Without better information on the public's awareness and potential response in areas outside the 10-mile zone, NRC may not be providing the best planning guidance to licensees and state and local authorities. 
To better inform radiological emergency preparedness efforts, GAO recommends that NRC obtain information on public awareness and likely public response outside the 10-mile zone, and incorporate insights into guidance, as appropriate. NRC generally disagreed with GAO's finding, stating that its research shows public response outside the zone would generally have no significant impact on evacuations. GAO continues to believe that its recommendation could improve radiological emergency preparedness efforts and is consistent with NRC guidance. |
Since 1993, DLA has operated with two defense distribution regional headquarters, an eastern headquarters in New Cumberland, Pennsylvania, and a western headquarters in the vicinity of Stockton, California. These regional headquarters provided operational oversight to over 20 geographically dispersed distribution depots. See figure 1 for DLA’s distribution structure prior to the decision to consolidate the regional headquarters. DLA reduced its number of distribution regions from three to two in 1993 and soon thereafter began exploring the potential of having just one. Although a study was initiated, it was not finalized and no proposals or recommendations were approved. During the 1995 base realignment and closure (BRAC) process, DLA examined the military value of the eastern and western regional headquarters and found that they rated nearly equal. At that time, DLA officials concluded that changing the command and control structure would present significant risks in the efficient management of day-to-day operations and the ability to effectively support two major regional conflicts simultaneously. Further, they determined that span of control of future operations and the requirement to continue to accommodate contingency, mobilization, and future total force requirements made two regions essential. Subsequent to BRAC 1995, DLA officials revisited the consolidation issue and began another study. According to DLA headquarters officials, a preliminary assessment was made and the study was terminated before any recommendations were made. Meanwhile, DLA continued to restructure its distribution organization. Then, in February 1997, DLA reinitiated an effort to consolidate the two regions. This was expected to result in the creation of a single distribution command, known as the Defense Distribution Center. Under this plan, the center was to assume the regional distribution functions and manage the two primary distribution sites and all remaining distribution depots. 
Two steering groups were established to work concurrently on the consolidation effort. A “missions and functions steering group” was established to determine the distribution center staffing requirements and organizational design. A “site selection steering group” (hereafter referred to as the steering group) was established to develop a decision process to determine a recommended site. The Principal Deputy Director of DLA was responsible for selecting the site in consultation with the DLA Director and other senior officials who made up DLA’s executive leadership team. The site selection process included the use of a contractor, KPMG Peat Marwick, LLP to assist the steering group in developing the decision model and identifying the data needed. The contractor and an experienced DLA facilities engineer gathered, validated, and evaluated the data used in the model. The steering group had 12 members, with 6 voting members from various offices within DLA headquarters and the Office of the Secretary of Defense; 2 voting members each from the eastern and western regions; and 2 nonvoting members—the steering group chair (a military colonel) and a Department of Defense (DOD) Inspector General representative, who participated as an advisor. The group developed cost and other site selection criteria that were approved by the selecting official and the executive leadership team. (See table 1.) While costs played a major role in the evaluation, the preferred site did not necessarily have to be the site with the lowest costs. Rather, it was the one with the highest point total. Three sites were initially considered for the new center: DLA headquarters at Fort Belvoir, Virginia, and the existing eastern and western regional headquarters. The Fort Belvoir site dropped out before completion of the first data request due to a lack of available space at the headquarters building. The steering group sponsored two rounds of data requests from officials at the competing sites. 
According to KPMG, the responses to the first data request were not used because some of the questions were not clear and the respondents did not fully understand them. Thus, a second data request was required. For the second data request, the questions were redefined in an attempt to be clearer and obtain better information. The steering group expected that responses to the second data request would be analyzed using the decision model to identify the preferred site. Analysis of the second data request was completed on August 11, 1997. Although the steering group was not convened to review the results, the steering group chair and KPMG jointly presented the results to the selecting official (DLA’s Principal Deputy Director), who told us that he was dissatisfied with how some aspects of the approved methodology had been implemented and saw the need for additional data. He requested revisions, including having a second facilities engineer, not previously involved in the process, oversee the collection and validation of some new data—essentially a third data request. The steering group was not made aware of the selecting official’s actions, including the third data request, until the group was given both the results of its work on the second data request and the revisions, on September 15, 1997. On the basis of the results of the revised study, the eastern location was selected by the Principal Deputy Director of DLA as the site for the new center also on September 15, 1997. As of October 1, 1997, DLA had officially established its command and control of all distribution functions at New Cumberland, Pennsylvania. As of February 25, 1998, personnel performing distribution headquarters functions at the western location were reporting to management at the eastern location, and DLA was in the process of implementing other aspects of the consolidated operation at that location. 
The process used by DLA to support the site selection decision for its consolidated distribution headquarters contained several weaknesses, including insufficient data on personnel and facilities requirements, a questionable methodology for evaluating and comparing costs, and subjective responses used by steering group members for two criteria. Subsequent changes to the process, made at the request of the selecting official, did not correct these weaknesses and created concerns about the perception of bias. Also, these actions significantly altered investment and operating cost results between the second and third data requests. (See apps. I and II.) Because information on staff size and functions was being determined concurrently with the site selection process, the steering group was not given complete information on the staffing requirements, organizational design, and facility requirements for the new headquarters. Because these requirements have a substantial impact on space utilization and costs, it is important that they be properly defined in advance of facility space planning. The steering group was initially told by senior DLA management that an estimated 400 persons would be needed to staff the consolidated distribution headquarters. Therefore, the first data request asked officials at the competing sites for facility requirements and costs based on the requirements of 400 staff. Subsequently, the missions and functions steering group provided an estimated personnel strength of 347 persons, which was used in the second data request. In both instances, the competing locations were not given a more detailed breakdown of the operational functions or the number of persons associated with them. Respondents from the regional locations stated that they could only estimate floor plan requirements and then compute associated costs. 
According to a KPMG official, the floor plans developed by each site contained certain unrealistic aspects and did not present a clear picture of the investment costs that would be required. Although the structure and functions of the new headquarters were determined prior to the third data request by the mission and functions steering group, DLA officials considered the information too sensitive to release to the competing locations because it could lead to speculation about layoffs. For the third data collection effort, requested by the selecting official, a DLA facilities engineer provided the regional respondents with the number of functions and the number of staff per function, but not the identity of the functions. As a result, the third data request also resulted in hypothetical floor plans and associated costs. Officials at the eastern location told us that, after becoming aware of the final plans for staffing, they made some good guesses in identifying some of the functions and could have implemented their floor plan if required to do so. However, they did not believe the floor plan presented the optimal solution. For example, although they knew they needed a law library, they did not know which function it would be associated with; therefore, their floor plan resulted in locating their legal staff offices at one end of one building and the law library at the opposite end of a second building. Although some changes were to be expected, according to regional officials, the lack of definitive information meant that neither site’s floor plan would have been fully implemented if selected. DLA officials are still in the process of finalizing the floor plan for their headquarters location in New Cumberland, but the officials said they do not expect the costs to exceed the estimates provided in the site selection competition. 
Nevertheless, questions still exist regarding what differences might have existed between the plans and costs initially provided by the two competing locations if they had had a clearer picture of the functions to be performed and the space requirements. Both steering group members and regional respondents said the lack of information on personnel and facilities hampered their ability to perform their tasks. A KPMG official agreed, noting that although he had been involved in numerous site selections, this was the first time he had participated in a selection process in which the functions and related staffing had not been determined before the site selection process began. Analysts often assign varying weights to evaluation criteria in this type of analysis to distinguish the relative importance of individual criteria. This approach is also used to assign different weights between cost and noncost variables to distinguish their relative importance. However, DLA’s site selection steering group assigned different weights to individual cost criteria, which produced a distorted picture of the comparative costs of the two competing locations. For example, a dollar spent on a service order became more significant, or of more value, than a dollar spent on real property maintenance. The distortion caused by this weighting was so significant for the second data request that, even though the eastern location was $3.8 million more expensive than the western location based on a straight comparison of costs, the assignment of points made it appear that the eastern location had come out ahead in the cost categories. (See app. I for the results of the second data request.) The DOD Inspector General representative who participated in the steering group’s proceedings told us that he had questioned the weighting of individual cost elements and recommended to the group that costs be evaluated on a straight comparison basis between locations. 
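The distortion described above can be illustrated with a minimal numerical sketch. The figures and the proportional scoring rule below are hypothetical assumptions for demonstration only, not DLA's actual data or decision model; they show only the general mechanism by which unequal point weights across cost categories can let the more expensive site come out ahead on points.

```python
# Hypothetical illustration -- figures and scoring rule are assumptions,
# not DLA's actual data or model. Costs are in millions of dollars.
costs = {
    "personnel":      {"east": 12.0, "west": 8.9},
    "service_orders": {"east": 1.0,  "west": 1.5},
    "property_maint": {"east": 2.0,  "west": 2.4},
}

# Unequal maximum points per category: a dollar saved on service orders or
# maintenance "counts" far more toward the score than a dollar saved on
# personnel, even though all are equally real dollars.
weights = {"personnel": 10, "service_orders": 40, "property_maint": 40}

def straight_totals(costs):
    """Unweighted comparison: simply sum raw dollars per site."""
    return {
        site: sum(cat[site] for cat in costs.values())
        for site in ("east", "west")
    }

def weighted_points(costs, weights):
    """Award each category's points in proportion to cheapest cost / site cost."""
    return {
        site: sum(
            weights[name] * min(cat.values()) / cat[site]
            for name, cat in costs.items()
        )
        for site in ("east", "west")
    }

dollars = straight_totals(costs)
points = weighted_points(costs, weights)
# West is cheaper in raw dollars ($12.8M vs. $15.0M), yet east wins on points,
# because the category where west saves the most (personnel) carries few points.
print(dollars)  # {'east': 15.0, 'west': 12.8}
print(points)
```

A straight dollar comparison, as the DOD Inspector General representative recommended, would avoid this inversion because every dollar would then carry the same weight regardless of category.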
Further, KPMG officials told us that they had also told the group repeatedly that this methodology was not comparing dollars equally. Individual steering group members we spoke with could not recall their rationale for a disproportionate ranking of dollars and did not understand the potential impact of their actions until they saw the results of the second data request. Specific criteria for evaluating work environment and commute time were not established because the steering group could not determine data sources for these two noncost criteria. Thus, steering group members subjectively determined point values for these criteria at each location. For work environment, members told us they considered everything from distance between the parking lot and the office to perceived lifestyles to ongoing working relationships. For commute time, because some members had been to each location only twice, their experience with commute time consisted of traveling between their hotel and the site. Some members argued that commute time was not really a valid criterion, because it was a matter of personal choice. The subjectivity of these responses and the inconsistency in what members considered made the determinations of questionable value. These two criteria represented 100 points, or half the points awarded in the noncost category and 10 percent of the total points. DLA’s site selection decision support model was reviewed and approved by its Principal Deputy Director who served as the selecting official, in consultation with DLA’s executive leadership team, before data collection efforts were initiated to ensure the objectivity of the process. However, after being briefed on the results of the second data request analysis, the selecting official requested changes in the analysis and asked for additional data. He stated that he did this to better ensure the comparability of data between the sites. This produced the requirement for a third data request. 
The selecting official decided that equal points should be awarded to each site for personnel costs, that real property maintenance costs be reassessed, and that changes should be made to requirements for furniture and space. The selecting official’s actions to negate the impact of one criterion and to have data reassessed after receiving the results of the analysis and without consulting the steering group created concerns among various steering group members about the perception of bias. The selecting official disagreed with the steering group’s use of personnel cost as a criterion. As a result, following a briefing on the second data request analysis, the selecting official, acting independently of the steering group, decided to eliminate personnel costs as a consideration. The selecting official reasoned that the grade structure of the new headquarters should be independent of the chosen site, making personnel costs irrelevant, so he gave equal points to both locations. (See app. II for the results of the third data request.) The steering group, however, had considered personnel cost to be an important criterion and had based it on average grade levels of the current structure at each location. The steering group reasoned that, although the employees of the new site would be downsized, the grade structure at the new headquarters would be similar to that of the region where it was located. DLA headquarters officials told us that in setting up the new distribution headquarters at New Cumberland, they expect to restructure and downgrade positions at that location to meet the requirements for the new organization. To what extent this would lessen the higher personnel costs for the eastern location is unclear given employee bumping rights and save pay provisions that would likely be associated with such restructuring. 
Also, the importance of personnel costs as a decision factor should not be minimized since savings in this area can mean the potential for significant recurring savings in the long term. The results of the second data request showed that the eastern location had higher average grade levels, resulting in a $3.1 million difference in personnel costs between the two locations over a 5-year period. (See app. I.) The selecting official had the facilities engineer responsible for the third data request reassess real property maintenance costs. The selecting official told us that he did this in the interest of obtaining more realistic data. For the second data request, the steering group used the Navy Public Works Center real property maintenance estimates used in the 1995 BRAC process. These estimates had been reviewed in 1995 by the DOD Inspector General, who found the procedures used to be reasonable and the cost estimates to be consistently generated, generally supported, and reasonably accurate. According to the facilities engineer responsible for the second data request, the data had also been used in DLA’s real property maintenance project development, budgeting, and execution processes. The DLA facilities engineer made minor changes to the Navy Public Works Center data as part of his data validation efforts before approving their use for the second data request analysis. The selecting official told us he believed that the data used in the second data request were not realistic, based on his personal knowledge of the conditions of the two sites, past experience in the distribution area, and knowledge of flaws in repair and maintenance data used for BRAC. He reasoned that the BRAC data were not comparable because the Navy Public Works Center had different people with different perceptions and evaluation criteria assess the individual sites. 
For example, he said the eastern location’s database included about $47,000 for the cost of painting a building with a dryvit exterior, which, according to the facilities engineer, does not need cyclic painting. However, the facilities engineer had already removed this item from the eastern location’s database during his validation of the second data request. A DLA facilities engineer, not previously involved in the process, reassessed real property maintenance costs for the third data request, producing a significant change in costs between the two locations. The results of the second data request had shown a difference of about $643,000 in the real property maintenance category in favor of the western location. The results of the third data request produced a difference of about $182,000 in favor of the eastern location. For the third data request, the facilities engineer went one step further than the previous engineer and had the sites submit justification for removing additional projects from the Navy Public Works Center database. The projects submitted for removal included some cyclical projects that the respondents did not believe were needed during the 5-year time frame covered by the analysis. The eastern location submitted and received approval for removing about $791,000 worth of projects. According to western location officials, they had one such project, valued at $95,000, but they did not submit it because their efforts to have it removed during the second data request were unsuccessful. Both facilities engineers said that if the western location had resubmitted this project for the third data request it would have been considered and may have been removed. The removal of this item alone would not have significantly impacted the cost or point spread in the final analysis of the third data request. 
However, the facilities engineers later identified an error of about $210,000 in the western location’s database for costs that KPMG agreed should have been excluded. The correction reducing the western location’s real property maintenance costs was not made in the final analysis because, according to KPMG, it was identified after the third data request analysis was completed. Although these reductions—the possible removal of the $95,000 project and the $210,000 correction—in the western location’s costs would have changed the dollar and point spread advantage for real property maintenance costs from the eastern site to the western site, they would not have been enough alone to affect the overall outcome of the study. (See app. II.) The selecting official requested revisions to the requirements for furniture and space. These changes significantly affected the relative position of the competing locations within the facilities and information technology cost category. The eastern location’s cost advantage in the investment cost category, which includes facilities and information technology costs, went from only about $19,000 in the second data request analysis to about $1.7 million in the third data request analysis. The facilities engineer who developed the third data request told us that the requirements were instituted to ensure a level playing field. However, some steering group members disagreed with the changes and told us that the requirements gave an advantage to the eastern location, which already had the modular systems furniture required by the new data request. In developing the second data request, the steering group had voted to disregard the selecting official’s direction for the new headquarters to include modular systems furniture. 
While DLA officials told us that such furniture was used at two other newly renovated sites, various steering group members told us they did not believe that an official standard requiring such furniture currently existed within DLA. Nonetheless, this became a requirement under the third data request, consistent with the selecting official’s earlier guidance. The facilities engineer who developed the data request required that the competing regions resubmit floor plans to include the systems furniture and stipulated that its life expectancy not exceed 10 years within the 5-year time frame covered by the analysis. As a result of these new requirements, the western location submitted a cost of about $901,000 for purchasing new systems furniture in the third data request because it could not verify the age of the systems furniture stored in its warehouse. Moreover, in addition to requiring systems furniture, the third data request included other new requirements, such as a minimum of 22 meeting rooms and floor-to-ceiling walls for conference rooms and offices, to meet what the facilities engineer described as an idealistic view of what DLA offices should look like. The engineer said that he used these requirements to ensure that both proposals would be based on comparable work space. Neither the steering group nor officials from the competing locations agreed with all of these requirements. For example, they protested the need for 22 meeting rooms, calling it excessive and wasteful. Officials at both locations told us that they currently had more people with fewer meeting rooms and had encountered no difficulties in doing their work. The changes made for the third data request analysis had the effect of improving the position of the eastern location, including shifting the cost advantage (on an absolute dollar basis) to the eastern location. 
However, it should be noted that these costs were largely one-time costs that could easily be offset over time should there be significant recurring savings in another cost area, such as personnel. Steering group members told us they had no role in the third data collection effort. They received the results of the second and third data requests on the same day, September 15, 1997. According to the steering group’s minutes, the group accepted the results of the third data request because the request was at the discretion and authority of the selecting official. However, they cautioned the selecting official that the process of the third data request would appear biased to outside parties, considering they had not been consulted regarding this phase. The results of the third data request analysis showed that the eastern location scored much better in both cost and point totals than the western location—the eastern location was the least costly by about $2.1 million. However, the results of the second data request showed the western location was the least costly by about $3.8 million. Again, because of problems identified in the process, we could not validate either set of data. Allegations had been made that DLA officials had selected the eastern site for the Defense Distribution Center before the site selection study took place. We found no evidence to validate concerns that the site selection decision was predetermined. Previous studies examined the consolidation issue but left the two regions intact. We found no evidence that the prior studies influenced the current site selection process or outcome. DLA officials told us they had considered consolidating their regional distribution headquarters for a number of years and had eliminated one of three regional headquarters in 1993. Subsequently, they had studied options for consolidating the two remaining regional headquarters; however, the study was not finalized and no proposals or recommendations were approved. 
The issue of consolidating the two regions had also been separately addressed as part of DLA’s BRAC deliberations in 1995. Even though DLA’s BRAC 1995 assessment emphasized the importance of retaining two regions, we learned that following BRAC 1995, DLA officials once again began revisiting the issue and began another study. A DLA official told us that the post-BRAC study was justified because the 1995 BRAC process had produced decisions to close six depots. (See fig. 1.) However, a DLA headquarters official told us that, although a preliminary assessment was made, this study was not completed and no report was issued. We were provided documents that various officials from the western location said raised concerns about whether the decision had been predetermined. However, we found no evidence to support that the information provided in these documents reflected the official position of DLA or influenced the current site selection process. For example, a 1995 briefing document from a previous study indicated a planned future staffing level of 387 at the eastern location and the phaseout of staff at the western location. According to DLA headquarters officials, the briefing document was preliminary and this study was not finalized. The selecting official and DLA officials associated with the most recent consolidation study told us that all the previous studies were outdated, given changes in DLA’s structure. Thus, the selecting official said that he did not consider them in the most recent study effort. Additionally, claims were made by DLA officials at the western location that actions were taken to better position the eastern location in the competition for the consolidated distribution center. These actions included DLA’s moving a general flag officer, its Defense Distribution Systems Center, and the DLA Operations Support Office to the eastern location in 1996. 
While this may have given the appearance that the eastern site was being preselected, we found no support indicating that this was considered in the site selection process. Moreover, DLA officials told us the flag officer would have moved to the western location if it had been the selected site. DLA’s efforts to establish a steering group and formulate decision-making criteria indicate that DLA recognized the need for a credible process to guide its decision-making. However, the process used by DLA to support the site selection for its consolidated distribution headquarters contained a number of weaknesses, and raised questions about the soundness of the decision-making process. The evaluation was completed without adequate information concerning facility requirements, which forced an assessment based on hypothetical costs; technical weaknesses further skewed the results. Subsequent changes to the process, requested by the selecting official, did not correct these weaknesses and created concerns about the perception of bias. Additionally, an incomplete assessment of personnel costs minimized opportunities to fully assess the potential for long-term recurring savings. Although various officials from the western location raised concerns about whether the decision had been predetermined, we found no evidence to validate that the information they provided to us reflected official DLA positions. Also, we found no evidence that prior studies examining the consolidation issue influenced the current site selection process or outcome. Because of the weaknesses in the process supporting DLA’s site selection decision and subsequent questions raised about the soundness of the decision-making process, we recommend that the Secretary of Defense independently and expeditiously reassess DLA’s site selection decision, taking into consideration issues and questions raised in this report. 
DOD provided written comments on a draft of this report, and they are included in their entirety in appendix III along with our evaluation of them. DOD nonconcurred with the report’s findings pertaining to (1) insufficient data on personnel and facilities requirements, (2) questionable cost comparison methodology, (3) subjective evaluation of two noncost criteria, and (4) selecting official’s requested changes affecting the analysis. DOD noted that DLA could have made the site selection decision unilaterally but chose to put a process in place that solicited input from the Office of the Secretary of Defense, the DOD Inspector General, DLA headquarters, and DLA regional experts. It further stated that DLA structured the evaluation process based on other successful models (including BRAC) and adjusted it to accommodate the special considerations felt to be important by representatives of the sites most impacted. We agree with DOD that sound and supportable decision-making processes are needed in making consolidation decisions. Our concern is that the process DLA decided to use was not well implemented. In particular, it contained weaknesses in methodology. The cumulative effect of these weaknesses raised questions about the soundness of the site selection process and the ultimate decision. We believe that the majority of issues raised in DOD’s response were already adequately addressed in our report and, accordingly, we made only minor modifications to the report regarding the requirement for systems furniture. DOD partially concurred with our recommendation. While disagreeing with the report’s findings, DOD nonetheless agreed with our recommendation that an expeditious review of the site selection decision should be done, taking into account issues and questions raised in this report. However, DOD did not set a time frame for doing so. 
Also, DOD did not specifically address that portion of our recommendation that stated that the Secretary of Defense should independently conduct the reassessment. We continue to believe it is important that an independent and expeditious assessment be made by the Secretary. To assess the soundness of the process DLA used to recommend and select a site for the Defense Distribution Center, we reviewed supporting documentation for the criteria, weights, and analysis used in the selection process. We interviewed all participants in the process. Participants included the Site Selection Steering Group—the steering group chair, an Air Force colonel in the DLA Logistics area; a DOD Inspector General representative; four representatives from DLA headquarters offices; four regional headquarters representatives, including two from each region; and two officials from the Office of the Secretary of Defense, including one from the Comptroller and one from the Logistics offices—as well as KPMG Peat Marwick, LLP contractor personnel; DLA facility and installation officials involved in this process; the DLA selecting official, the Principal Deputy Director; the DLA Executive Director, Logistics Management; the Commander and Deputy Commander at Defense Distribution Region West and Defense Distribution Region East, respectively; and DLA officials from both Defense Distribution Region East and Defense Distribution Region West who responded to data requests. We visited both sites evaluated in the analysis and reviewed proposed floor plans. We traced and verified selected data inputs used to support DLA’s analysis to assess the reliability of DLA’s and KPMG’s data validation. We also reviewed documents from BRAC and other DLA consolidation studies, as available, to compare methodologies used. Documentation associated with studies other than the BRAC process was limited. 
To address the question of site selection predetermination, we interviewed DLA officials who had participated in or had knowledge of BRAC studies and DLA consolidation studies and reviewed documents relevant to these studies. We also interviewed participants in the Defense Distribution Center site selection process to determine whether they had prior knowledge of these studies. Additionally, to follow up on allegations of predetermination, we spoke to representatives of the DLA Council of American Federation of Government Employees union locals from both the eastern and western regions. Given the sensitive nature of this assignment, we met with senior DLA officials on two separate occasions to brief them on the results of our work and to solicit their comments on preliminary drafts of this report. We incorporated their comments, as appropriate, to enhance the technical accuracy and completeness of our report. We conducted our work from October 1997 to February 1998 in accordance with generally accepted government auditing standards. We are providing copies of this report to the Chairmen and Ranking Minority Members of the Senate Committees on Armed Services and on Appropriations; the House Committees on National Security and on Appropriations; Members of Congress of the affected congressional districts; the Director, Office of Management and Budget; the Secretary of Defense; and the Director, Defense Logistics Agency. We will also make copies available to others upon request. Please contact me at (202) 512-8412 if you or your staff have any questions concerning this report. Major contributors to this report were Barry W. Holman, James R. Reifsnyder, Kathleen M. Monahan, Jacqueline E. Snead, and Gary W. Ulrich. Both sites were given equal points for the cost of living because comparable data were not available, according to the steering group. For the third data request the facilities engineer changed the threshold for service orders to clarify definition problems. 
He requested that the data request respondents capture the costs only for the maintenance and repair projects greater than $2,000. The selecting official decided to give both sites equal points for personnel costs, because he reasoned that the costs would be identical after formation of the Defense Distribution Center, regardless of the site chosen. The following are GAO’s comments on the Department of Defense’s (DOD) letter dated April 6, 1998. 1. We did not assume that each data request result would be implemented without any changes. Our point is that, while we would not expect the selected site to implement the floor plan exactly as submitted, we do believe the requirements should have been more fully defined and shared with the data request respondents. This was of particular importance in this study because investment costs were a major factor in the site selection criteria. Likewise, better clarity of personnel requirements by function could have led to better estimates of space requirements and cost. 2. We agree that the site selection steering group was given the responsibility to develop the criteria and weights for the decision support model and followed its established process to do so. Although one can assign different weights to costs as compared to a straight cost comparison, it is not a methodology that we have typically seen in such analyses, and as noted in our report, steering group members we spoke with could not recall their rationale for using this approach. Furthermore, both the Defense Logistics Agency’s (DLA) contractor and the DOD Inspector General advised against such a methodology. Varying weights can be assigned to evaluation criteria in this type of analysis to distinguish the relative importance of individual criteria, particularly when distinguishing between cost and noncost variables. 
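The distinction drawn in comment 2 between a straight cost comparison and assigning point weights to individual cost criteria can be illustrated with a small sketch. All sites, dollar figures, and weights below are invented for illustration; they are not DLA's data or its actual scoring rules.

```python
# Hypothetical illustration: awarding points per cost criterion can distort a
# comparison relative to summing actual dollar costs directly.
# All figures and weights are invented; they are not DLA's data.

costs = {  # estimated costs in $ millions, by site and criterion
    "east": {"facilities": 4.0, "moving": 1.0, "personnel": 6.0},
    "west": {"facilities": 6.0, "moving": 2.0, "personnel": 3.0},
}
weights = {"facilities": 50, "moving": 30, "personnel": 20}  # points per criterion

def straight_cost(site):
    """Straight comparison: total estimated dollars; lower is better."""
    return sum(costs[site].values())

def weighted_points(site):
    """Weighted model: the lower-cost site on each criterion receives that
    criterion's full points (a simplified scoring rule)."""
    other = "west" if site == "east" else "east"
    return sum(w for c, w in weights.items() if costs[site][c] < costs[other][c])

# On a straight comparison the sites tie at $11.0 million each, but the
# weighted model awards east 80 points (facilities + moving) to west's 20
# (personnel), producing a decisive "winner" from identical total costs.
```

The sketch shows why a straight dollar comparison gives the more meaningful picture: point weights on individual cost elements can magnify small differences and suppress the total.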
However, assigning different weights to individual cost criteria reduced DLA’s ability to perform the most meaningful comparison of cost. We saw no reason that costs should not have been evaluated on a straight comparison basis to provide a more accurate picture of costs. 3. On the basis of our discussions with steering group members and our review of the site selection backup documentation, we do not believe that the steering group had and used a valid basis for evaluating work environment and commute time. Each steering group member ranked each of the sites between 1 and 10 for quality of work environment and then commute time, with 10 being the most favorable score. Average scores were calculated, and the highest average scores received the maximum points in the site selection analysis. The weakness in this method was that the basis for the ranking was not clearly established. Work environment and commute time were not clearly defined; had they been, the steering group might have identified objective measures for assessing these criteria. No data were used, and group members’ knowledge of commute times and working environment was limited. As a result, some group members used commutes from nearby hotels and based working environment on personal working relationships. Alternatively, we note that another DLA site selection study, pertaining to the issue of consolidating cataloging functions, used quantitative factors to assess quality of life at work, including factors such as individual office space per person; average commute time measured in average number of miles traveled; availability of public transportation; types and numbers of amenities such as day care, gym, and credit union; parking fees; and distance to the airport. Clearly, the approach used in the current study raised questions about the soundness of the evaluation of these two criteria. 
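The averaging method described in comment 3 (each member ranks each site from 1 to 10, rankings are averaged, and the highest average receives the criterion's maximum points) can be sketched as follows. The member rankings and the point value are invented for illustration.

```python
# Minimal sketch of the ranking-and-averaging method described above.
# Rankings run 1-10 (10 most favorable); the site with the highest average
# receives the criterion's maximum points. All values below are invented.

MAX_POINTS = 5  # assumed point value for this noncost criterion

def award_points(rankings):
    """Return each site's average ranking and the points awarded per site."""
    averages = {site: sum(r) / len(r) for site, r in rankings.items()}
    best = max(averages, key=averages.get)
    points = {site: MAX_POINTS if site == best else 0 for site in rankings}
    return averages, points

work_environment = {"east": [8, 7, 9, 6], "west": [7, 6, 8, 5]}
averages, points = award_points(work_environment)
# averages: east 7.5, west 6.5 -> east receives all 5 points. The arithmetic
# is simple; the weakness GAO identifies is that the underlying 1-10 rankings
# had no defined, objective basis.
```

As the comment notes, the flaw is not the averaging itself but the absence of defined measures behind the rankings being averaged.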
According to steering group meeting minutes, senior DLA leadership expressed concern about the subjective evaluation of these criteria. Additionally, while it is true that these criteria constituted only 10 percent of the total points assigned overall, they constituted 50 percent of the noncost criteria. 4. It is unclear to what extent time requirements precluded consultation with the site selection steering group about the changes requested by the selecting official. The time constraint—making the site selection decision by the end of the fiscal year—appeared to have been self-imposed. Additionally, we agree that comparable data and information should have been used in the site selection process. However, to avoid questions about the objectivity of the evaluation, standards need to be clearly stated and agreed to up front. The selecting official had approved the site selection decision support model before data collection efforts were initiated to ensure the objectivity of the process. While it is within his authority and discretion to make changes, he did not do so until after he saw the results of the second data request analysis, and he did so without consulting the steering group. His actions did not correct the weaknesses we identified but resulted in negating personnel costs, reassessing real property, and establishing new facilities standards—the impact of which dramatically altered the resulting costs and point values in the site selection analysis. These actions make it difficult for us to be certain that DLA had the best comparable data it needed for its analysis. 5. We agree that locality pay was not relevant after Fort Belvoir was removed from the site selection process. As DOD stated, the locality pay was the same for both of the remaining sites. However, the steering group was correct in initially identifying personnel costs as an important criterion. 
The importance of personnel costs should not be minimized since savings in this area can mean the potential for significant recurring savings in the long term. As our report notes, the results of the second data request showed that the eastern location had higher average grade levels, resulting in a $3.1 million difference in personnel costs between the two locations over a 5-year period. While it may be difficult to project bumping rights along with voluntary early retirement and separation incentive pay, it can be done. For example, DLA officials planned to conduct a mock reduction-in-force to determine the effects on personnel, but had not yet done so. It should also be noted that, absent definitive data, DLA and other DOD components previously used standard factors in prior base closure rounds to project some personnel impacts and costs. 6. While we agree that the real property maintenance data should have been comparable, DLA’s site selection backup documentation indicated that the data were reviewed and some modifications were made to it to ensure comparability between the two competing locations before it was used in the second data request analysis. Our concern relates to the decision-making process. The DLA Chief of the Real Property Maintenance Team was approved by the steering group as the facilities engineer responsible for validating the data. He told us that he validated the data as a routine matter of prudent facilities engineering management. During his data validation of the responses to the second data request, he removed the requirement for the eastern site to repaint a nonpaintable exterior surface. Subsequently, the selecting official told us that he based his decision to reassess real property maintenance on his personal knowledge and experience. Having the data reassessed after they had already been validated raised concerns among various steering group members about the perception of bias. 7. 
DLA officials suggested that not requiring systems furniture and other facilities requirements would result in a substandard work environment and indicated that these requirements were used at two other DLA locations not part of this site selection process. We have modified our report to reflect DLA’s point about these other locations. However, as noted in our report, members of the steering group told us they did not perceive this as an official DLA standard for furniture and workspace. Pursuant to a congressional request, GAO reviewed the process used by the Defense Logistics Agency (DLA) to select the site for its new defense distribution center, focusing on whether: (1) the process was sound; and (2) there was any evidence that the site selection had been predetermined. 
GAO noted that: (1) DLA officials believed that the consolidation of its eastern and western regional distribution headquarters would produce savings; (2) DLA's establishment of a steering group and decisionmaking criteria indicate that DLA recognized the need for a credible process to guide its decisionmaking in selecting a site for its consolidated distribution headquarters; (3) however, the process used by DLA to support the site selection decision contained a number of weaknesses; (4) among the weaknesses in the process were the absence of sufficient information concerning personnel and facilities requirements for the new center, unrealistic cost comparisons between the competing sites, and the use of subjective data for two noncost criteria; (5) subsequent changes to the process, made at the request of the selecting official, did not correct these weaknesses and created concerns about the perception of bias; (6) the cumulative effect of these weaknesses raised questions about the soundness of the site selection process and the ultimate decision; (7) although various persons from the western location raised concerns about whether the decision had been predetermined, GAO found no evidence to validate these concerns; and (8) likewise, GAO found no evidence that prior studies examining the consolidation issue influenced the current site selection process or outcome.
Established by the Civil Rights Act of 1957, the Commission is a fact- finding agency required to report on civil rights issues. It is required to study the impact of federal civil rights laws and policies with regard to illegal discrimination or denial of equal protection of the laws. It must also submit at least one report annually to the President and the Congress that monitors federal civil rights enforcement efforts. Other reports may be required or issued as considered appropriate by the Commission, the President, or the Congress. The Commission serves as a national clearinghouse for information related to its mission. In addition, it investigates charges by individual citizens who claim to be deprived of their voting rights. The Commission may hold hearings and, within specific guidelines, issue subpoenas to obtain certain records and have witnesses appear at hearings. However, because it lacks enforcement powers that would enable it to apply remedies in individual cases, the Commission refers specific complaints to the appropriate federal, state, or local government agency for action. The Commission’s annual appropriation has averaged about $9 million since fiscal year 1995. It is currently directed by eight part-time Commissioners who serve 6-year terms on a staggered basis. Four Commissioners are appointed by the President, two by the President Pro Tempore of the Senate, and two by the Speaker of the House of Representatives. No more than four Commissioners can be of the same political party. With the concurrence of a majority of the Commission’s members, the President may also designate a Chairperson or Vice Chairperson from among the Commissioners. A Staff Director, who is appointed by the President with the concurrence of a majority of the Commissioners, oversees the daily operations of the Commission and manages the staff in six regional offices and the Washington, D.C., headquarters office. 
The Commission operates four units in its headquarters whose directors and managers report directly to the Staff Director: the Office of Civil Rights Evaluation, Office of General Counsel, Office of Management, and a Regional Programs Coordination Unit. As of June 2004, the Commission employed approximately 70 staff members, including the eight Commissioners and their eight assistants. The Commission also has 51 State Advisory Committees—the minimum required by statute—one for each state and the District of Columbia. The State Advisory Committees are composed of citizens familiar with local and state civil rights issues. Their members serve without compensation and assist the Commission with its fact-finding, investigative, and information dissemination functions. To encourage greater efficiency, effectiveness, and accountability in federal programs, the Congress passed the Government Performance and Results Act of 1993 (GPRA), which requires agencies to develop and issue certain documents to be made available to the public and used by congressional decision-makers. The Office of Management and Budget (OMB) provides guidance to federal agencies on complying with GPRA requirements through its Circular A-11, which is updated annually. In addition, we have published guidance and reports to federal agencies on best practices for complying with GPRA. Under GPRA or OMB guidance, agencies must submit the following three documents to the President, Congress, and OMB: Strategic plan. This document, which must cover a period of no less than 5 years from the fiscal year in which it is submitted, should be updated every 3 years and include the agency’s mission statement and long-term strategic goals. Under GPRA, strategic plans are the starting point and basic underpinning for results-oriented management. Strategic goals are long-term, outcome-oriented goals aimed at accomplishing the agency’s mission. 
In developing goals for their strategic plans, agencies are required to consult with the Congress and other stakeholders. Annual performance plan. This document sets forth the agency’s annual performance goals, which should be linked to its strategic goals. An agency’s annual goals provide the intermediary steps needed to reach its long-term strategic goals. Annual goals should be objective, quantifiable, and measurable. OMB guidance now directs agencies to include budget information in their performance plans and encourages agencies to align resources with annual goals. Prior to their submissions for fiscal year 2005, agencies were not directed to associate program costs in this way. Annual performance report. This document provides information on an agency’s actual performance for the previous fiscal year. This report should provide information on the results of its progress in meeting annual goals. If agencies have not met their goals, they are required to explain what issues are keeping them from meeting the goals and describe their plans for addressing these issues. Several federal agencies have oversight responsibilities in relation to the Commission, including OMB for financial management and OPM for personnel management. OMB, located within the Executive Office of the President, is responsible for preparing and implementing the President’s annual budget and for providing guidance to agencies on how to comply with GPRA. OPM is the central personnel management agency of the federal government charged with administering and enforcing federal civil service laws, regulations, and rules. OPM is also required to establish and maintain an oversight program to ensure that agencies comply with pertinent laws, regulations, and policies. Oversight can also be provided by an Inspector General. 
The Inspector General Act of 1978 provides for Offices of Inspector General to serve as independent, objective offices within certain federal departments or agencies to promote economy, efficiency, and effectiveness as well as prevent and detect fraud and abuse. Agencies that do not have their own Office of Inspector General can obtain Inspector General services from other federal agencies. The Commission has not updated or revised its strategic plan since 1997, as required under GPRA, and its most recent annual performance plan and report contain weaknesses that limit the Commission’s ability to effectively manage its operations and communicate its performance. The Commission has not updated or revised its strategic plan since fiscal year 1997 and has missed two scheduled submissions required under GPRA. According to GPRA and OMB guidance, the Commission should have submitted, updated, and revised its strategic plan in fiscal years 2000 and 2003. The 2003 revision should have covered the period through at least fiscal year 2008. Commission officials told us that the agency is working on developing an updated strategic plan and intends to submit it to OMB by Fall 2004. However, while they were in the process of revising their strategic plan as of June 2004, critical actions, such as consulting with the Congress as required by the act, have not yet occurred, according to Commission officials. Because it has not updated or revised its strategic plan, the Commission has not reexamined its strategic goals since 1997 to affirm their ongoing significance to the agency’s overall mission. The Commission has not determined if changes to its strategic goals are warranted due to factors such as external circumstances or not meeting its annual goals. In addition, because the Commission has not updated its strategic plan, its strategic goals also are not informed by a current analysis of the Commission’s purpose and work. 
Without revisiting its strategic goals, the Commission does not have a firm basis on which to develop its annual goals. The Commission continues to rely on strategic goals from 1997 to formulate its current annual goals. Without a current strategic plan the Commission also lacks a key tool for communicating with the Congress, the public, and its own staff, including informing them of the significance of its work. In addition to serving as a document for external use, the strategic plan can be used as a management tool and, according to OMB guidance, should outline the process for communicating goals and strategies throughout the agency and for assigning accountability to managers and staff for achievement of the agency’s goals. The Commission’s most recent performance plan, for fiscal year 2005, includes several program activities that are referred to as goals; however, it is unclear how these activities will help the agency achieve its strategic goals or accomplish its mission. For example, the plan lists 14 fact-finding projects, each of which has as many as 5 annual goals. Many of these goals, however, are activities, such as holding a public hearing or publishing a report. Similarly, one of the goals in the plan is for each of the Commission’s State Advisory Committees to focus on regular meetings in fiscal year 2005 and on completing their projects. However, this goal is not linked to achieving the agency’s strategic goal to enhance the Committees’ ability to monitor civil rights in the United States. In addition, the annual performance plan does not contain all elements required under GPRA. The plan does not provide information on how the Commission will pursue and accomplish the annual performance goals laid out in its plan. Performance plans must include descriptions of how an agency’s annual goals are to be achieved, including the resources and strategies required to meet the goals. 
However, the Commission’s fiscal year 2005 plan does not discuss the strategies or resources needed to achieve its goals. For example, according to the performance plan, the Commission will update its Civil Rights Directory, but the plan does not indicate which offices will be responsible or describe the strategies and resources needed to carry out this task. The Commission’s performance plan for fiscal year 2005 also does not include budgetary information in accordance with OMB guidance. Instead of associating the cost of its programs with specific annual goals, the plan includes a single amount for its total operations. The potential problems stemming from the Commission’s failure to associate costs with specific annual goals or break down its budget request by goal may be exacerbated by the large gap between the Commission’s budget requests and its actual appropriations. Since 1999, the Commission’s appropriations have averaged approximately 26 percent less than the amount requested. For fiscal year 2004, the Commission based its annual performance plan on a budget request of $15.2 million, but its appropriation for that year totaled only $9.1 million. In addition, the Commission has consistently not revised its annual performance plans to reflect its actual appropriations and illustrate the impact on its annual goals. Although agencies are not required to revise their plans to reflect actual appropriations under GPRA, the fact that the Commission’s plans are based upon a budget that is so much larger than its actual appropriations limits the plans’ usefulness in detailing how the agency will achieve its annual goals and in assessing the impact of appropriations decisions on its planned performance. Furthermore, the Commission’s annual performance plan for fiscal year 2005 does not provide the performance indicators to be used in measuring achievement of each annual goal. 
According to GPRA, an agency’s performance plan shall include performance indicators to be used in measuring or assessing the relevant outputs, service levels, and outcomes of each program activity, and provide a basis for comparing actual program results with annual goals. For some annual goals—particularly those related to promoting greater public awareness, assisting individuals in protecting their civil rights, and enhancing the capacity of the State Advisory Committees—the performance plan does not have any performance indicators. For example, the performance plan states that, in fiscal year 2005, the Commission will develop and implement a coordinated multimedia public service announcement campaign designed to educate the public about important civil rights matters and discourage discrimination while promoting tolerance. However, the plan does not describe measures that can be used to evaluate the attainment of this goal in terms of outputs, such as the number of public service announcements, or outcomes, such as increased awareness of civil rights matters. In the annual performance plan, the Commission does not adequately describe how it will verify and validate the performance measures used to assess the accomplishment of its annual goals. GPRA requires agencies to submit information on how they plan to verify and validate the performance measures used to assess the accomplishment of their annual goals. This requirement helps to ensure that their assessments are valid, reliable, and credible. The Commission’s fiscal year 2005 plan includes a general description of its verification and validation processes, but it does not specify the evaluation methods to be used or identify the limitations or constraints of these methods. For example, the plan states that, in assessing the outcomes achieved through issuance of its reports, the Commission may conduct follow-up meetings with affected agencies, congressional committees, and other interested organizations. 
However, the plan does not describe how these groups will be selected, the data to be collected, how the data will be assessed, or who will be responsible for conducting these meetings or collecting and assessing the data. Although the Commission’s most recent annual performance report, for fiscal year 2003, describes the agency’s achievements as well as reasons for not meeting certain goals, the report does not include several elements required under GPRA and provides little evidence and context for evaluating the agency’s performance. Furthermore, many of the results are descriptive narratives that do not characterize the Commission’s performance. Overall, these problems diminish the report’s usefulness as a tool for managing the Commission’s operations and holding the agency accountable for achieving its goals. The performance report for fiscal year 2003 is incomplete because it does not account for all of the annual goals in the Commission’s fiscal year 2003 performance plan—a fundamental GPRA requirement. The report provides no account of the Commission’s performance for many of the annual goals set forth in its fiscal year 2003 performance plan. In particular, the report does not account for the Commission’s performance for 6 fact-finding projects—a core activity of the agency. For example, the fiscal year 2003 plan stated that the Commission’s fact-finding project, “Media Role in Civil Rights,” would accomplish four goals, including having a public event, yet the performance report provides no account of this project or any description of the agency’s progress in meeting these goals. Furthermore, while the report includes results for other annual goals, the information provided for many of these goals is incomplete or ambiguous. For example, the Commission’s environmental justice project has three goals: publication of a report, report dissemination, and formal consideration of the recommendations of the report by affected agencies. 
However, although the performance report describes the purpose of the environmental justice report as well as its publication and dissemination, the performance report does not indicate whether the Commission obtained any formal response to its recommendations from affected agencies. Similarly, the performance plan stated that each State Advisory Committee Chairperson or Representative would participate in at least one civil rights activity per year. Although the performance report includes extensive narrative describing the work of the State Advisory Committees, it does not indicate whether this goal had been achieved. The Commission’s performance report also does not provide the relevant data needed to assess the achievement of its annual goals. Under GPRA, performance reports must include 3 years of actual performance data in describing the agency’s progress in achieving its goals. While the performance report includes 3 years of performance data for one goal from its fiscal year 2003 performance plan, for the remaining goals, the report does not include 3 years of data, or the data are not relevant for assessment. For example, the performance report includes data describing the type and number of complaints received in fiscal year 2003 and for the 3 prior years. However, the report does not include data—such as the amount of time it took to respond to complaints—that could be used to assess whether the Commission met its goal of responding in a timely manner. Moreover, the fiscal year 2003 performance report provides no plans, schedules, or recommendations for addressing each of the Commission’s unmet goals. GPRA states that, when an annual goal is not achieved, the agency must describe why, outline plans or schedules for achieving the goal, and if the goal was determined to be impractical, describe why that was the case and what action is recommended. 
While the report explains why some goals were not met, it does not provide plans, schedules, or recommendations for addressing these unmet goals. For example, the performance report states that, due to limited resources, the Commission was unable to track its referrals to federal enforcement agencies to ensure that civil rights complaints were received and appropriately processed. However, the report does not provide any detail on whether the Commission would continue to pursue this goal, how it plans to meet the goal in the future, or what actions could be taken to help it do so, such as obtaining assistance from other federal agencies in maintaining accessible and relevant records. In recent years, OMB and OPM have provided budgetary and human capital management oversight for the Commission. OMB’s oversight of the Commission focuses primarily on the budgetary process. In providing oversight of the Commission’s human capital management, OPM conducted reviews in the 1990s and made recommendations to improve the Commission’s human capital and overall management. Although the Commission implemented some of the recommended changes in response, many of the issues that OPM raised in 1996 remained concerns in 1999. Although an Inspector General can provide an additional means of oversight for agencies and independent commissions, the Commission does not have an Inspector General and is not required to have one. OMB’s oversight of the Commission is primarily budgetary, according to OMB officials. In the fall of each fiscal year, OMB is responsible for reviewing the Commission’s annual performance plan, budget request, apportionment request, and annual performance report. Before the Commission’s budget request is due, OMB provides the Commission with guidance and updated information on the submission of GPRA documents. 
With regard to the annual performance plan, OMB generally reviews the long-term goals and performance measures used to determine the Commission’s performance in meeting its goals. OMB also reviews the Commission’s budget request as part of its role in developing the President’s budget. While OMB reviews the Commission’s annual performance plans and budget requests, according to OMB officials, it does not approve or reject these documents, but acknowledges their receipt and sends comments back to the agency as appropriate. However, Commission officials said that OMB has not provided feedback on its annual performance plans in recent years. Each fall, OMB also receives the Commission’s apportionment request, which describes how the Commission would like its appropriations distributed. According to OMB officials, once an apportionment agreement has been reached between the Commission and OMB, the Commission sends this agreement to the Treasury, which issues a warrant to release funds to the agency. Finally, OMB reviews the Commission’s annual performance report to ensure that its funds are spent according to its performance plans and that its goals were met. In addition to reviewing the Commission’s annual budget submissions, OMB reviewed and approved the Commission’s February 2004 request to reduce its personnel costs by offering voluntary separation incentive payments, or “buyouts,” to encourage staff in certain job classifications to voluntarily leave their jobs. The Commission requested authority to offer buyouts to six employees. OMB officials discussed this request with Commission officials and approved the request in April. The Commission offered buyouts to all employees who had 3 or more years of government service in several job classifications. The Commission granted buyouts to three staff members, who accepted. 
OMB also is responsible for providing oversight of agencies’ management, including the Commission, but this oversight has been limited because of the small size of the agency and its budget, according to OMB officials. OMB officials told us that the agency does not provide the same level of oversight for organizations with small budgets and staff, such as the Commission, as that provided for larger organizations, such as the Securities and Exchange Commission. For example, even though the Commission does not have a current strategic plan, OMB has not requested an updated plan from the Commission, according to Commission officials. In addition, OMB officials told us that they have taken no actions in response to our October 2003 findings that the Commission violated federal procurement regulations and lacked key management practices because the volume of purchasing by the Commission is far below the levels that concern OMB. For example, the Commission’s largest contract is for less than $160,000. According to OPM officials, OPM provides the Commission with human capital oversight through its audits of agencies’ human capital management systems, which can be conducted on a cyclical basis every 4 to 5 years or on request, as needed. In 1996 and 1999, OPM conducted two reviews of the Commission’s human capital management systems and made recommendations in each report for improvements. In analyzing the Commission’s response to OPM reviews, we focused on six recommendations from OPM’s 1999 report that involved systemic changes to the Commission’s human capital management systems. As of August 2004, the Commission had not implemented five of these six recommendations. 
Findings from these reviews included the following: In its November 1996 report, OPM’s main finding was that the Commission was an agency “badly in need of managerial attention,” citing the Commission’s poor documentation practices, lack of credible grievance and performance management systems, and employees’ highly negative perceptions of the Commission’s organizational climate. In its October 1999 report, OPM found that, although the Commission’s human capital management systems complied with Merit System Principles, its human resource practices continued to have weaknesses associated with accountability, delegation, recruitment, performance appraisals, and incentive awards. The report noted that these concerns were similar to those OPM had identified in the earlier report. For example, as of 1999, the Commission had not established an internal self-assessment program as OPM recommended in 1996. OPM made 16 recommendations in 1999 to help the Commission improve its management of human resources. As of August 2004, we found that the Commission had not implemented five of six broader, systemic recommendations made by OPM. (See appendix II for descriptions of these six OPM recommendations and the Commission’s responses.) Although OMB, OPM, and GAO have identified continuing management and accountability problems at the Commission in the course of their reviews, annual budgetary reviews and management reviews based on congressional requests or periodic audit cycles may not be sufficient to resolve such longstanding concerns. An Inspector General can provide an additional means of oversight for federal agencies, including independent commissions and boards, but the Commission currently has no such oversight. Several small agencies have obtained such services for audits and investigations through memorandums of understanding with the General Services Administration. 
However, the Commission does not have an Inspector General of its own, nor does it obtain these services from another agency. The Staff Director told us that, although he has thought about the possibility of obtaining these services, he does not believe the Commission has the funds needed to obtain the services of an Inspector General. Over the past decade, we reviewed the Commission’s travel, management, and financial practices and made recommendations for improvement. The Commission took some actions in response to the recommendations in our 1994 and 1997 reports. However, the Commission has not implemented three of the four recommendations in our October 2003 report. This most recent report included several recommendations to improve the Commission’s management and procurement practices. The Staff Director issued a letter in June 2004 in response to this report disagreeing with most of the recommendations and describing the actions taken by the agency. We also interviewed Commission officials to clarify their responses to the recommendations in our October 2003 report. Although the Commission took various actions to address the recommendations in our 1994 and 1997 reports, many similar problems persist. In 1994, we reported on problems identified in the Commission’s handling of travel activities for specific individuals and made recommendations for improvement. For example, in response to our finding that Commissioners had not submitted travel vouchers in a timely manner, we recommended that the Commission direct the Commissioners to do so, as required by federal travel regulations. In 1995, the Commission issued revised travel procedures that incorporated our recommendation for timely filing of travel vouchers by the Commissioners. (As part of a separate assignment, we are currently reviewing the Commission’s fiscal year 2003 financial transactions, including travel-related transactions.) 
In 1997, we found numerous operational issues, reporting that the management of the Commission’s operations lacked control and coordination; its projects lacked sufficient documentation; senior officials were unaware of how Commission funds were used and lacked control over key management functions; and records had been lost, misplaced, or were nonexistent. In the report, we made recommendations for specific changes to the Commission’s administrative procedures and project management systems, and the agency took some actions in response. However, in 2003, we found that the actions taken did not fully address the problems identified in our 1997 report. In October 2003, we reported that, although the Commission had made some improvements in its project management procedures for Commissioners and staff, the procedures lacked certain key elements of good project management, such as providing Commissioners with project cost information and opportunities to contribute to Commission reports before they are issued. We also reported that the Commission lacked sufficient management control over its contracting procedures and that little, if any, external oversight of the Commission’s financial activities had taken place, since no independent accounting firm had audited the Commission’s financial statements in at least 12 years. To address these issues, we recommended that the Commission
1. monitor the adequacy and timeliness of project cost information provided to Commissioners,
2. adopt procedures that provide for increased Commissioner involvement in project implementation and report preparation,
3. establish greater controls over its contracting activities in order to be in compliance with the Federal Acquisition Regulation, and
4. take immediate steps to meet the financial statement preparation and audit requirements of the Accountability of Tax Dollars Act of 2002 for fiscal year 2004. 
The Staff Director generally disagreed with these recommendations, and the Commission has not adopted three of them. In their June 2004 letter responding to our report recommendations, Commission officials asserted that the first two recommendations were a matter of internal policy to be decided by the Commissioners. In addition, they disagreed with the need for the third recommendation and asserted that they were taking steps to address the last recommendation. Although they disagreed with the third recommendation, the Commission hired a contracting and procurement specialist starting in December 2003 to provide supplemental services, and the Staff Director acknowledged that the Commission could improve in this area. As of September 16, 2004, the Commission had yet to contract with an independent auditor to prepare for meeting the requirements of the Accountability of Tax Dollars Act of 2002. (See appendix III for further details on the Commission’s responses to these recommendations.) With its history of management problems, the Commission faces significant challenges. Strategic planning is not a static or occasional event. Instead, it is a dynamic and inclusive process that, if done well, is integral to an organization’s entire operations. By not devoting the time and resources required to update its strategic plan, the Commission has no assurance that it is pursuing long-term goals that reflect the needs of its key stakeholders and that address the many management challenges presented by the shifting external and internal environments in which it operates. Furthermore, the Commission lacks a foundation to use in aligning its daily activities, operations, and resources to support its mission and achieve its goals. Without using the GPRA planning process to periodically reexamine its long-term goals and set its course, the Commission is not in a strong position to set relevant annual goals or develop measures for assessing whether it has achieved them. 
Given the consistent shortfall between the Commission’s annual budget requests and its appropriations over the past decade, it is even more important for the Commission to chart a strategic course that is realistic. Although the Commission has improved some policies and practices in response to recommendations from OPM and GAO, the problems that remain are still cause for concern, particularly given the lingering nature of the Commission’s management difficulties. Unless the Commission systematically monitors its implementation of OPM’s and GAO’s recommendations, it is not likely that it will significantly improve its management and human capital management systems. Finally, annual budgetary and other reviews based on periodic cycles or specific requests may not be sufficient to address longstanding concerns about the Commission’s management and accountability. Because the Commission does not have an Inspector General, it does not appear likely that it will have the additional independent oversight needed to address management problems that others have identified and to hold itself accountable for resolving them. To strengthen the Commission’s accountability, the Congress should consider legislation directing the Commission to obtain the services of an existing Inspector General at another agency. 
To strengthen the Commission’s management practices, we recommend that the Commission
update its 5-year strategic plan according to GPRA’s required schedule and include all elements required under GPRA and OMB guidance;
ensure that future annual performance plans include all elements required under GPRA and OMB guidance, reflect funding levels requested in the President’s Budget, and are revised if necessary to reflect actual appropriations;
ensure that annual performance reports include all elements required under GPRA and OMB guidance;
implement all of the recommendations in OPM’s and GAO’s previous reports;
include the status of the Commission’s efforts to implement OPM’s and GAO’s recommendations in its GPRA plans and reports; and
seek the services of an existing Inspector General from another agency to help keep the Commission and the Congress informed of problems and deficiencies and to conduct and supervise necessary audits and investigations.
We provided a draft of this report to the Commission for comment. The Commission’s formal comments and our responses are contained in appendix I. In responding to our draft report, the Commission did not comment on our recommendations and disagreed with most of our findings and conclusions. We have carefully reviewed the Commission’s concerns and overall do not agree with its comments on our findings and conclusions. For example, the Commission disagreed with our GPRA findings, asserting that its GPRA processes were appropriate and sound for an agency of its size. The Commission also asserted that, as a small agency, it was not cost-effective or efficient for it to institute its own accountability system for managing its human resources, as OPM had recommended. The Commission similarly cited its small size in asserting that it would be an “extreme” challenge to institute our October 2003 report recommendations. We disagree with these assertions. 
The Commission’s size is not relevant here: Size does not mitigate the need for the Commission to address longstanding management and human capital problems identified in previous OPM and GAO reports. Furthermore, instead of implying that it is acceptable for the Commission as a small agency to operate under diminished expectations for GPRA compliance, the Commission could make use of GPRA’s planning and reporting framework to strengthen itself as an agency. For example, the Commission could use GPRA's planning framework to update and sharpen its goals, clearly identify the strategies and resources needed to achieve those goals, and improve its management and human capital practices. The Commission could then also use GPRA’s reporting framework to demonstrate the progress it has made towards achieving those goals. In addition to providing these comments, the Commission criticized our approach to our work, asserting that the draft report contained inaccurate and incomplete analyses and that we rushed to complete the report within an artificially constrained timeline. We strongly disagree. At all times, we scoped, designed, and conducted this engagement in accordance with applicable professional standards and our quality assurance requirements. Furthermore, many of the Commission’s comments about how we conducted our work were themselves misleading and inaccurate. For example, we did not suddenly and drastically change our focus, as the Commission asserted. In our May 2004 entrance conference with the agency, we noted our specific focus on certain areas, including the Commission’s GPRA products and the agency’s actions in response to OPM and GAO recommendations. Our focus on oversight of the Commission and GPRA requirements remained consistent throughout the assignment. 
As we designed our work, we formulated our objectives and methodologies more specifically, and we shared our refocused objectives with the Commission when we completed the design phase of our work in July. We therefore continue to believe that our findings, conclusions, and recommendations are sound. The Commission’s detailed comments and our responses to them are reproduced in appendix I. We incorporated clarifications in the report as appropriate. Unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after the date of this letter. At that time, we will send copies of this report to the U.S. Commission on Civil Rights and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me or Revae Moran at (202) 512-7215 if you or your staffs have any questions about this report. Other contacts and staff acknowledgments are listed in appendix III. In general, the U.S. Commission on Civil Rights’ comments on our findings make four broad assertions in addition to numerous specific points of disagreement. We address both in the following sections. The four broad assertions in the Commission’s comments on our draft report, as well as our responses to these assertions, are summarized below. The Commission asserted that we rushed to complete the report within an artificially constrained timeline and did not take the time to conduct thorough fact-finding or analyses to ensure a quality report. We disagree strongly with this assertion. To the contrary, we scoped, designed, and conducted this engagement in accordance with applicable professional standards and our quality assurance requirements. 
To further ensure the quality of our work, we included an initial design period, in which we built upon our considerable knowledge of the Commission from previous GAO reports and obtained further information as needed. In designing and conducting our work, we also consulted with our internal experts on GPRA and other issues. Far from rushing through the engagement, we in fact extended our design period so that we could perform high-quality work within a timeframe useful to our congressional requesters. At the end of this design period in July, we narrowed our scope, deferring a potential objective on the organizational structure of the Commission. Refining the scope of an engagement following a design phase is not an unusual audit practice. By agreement with our requesters, we did not include work on the Commission’s organizational structure not because of any arbitrary decision, as the Commission alleges, but rather to enable us to complete our work on time and in accordance with our quality standards. Our focus on GPRA, oversight, and the Commission’s response to our October 2003 report recommendations remained consistent throughout the assignment, from initial notification to report drafting. Furthermore, in our May 2004 entrance conference with the agency, we noted our focus on these areas, and in July 2004, at the end of our design phase, we shared these objectives and our approach with Commission staff in an interview. The Commission asserted that we did not follow up with its staff as needed to obtain answers to questions and that they expected us to interview more people, both staff and Commissioners, in the course of our work. We disagree. Our methodology called for analyzing Commission documents, pertinent legislation and guidance, and various reports by OPM and GAO. 
Although at the beginning of our project we envisioned interviewing key managers and all of the Commissioners as part of possible work on the Commission’s organizational structure, these interviews became unnecessary when we decided to focus on the Commission’s GPRA plans and reports, oversight of the Commission, and the agency’s response to our previous report recommendations. As noted in our report, to obtain information for these objectives, we conducted interviews with the Staff Director, Special Assistant to the Staff Director, Human Resources Director, and Budget Chief. We also followed up with e-mails and telephone calls as needed. Finally, in our exit conference, we presented all of our findings and provided an opportunity for Commission staff to comment on our findings and provide technical corrections. At that meeting, Commission officials provided a few comments, but no technical corrections; with few exceptions, they did not disagree with the facts or conclusions we presented. The Commission asserted that it cooperated with us fully, at all times. However, while our working relationships were professional and Commission officials were usually responsive in providing documents as requested, we do not agree that Commission staff cooperated fully with us throughout our work. Obtaining interviews with the Commission and key staff was frequently difficult, with each one requiring a minimum of 3 weeks to schedule. For example, although we notified the Commission on April 19, 2004, about our planned work and called shortly thereafter to schedule an initial meeting, it took numerous calls to set up our entrance conference on May 20, 2004. This delay in scheduling the initial meeting occurred despite our reference to the need for a rapid response. 
In addition, since it was difficult for Commission officials to find the time to meet with us, we combined our entrance conference with that of another GAO team that was examining the Commission’s financial transactions so that they would not also experience a delay in starting their work. We had similar difficulties scheduling other meetings as well. The Commission has repeatedly asserted that it is a small agency, with a budget of approximately $9 million and fewer than 70 staff— information that we noted in the draft report. In commenting on the draft report, the Commission asserted that its GPRA and personnel processes were appropriate and sound for an agency of its size. The Commission also cited its size in asserting that it would be an “extreme” challenge to institute our October 2003 recommendations. We disagree with these assertions. The Commission’s size is not relevant here: Size does not mitigate the need for the Commission to address longstanding management and human capital problems identified in previous OPM and GAO reports. Furthermore, the Commission could make use of GPRA’s planning and reporting framework to strengthen itself as an agency. For example, the Commission could use GPRA's planning framework to update and sharpen its goals, clearly identify the strategies and resources needed to achieve those goals, and improve its management and human capital practices, as recommended. The Commission could then also use GPRA’s reporting framework to demonstrate the progress it has made towards achieving those goals. In addition to making these broad comments, the Commission disagreed with our findings more specifically. Our detailed responses to the Commission’s comments follow. 1. We disagree that the Commission has implemented or was “engaged in implementing” all of the recommendations in our October 2003 report. See responses 2 through 6. 2. 
Although the Commission reported that it has taken various steps to meet the financial statement preparation and audit requirements of the act, we disagree that it is in a position to meet the act’s requirements for financial statement preparation and audit this year. As of September 16, 2004, the Commission had not hired an independent auditor to conduct this work. The agency’s ability to meet the act’s requirements in the less than 2 remaining months is highly doubtful, since the agency has not had its fiscal activities independently audited in more than 12 years, and no audit work has begun. 3. The Commission’s assertion is not accurate: The Commission has not implemented our recommendation to provide for increased commissioner involvement in project implementation and report preparation. In fact, as the Commission noted in its comments, the Commission has “continued to follow longstanding Commission policy on Commissioner-staff interaction.” As we reported in October 2003, this policy does not provide for systematic Commissioner input throughout projects. Nothing precludes the Staff Director from providing additional information about projects and report status to Commissioners as a matter of good project management and quality assurance. Furthermore, the Staff Director could respond in various ways to Commissioners’ concerns for increased input without obtaining a formal vote to change the Commission’s procedures. For example, Commissioners could receive a summary of preliminary facts and findings or an outline of a planned report. 4. The Commission asserted that there is no requirement that the Commissioners’ judgment must be substituted with our judgment on policy decision matters. While our recommendations are not requirements, we provide recommendations in our reports in accordance with our statutory responsibilities to investigate the use of public money, analyze agency expenditures, and evaluate the results of federal programs and activities. 
Our reports and recommendations provide agencies with the information necessary to improve their mission performance and Congress with the information necessary for oversight, including the development of legislation that will help agencies in their efforts. As the administrative head of the Commission, the Staff Director is authorized to make administrative changes that are consistent with the law and Commission policies. While it may be difficult at times to distinguish between an administrative matter and a policy matter, we are not aware of any Commission policies that would prevent the Staff Director from implementing our recommendations, nor would doing so be contrary to the law. 5. The Commission asserted that it has provided the Commissioners with project cost information and made efforts to monitor the adequacy and timeliness of the information given, as recommended in our October 2003 report. According to Commission staff, the agency provides cost information on a quarterly basis to Commissioners and has done so since the last quarter of fiscal year 2003. However, we continue to believe that having regular project cost reports, such as monthly reports, would enhance the Commissioners’ ability to plan for and monitor projects during their monthly meetings. Monthly reports would allow greater accountability for the projects by integrating cost information in a timely manner into project management. Since the Commission’s Budget Chief told us that he prepares monthly project cost reports for the Staff Director, preparing reports for the Commissioners’ monthly meetings should not be unduly burdensome. To ensure that cost reports can be best used to strengthen project monitoring and management, these reports should also be provided to Commissioners shortly after the month ends. 6. We believe that it is a deficiency for the Commission to provide quarterly cost reports to Commissioners 3 months after a quarter has ended. 
Our October 2003 recommendation called for the Commission to monitor the adequacy and timeliness of project cost information provided to Commissioners. In our view, information provided in 3-month-old project cost reports cannot be considered timely. 7. We believe that it is a deficiency for project cost reports to omit mention of the status of planned projects. The Commission’s second quarter 2004 cost report for the Commissioners did not indicate the status of 4 of the 12 projects. To be useful for decision making and monitoring, the project cost report should have noted that some planned projects had already been completed and that work on others had not yet begun, so that the Commissioners monitoring the projects would have been aware of their status. 8. We disagree that our conclusions on the Commission’s response to OPM’s 1999 recommendations are inaccurate and incomplete. See responses 9 through 16 as well as our response to broad assertions in the Commission’s comments. 9. The Commission asserted that we did not indicate concerns about its implementation of OPM’s 1999 recommendations until officials received the draft report. However, it is not clear to us why Commission officials should have been unaware of the direction of our findings. Four Commission officials, including the Human Resources Director, participated in two major meetings, one of which focused extensively on the Commission’s actions in response to six of OPM’s human resources recommendations. The Human Resources Director also participated in our exit conference in which we summarized our findings, including our finding on the Commission’s responses to oversight by OPM. We noted explicitly during this meeting that we had focused on certain recommendations and had found that the actions taken by the Commission were limited. 
The Human Resources Director did not provide any corrections or technical comments on the agency’s human resources practices during this meeting, nor did any other Commission official. In addition to these meetings, we obtained additional documents, comments, and answers to questions by e-mail from the Commission during the course of our work and incorporated this information into our draft report as appropriate. 10. The Commission asserts that we did not dispute that it complied with 11 of OPM’s 1999 recommendations. This statement is incorrect. We selected 6 of OPM’s 16 recommendations for analysis; we did not analyze the Commission’s response to the remaining recommendations. We noted in our report that the Commission had implemented 1 of the 6 recommendations that we analyzed. We have clarified our methodology in the final report. See comment 11 for more information on our methodology. 11. The Commission asserted that we arbitrarily decided to focus on 6 of the 16 recommendations OPM made in 1999. We strongly disagree. To follow up on the Commission’s response to OPM’s recommendations, we judgmentally selected six recommendations that had broader, more systemic implications for the agency. At no point, however, did we pre-select these recommendations in order to emphasize a particular outcome or “cast the agency in a bad light.” We have clarified the basis for our selection of these recommendations in the final report. 12. We do not agree that the Commission has implemented OPM’s recommendation to delegate HRM authority to managers. OPM’s report stated that “the Staff Director retains final approving authority for most decisions, including appointments, promotions, and performance ratings.” Policies described in OPM’s 1999 report remain in place at the Commission today. For example, in our interviews this year, Commission officials told us that the Staff Director must approve all hiring and promotion decisions as well as managers’ evaluations of employees. 
As of February 2004, according to the Commission’s administrative manual, “the staff director retains approval authority” as well for quality step increases, accomplishment awards, performance awards for its employees, and recommendations to OPM for other awards. While the Commission has taken certain actions to improve its human resources management practices, such as developing an employee handbook on human resources matters and providing managers with OPM’s HRM Accountability System Development Guide, the Commission has not delegated human resource authorities to managers in all program areas, as OPM had recommended. 13. The Commission’s assertion that it was acting in accordance with Merit System Principles is misleading and largely irrelevant for this discussion. OPM’s recommendation for an accountability system stemmed from its analysis of the Commission’s human resource accountability and internal self-assessment efforts (an area of weakness also identified in its 1996 review). In 1999, OPM found that the Commission had not “developed an effective system to hold managers accountable for HRM-related decisions.” OPM further noted that the “Staff Director retains final approving authority for most decisions… leaving managers uncertain about their own accountability when making these HRM decisions . . . Employees see this lack of accountability too, saying that the supervisory chain of command is unclear and that they are unsure of where work assignments and agency work priorities originate. 
Also, employees report that their jobs do not make good use of their skills and abilities; that they are not satisfied with their jobs; and that they do not feel free to disclose waste, fraud, and abuse without fear of reprisal.” OPM noted that an internal self- assessment program was “urgently needed to assure accountability.” References to the Commission’s delegated examining authority and general compliance with OPM’s Merit System Principles are not pertinent to the finding that led to this recommendation. 14. The Commission is inaccurate in asserting that OPM “may have made a recommendation for additional improvement of the Commission’s personnel processes.” As we noted in our report, OPM made 16 recommendations for improvement of the Commission’s human resource management practices in its 1999 report. The Commission is also inaccurate in asserting that OPM found that “those basic processes were already sound,” and that “there was no real need to implement this recommendation [on using OPM’s HRM Accountability System Guide], since Commission operations were good.” This is an inaccurate reading of OPM’s 1999 findings and recommendations. While OPM found overall that the Commission’s human resource program complied with the Merit System Principles, OPM also urged the Commission to consider its recommendations in five broad areas of human resource management. In the executive summary of the 1999 report, the first of 12 bullets highlighting OPM’s findings summarizes concerns raised in its earlier 1996 report on the Commission; acknowledges that the Commission “has improved its administration of the HRM program, particularly in the recruitment and placement area”; and continues by saying, “However, the other concerns we identified in 1996 continue to require attention.” Of the 11 remaining bulleted findings in OPM’s summary, 7 describe problem areas, 3 are positive, and 1 is mixed. 15. 
The Commission asserts that using OPM’s guide instead of designing its own system is appropriate because it is a small agency. However, the Commission’s assertions reflect a misunderstanding of OPM’s guidance to agencies and of what constitutes an accountability system for human resources. According to OPM, an HRM accountability system is a process and should be seen as a continuous cycle. This systemic, continuous process “enables an agency to identify, collect, and use the information or data on which accountability is ultimately based.” It includes identifying the agency’s strategic goals, including human resource goals; developing performance measures and a baseline to assess whether human resource goals are being met; and using this information to make improvements. The accountability process also requires cyclical, periodic reassessment. The Commission has taken various actions to improve its human resources management, including updating several administrative instructions, conducting an employee survey in fiscal year 2000, and developing an employee handbook. However, the Commission has not developed an accountability system—an ongoing process involving goal setting, evaluation, improvements, and reassessment—to address the concerns raised in OPM’s report. 16. We cannot agree that goals established in 1997 address and implement OPM’s 1999 recommendation, nor do we agree that our referring to the Commission’s strategic plan in this discussion is unfair. The Commission’s strategic plan was developed in 1997 and remains the Commission’s only strategic plan. In 1999, OPM recommended that the Commission’s strategic plan include human resources elements that OPM did not find in the Commission’s 1997 plan. In examining goal six in the Commission’s 1997 strategic plan, OPM “did not find that a link between HRM and agency mission accomplishment has been made apparent in the Strategic Plan. 
Further, the Strategic Plan does not list specific HRM goals and measures that could be used to assess the HRM function’s ability to effectively and efficiently support agency mission accomplishment. We found no evidence that key measures and/or outcome indicators are used to track its efforts to achieve HRM goals.” The Commission’s assertion that the 1997 strategic plan contains human resources goals, measures, and indicators is therefore neither accurate nor relevant: The 1997 plan does not include human resources measures and indicators and it was not part of the Commission’s response to OPM’s recommendation because it was developed 2 years before OPM made this recommendation. 17. The Commission’s statement that we examined its GPRA processes is incorrect, and its description of the processes it used in 1997 to develop its strategic plan and first performance plan and report is irrelevant. Our objective was to assess the Commission’s compliance with GPRA’s requirements for agency strategic plans, annual performance plans, and annual performance reports. We did not focus on the agency’s process for developing these GPRA plans and reports, nor did we analyze the Commission’s initial performance plan for fiscal year 1999 or its initial performance report for fiscal year 1999. As noted in our report, we analyzed the Commission’s most recent performance plan (for fiscal year 2005) and the most recent performance report (for fiscal year 2003). We also compared the Commission’s plan for fiscal year 2003 to its performance report for the same year. 18. The Commission is incorrect in asserting that its descriptions of completed studies in its performance report provide information equivalent to performance indicators and that its plans and reports, by implication, comply with GPRA standards. Under GPRA, a performance indicator means a particular value or characteristic used to measure output or outcome.
A narrative description of a report’s findings cannot be used for measurement purposes. See comment 19 as well. 19. The assertion that the Commission can use nonquantifiable measures in its reports is misleading. GPRA allows the Director of OMB to authorize the use of alternative, nonquantifiable performance goals for annual performance plans if necessary. However, Commission officials explicitly told us that the agency did not apply for or receive authorization from OMB to submit goals in its annual performance plan in an alternative, nonquantifiable format. Agencies that are authorized to use alternative formats must comply with certain other requirements, which the Commission has not done. 20. The Commission has not received an exemption from GPRA reporting requirements. Although agencies with annual outlays of $20 million or less are eligible to apply to OMB for an exemption, as the Commission notes, Commission officials told us that the agency has neither applied for nor received such an exemption. 21. Although the Commission has filed annual performance plans and annual reports each year, as required under GPRA, it has not revised and updated its strategic plan, which is also required under GPRA. Furthermore, we cannot agree that the Commission’s plans and reports comply with “material requirements” of GPRA because of the numerous shortcomings in these products, as described in our report. 22. As noted in our report, according to OMB officials, OMB conducts primarily budgetary reviews and does not provide agencies that have small budgets and staff, such as the Commission, with the same level of scrutiny that it provides to larger agencies. OMB officials further told us that OMB does not approve or reject agencies’ GPRA plans and reports, but provides comments as appropriate. Because of OMB’s focus on budgetary reviews and on larger agencies, the absence of criticism from OMB does not necessarily constitute approval of an agency’s GPRA plans and reports. 23.
Contrary to the Commission’s assertion, GPRA does require agencies to update and revise their strategic plans at least every 3 years. The Commission has not updated and revised its strategic plan since 1997, although it should have done so in fiscal year 2000 and again in fiscal year 2003. The Commission further asserts that its 1997 plan does not need updating or revision because its authorizing statute has not changed in the interim. This assumption is incorrect and demonstrates a misunderstanding of GPRA’s purposes and requirements. As noted in our report, strategic planning is not a static or occasional event. If done well, it is dynamic, continuous, and results-oriented, and it provides the foundation for everything the organization does. Of the 16 recommendations that the Office of Personnel Management (OPM) made to the Commission in 1999, we judgmentally selected 6 recommendations that had broader, more systemic implications for the agency. We did not analyze the Commission’s response to the 10 remaining recommendations. OPM Recommendation: Include human resources goals, measures, and indicators in the Commission’s Strategic Plan and involve Commission staff in the human resource planning and measurement process. The Commission has not addressed this recommendation. Because the Commission has not updated its strategic plan, it has not included additional human capital goals and assessment measures. In addition, although the Commission issued a Human Resources Plan in fiscal year 2000 that contains five human capital performance goals, the plan does not link these goals to its overall strategic goals, set forth a time frame for achieving them, or describe how it will assess its progress. The plan also does not describe how Commission staff will participate in human resource planning and evaluation, as OPM recommended.
OPM Recommendation: Use OPM’s Human Resource Management Accountability System Development Guide as a framework for creating an accountability system that will ensure that the Commission’s employees are used efficiently and effectively and that personnel actions are taken in accordance with Merit System Principles in support of agency mission accomplishment. Although Commission officials reported that they have developed and implemented an accountability system, we found little evidence to support this claim. OPM recommended that the Commission use its Human Resource Management Accountability System Development Guide as a framework for creating an accountability system. The Commission’s fiscal year 2000 annual performance report noted that its managers were provided copies of the Accountability Guide for review and that the Commission planned to adopt or modify some of its procedures and recommendations. According to Commission officials, they used the Accountability Guide to develop a system similar to the one OPM outlines in its guide. They also told us that Commission managers were presented with a copy of the Accountability Guide and that their employees are aware of the system. According to the Commission’s Human Resources Manager, the accountability system the agency developed in response to OPM’s recommendation is in the Commission’s Administrative Instructions Manual and its fiscal year 2000 Human Resources Plan. The Commission has taken various actions to improve its human resources management since OPM’s 1999 review, such as conducting an employee survey in fiscal year 2000 and developing an employee handbook. Although the Commission has also updated several key sections of its administrative manual, most of the manual was published in April 1999, before OPM issued its report.
Furthermore, the Commission’s most recent annual performance plan does not refer to a human capital accountability system, nor does it detail human capital goals or baselines to use in evaluating such goals. OPM Recommendation: Delegate human resources management authorities to managers in all program areas. Hold managers accountable for exercising the delegations through the Human Resources Management Accountability System. The Commission has not implemented this recommendation. Overall, the Staff Director’s authority for most human resources decisions remains essentially the same as described in OPM’s 1999 report. According to Commission officials, managers can recommend employees for hire, promotion, and awards and conduct annual and mid-year reviews of their staff. However, the Staff Director must approve all hiring and promotion decisions as well as managers’ evaluations of employees before appraisals are given to employees. OPM Recommendation: Develop a system for periodically collecting employee feedback regarding human resources services and policies. Incorporate that feedback in the Human Resources Management Accountability System. The Commission has not implemented this recommendation. To date, the Commission has not developed a formal system to regularly collect employee feedback about its human capital services and policies, even though a similar recommendation to obtain customer feedback and track customer views was also made in OPM’s earlier 1996 review. In fiscal year 2000, the Commission administered a staff survey on human resources and other Commission issues. According to officials, the Commission plans to administer another staff survey in the fall of 2004. However, the Commission has not developed plans to survey staff on a regular basis. In addition, since the Commission was unable to locate the results of its 2000 survey, its managers cannot use earlier human capital findings to systematically set goals and make improvements.
According to OPM officials, OPM will conduct a Web-based Human Capital Survey of Commission staff beginning in September or October of 2004. OPM Recommendation: Require that all managers make progress reviews and performance appraisals in a timely manner when the Human Resources Division notifies them they are due, and require that the Staff Director review appraisals when they are made without delay. The Commission has implemented this recommendation, which was also made in OPM’s 1996 review. According to the Commission’s Human Resources Director, the agency is on schedule for its fiscal year 2004 performance appraisals. Commission guidance on the 2004 performance appraisal cycle requires Commission supervisors and managers to conduct annual and mid-year performance reviews of their staff. For non-Senior Executive Service employees, the process is outlined in a memorandum that the Human Resources Director sends annually to Commission supervisors and managers. OPM Recommendation: With employee involvement, consider developing a new performance management system linked to organizational and agency goals established under the Commission’s Strategic Plan. The Commission has not implemented this recommendation. The Commission’s performance management system is described in its Administrative Instructions Manual, most of which was issued in April 1999—6 months before OPM issued the recommendations in its October 1999 report. The Administrative Instructions do not clearly require that employees’ performance plans link individual staff goals to broader strategic goals. The parts of the manual that set forth the Commission’s policies and procedures on appraisals make no reference to the Commission’s strategic plan, nor do they specify how to link individual staff goals to the Commission’s strategic goals or how to involve employees in this process.
GAO Recommendation: Monitor the adequacy and timeliness of project cost information that the Staff Director provides to Commissioners and make the necessary adjustments, which could include providing information on a monthly, rather than a quarterly, basis and as necessary. The Commission has not implemented this recommendation. In our 2003 review, we found that the Commission’s procedures did not provide for the Commissioners to systematically receive project cost information—a key element of good project management. As a result, the Commissioners approved the majority of projects and products each year without having any specific information on how much the project would cost, or how much similar projects have cost in past years. In the Commission’s June 2004 letter responding to our 2003 recommendations, the Staff Director stated that this recommendation spoke to “Commission policy on the proper level and mode of interaction between the Commissioners and staff … the Commissioners have reaffirmed on numerous occasions the current policy regarding interaction with staff.” He added that the Commission “is continuing to monitor the adequacy and timeliness of project cost information provided to Commissioners.” According to the Staff Director, his office provides the Commissioners with cost information for each project and office on a quarterly basis, and they began doing so during the last quarter of 2003. However, the cost report for the second quarter of fiscal year 2004, ending March 31, was not sent to the Commissioners until June 30, 2004, and was sent in response to requests from the Commissioners for this information. It is also not clear that the Commission is monitoring the adequacy and timeliness of project cost information, as recommended. For example, the quarterly report for the second quarter of 2004 cites costs for only 8 of the 12 projects outlined in the Commission’s fiscal year 2004 performance plan. 
GAO Recommendation: Adopt procedures that provide for increased Commissioner involvement in project implementation and report preparation. The Staff Director does not agree with this recommendation and has not implemented it. In our 2003 review, we found that Commissioners have limited involvement in the management of projects once they have been approved. As a result, we recommended that the Commission adopt procedures for increasing Commissioner involvement after project implementation by providing them with project updates and allowing them to review the product at various stages in the drafting process, so that they participate more actively in shaping products released to the public. The Staff Director did not agree with this recommendation and told us that he believes that the current procedures that govern Commissioner involvement in the development of products are appropriate and efficient. In his June 2004 letter responding to our recommendations, the Staff Director wrote that the responsibility for determining policy on Commissioners’ interaction with the staff is “delegated by statute to the Commissioners.” According to the Staff Director, the Commissioners requested that he assess the situation and issue recommendations on their involvement in report preparation. The Staff Director said that involving the Commissioners in the writing stage would “bog down” the process and that it would be difficult to incorporate the viewpoints of the eight Commissioners. To date, the Commission has not adopted any procedures to increase Commissioner involvement in the report preparation stage. GAO Recommendation: Establish greater controls over contracting activities in order to comply with the Federal Acquisition Regulation. Although the Staff Director disagreed with this recommendation, the Commission took one step towards establishing greater controls by contracting with a contracts and procurement specialist to supplement its operations. 
In 2003, we reported that the Commission lacked sufficient management controls over its contracting procedures. We found that, in fiscal year 2002, the Commission had not followed proper federal procedures in awarding most of its 11 contracts. Moreover, we found that the Commission failed to follow procedures that would allow it to track vendors’ performance against objective measures and ensure that public funds are being used effectively. While the Staff Director disagreed in his June 2004 response letter with the need for the actions associated with this recommendation, he later told us that the Commission “could be stronger” in the area of procurement. Since our 2003 report was issued, the Commission has supplemented its contracts and procurements operations by contracting with a contracts and procurements specialist with over 30 years of experience in government contracting. According to Commission officials, this specialist began providing services to the Commission in December 2003 and generally addresses complex procurement issues. GAO Recommendation: Take steps immediately in order to meet the financial statement preparation and audit requirements of the Accountability of Tax Dollars Act of 2002 for fiscal year 2004. The Commission has not implemented this recommendation. In 2003, we found that the Commission’s fiscal activities had not been independently audited in at least 12 years. We concluded that the Commission’s limited financial management controls and lack of external oversight make the Commission vulnerable to resource losses due to waste, mismanagement, or abuse. Although the Commission reported in its June 2004 response that it was working with its accounting vendor to ensure that it would meet these requirements, as of August 2004 the Commission had not taken the necessary steps, such as hiring an independent auditor, to ensure that it will meet the requirements of the Accountability of Tax Dollars Act this year. Friendly M. VangJohnson and Caroline Sallee made significant contributions to this report. In addition, Richard P. Burkard, Elizabeth H. Curda, Julian P. Klazkin, Benjamin T. Licht, Corinna Nicolaou, and Michael R. Volpe provided key technical and legal assistance throughout the engagement.

The Chairmen of the Senate and House Committees on the Judiciary asked GAO to determine (1) the extent of the U.S. Commission on Civil Rights’ compliance with the requirements of the Government Performance and Results Act (GPRA) of 1993, (2) what federal oversight is provided to the Commission, and (3) the status of the implementation of recommendations from GAO’s past reviews of the Commission. The U.S. Commission on Civil Rights—an independent federal agency that monitors and reports on the status of civil rights in the United States—has not fully complied with the requirements of GPRA. Under this act, agencies are required to submit strategic plans and annual performance plans that detail their long-term and annual goals as well as information on how they plan to meet these goals. GPRA also requires agencies to submit annual performance reports that provide information on their progress in meeting the goals. However, the Commission has not updated or revised its strategic plan since 1997. Without revisiting its strategic goals, the Commission lacks a firm basis on which to develop its annual goals and evaluate its performance. In addition, its most recent annual performance plan and annual performance report contain weaknesses that limit the agency’s ability to effectively manage its operations and communicate its performance. For example, the performance plan does not discuss the Commission’s strategies or resources for achieving its goals, does not provide budgetary information for its programs, and does not provide performance indicators for some annual goals.
Similarly, the performance report does not account for the Commission's performance for many of the annual goals set forth in its performance plan and does not provide plans, schedules, or recommendations for addressing each of the Commission's unmet goals. The Office of Management and Budget (OMB) and the Office of Personnel Management (OPM) have provided oversight for the Commission's budgetary and human capital operations in recent years. OMB's oversight has focused on the Commission's budget requests and GPRA plans and reports. OPM conducted two reviews of the Commission's human capital management systems in the 1990s and made recommendations for improvement, including improvements to its grievance and performance appraisal systems. Although the Commission has implemented some of OPM's earlier recommendations, it has not implemented five of six broader, systemic recommendations made in 1999 for improvement to its human capital management systems. Unlike many other executive agencies, the Commission does not have an Inspector General to provide oversight of its operations beyond OMB and OPM. GAO has conducted several reviews of the Commission's management operations in recent years. The Commission took some actions in response to the recommendations in GAO's 1994 and 1997 reports. However, the Commission has not implemented three of the four recommendations in GAO's October 2003 report for improving the agency's management and procurement practices.
In part to improve the information available on and the management of DOD’s acquisition of services, in fiscal year 2002 Congress enacted section 2330a of title 10 of the U.S. Code, which required the Secretary of Defense to establish a data collection system to provide management information on each purchase of services by a military department or defense agency. The information DOD is to collect includes, among other things, the services purchased, the total dollar amount of the purchase, the form of contracting action used to make the purchase, and the extent of competition provided in making the purchase. In 2008, Congress amended section 2330a to add a requirement for the Secretary of Defense to submit an annual inventory of the activities performed pursuant to contracts for services on behalf of DOD during the preceding fiscal year. The inventory is to include a number of specific data elements for each identified activity, including: the functions and missions performed by the contractor; the contracting organization, the component of DOD administering the contract, and the organization whose requirements are being met through contractor performance of the function; the funding source for the contract by appropriation and operating agency; the fiscal year the activity first appeared on an inventory; the number of contractor employees (expressed as full-time equivalents (FTEs)) for direct labor, using direct labor hours and associated cost data collected from contractors; a determination of whether the contract pursuant to which the activity is performed is a personal services contract; and a summary of the information required by section 2330a(a) of title 10 of the U.S. Code. Within DOD, AT&L, P&R, and the Comptroller have shared responsibility for issuing guidance for compiling and reviewing the inventory.
P&R compiles the inventories prepared by the components, and AT&L formally submits a consolidated DOD inventory to Congress no later than June 30 of each fiscal year, though some inventory submissions have been later. DOD has submitted annual, department-wide inventories for fiscal years 2008 through 2013, the most recent submitted on July 2, 2014 (see table 1). Since DOD implemented the department-wide inventory of contracted services, the primary source used by DOD components to compile their inventories, with the exception of the Army, has been the Federal Procurement Data System-Next Generation (FPDS-NG). As we have previously reported, FPDS-NG—the government’s central repository for contracting data—has several limitations that impact its utility for purposes of compiling a complete and accurate inventory. For example, FPDS-NG does not capture the number of contractor FTEs or direct labor hours used to perform each service, does not capture any services performed under contracts that are predominantly for supplies, and does not identify more than one type of service purchased for each contract action. As the inventory is required to identify each activity performed pursuant to a contract for services, the use of FPDS-NG as the basis for the data in the DOD components’ inventories does not satisfy the inventory statute and limits the usefulness of inventory data in making management decisions. As we previously reported, to obtain better visibility of its service contractor workforce, the Army developed its Contractor Manpower Reporting Application (CMRA) in 2005 to collect information on labor-hour expenditures by function, funding source, and mission supported on contracted efforts, and has used CMRA as the basis for its inventory.
CMRA captures data directly reported by contractors on services performed at the contract line item level, including information on the direct labor dollars, direct labor hours, total invoiced dollars, the functions performed, and the organizational unit on whose behalf the services are being performed. In instances where contractors are providing different services under the same task order, or are providing services at multiple locations, contractors can enter additional records in CMRA to capture information associated with each type of service or location. It also allows for the identification of services provided under contracts for goods. Within 90 days after an inventory is submitted to Congress, section 2330a(e) of title 10 of the U.S. Code requires the secretaries of the military departments or heads of the defense agencies to complete a review of the contracts and functions in the inventory for which they are responsible. P&R, as supported by the Comptroller, is responsible for, among other things, developing guidance for the conduct and completion of this review. As part of this review, the military departments and defense agencies are to ensure that: any personal services contracts on the inventory were properly entered into and performed appropriately; the activities on the list do not include any inherently governmental functions; and to the maximum extent practicable, the activities on the inventory do not include any functions closely associated with inherently governmental functions. Section 2330a(e) also requires, as part of this review, that the secretaries of the military departments and heads of defense agencies identify work that should be considered for conversion to government performance, or insourced, pursuant to section 2463 of title 10 of the U.S. Code, or to a more advantageous acquisition approach.
Section 2463 specifically requires the Secretary of Defense to make use of the inventory to identify critical functions, acquisition workforce functions, and functions closely associated with inherently governmental functions performed by contractors, and that the Under Secretary of Defense for P&R implement guidelines and procedures to give special consideration to converting those functions to DOD civilian performance. Further, section 808 of the National Defense Authorization Act for Fiscal Year 2012 requires the Secretary of Defense to issue guidance to the military departments and the defense agencies to, among other things, eliminate contractor positions identified as performing inherently governmental functions and reduce, by 10 percent, funding for contractor staff performing functions closely associated with inherently governmental functions in fiscal years 2012 and 2013. As implemented by DOD, the secretaries of the military departments and heads of the defense agencies are instructed to use the fiscal year 2010 inventory, or the fiscal year 2011 inventory if the data are unknown in the 2010 inventory, as the baseline against which the 10 percent funding reductions will be made. GAO has ongoing work to assess DOD’s compliance with section 808. In addition, in December 2011 section 2330a of title 10 of the U.S. Code was amended to add a new subsection (f) requiring the secretaries of the military departments and heads of the defense agencies responsible for contracted services in the inventory to develop a plan, including an enforcement mechanism and approval process, to use the inventory to inform management decisions (see figure 1).
Collectively, these statutory requirements mandate the use of the inventory and the associated review process to enhance the ability of DOD to identify and track services provided by contractors, achieve accountability for the contractor sector of DOD's total workforce, help identify contracted services for possible conversion from contractor performance to DOD civilian performance, support DOD's determination of the appropriate workforce mix, and project and justify the number of contractor FTEs included in DOD's annual budget justification materials.

We have issued several reports on DOD's efforts to compile and review its inventory of contracted services, including initiatives to standardize contractor manpower data collection across the department. For example, in January 2011 we recommended that DOD develop a plan of action to facilitate the department's stated intent to collect contractor manpower data and address other limitations in its approach to meeting inventory requirements, such as using FPDS-NG to compile the required inventories. In April 2012, we reported that DOD issued a plan in November 2011 to develop a common technology solution, leveraging existing data collection approaches, such as the Army's CMRA system, that would allow the department to collectively meet the inventory requirements. DOD's November 2011 plan provided for short-term and long-term actions intended to meet the requirements of 10 U.S.C. § 2330a. DOD stated that it was committed to assisting components as they implement their plans, especially those currently without reporting processes or infrastructure in place, by leveraging the Army's CMRA system, processes, best practices, and tools to the maximum extent possible.
Part of the long-term plan was to develop a comprehensive instruction for components to use on the development, review, and use of the inventories and for the Office of the Deputy Chief Management Officer, P&R, and other stakeholders to form a working group to develop and implement a common data system to collect and house the information required for the inventory, including contractor manpower data. DOD noted in its plan that it expected the data system to be operational and DOD components to be reporting on most of their service contracts by fiscal year 2016. While we found the plan represented a step in the right direction, it did not contain timeframes or resources needed, as we had previously recommended. Further, we found that DOD faced challenges in developing a common data collection system given the different requirements of the military departments and the remaining defense agencies.

In May 2013, we reported that a November 2012 memorandum stated that DOD would establish a common data collection system based on the Army's CMRA system—the Enterprise-wide Contractor Manpower Reporting Application (ECMRA)—for DOD components to begin reporting data in time for the department's fiscal year 2013 inventory submission, but did not expect that components would fully use the system for most of their contracts for services until fiscal year 2016. We found that the department had taken steps to implement interim CMRA-based data collection systems for the Air Force and Navy. At that time, DOD noted that it expected to field an interim CMRA-based data collection system that would be shared by the remaining defense agencies, which DOD subsequently fielded in September 2013. In May 2014, however, we found that while the department took interim steps, it had not fully implemented the common data collection system as called for in its November 2011 plan.
We found that DOD components' 2012 inventory review certifications better addressed DOD's required reporting elements than in prior years, but the department continues to face challenges in assuring that all DOD components conduct and report on the required reviews. For example, as of September 2014, 32 of the 33 components certified that they had reviewed their inventory of contracted services and, overall, more components addressed more of DOD's required reporting elements than in fiscal year 2011. However, the Air Force, which accounts for about 20 percent of DOD's obligations for contracted services, did not submit a certification letter, as required. Further, the Army, which represents 30 percent of DOD's obligations for contracted services, submitted a certification though its review was incomplete at the time the Secretary of the Army signed the letter. This occurred, in part, because the Army's review did not include functions in one command or functions that were transferred between two commands. DOD also continues to face challenges in fully implementing the CMRA-based common data system that is intended to collect the required data for the inventories DOD components must review, thus jeopardizing its plan to have all components using this system to collect manpower data reported by contractors by 2016. DOD recently directed a study to identify and develop other enterprise solutions to address the inventory data collection requirements by December 2014. It is uncertain whether DOD will continue to implement its previous plan for a DOD-wide system based on the Army's CMRA system.

DOD's February 2013 guidance for the fiscal year 2012 inventory, issued jointly by AT&L and P&R, included two changes that DOD officials believed would improve the completeness and granularity of the inventory review data reported in the certification letters from the prior year.
The changes included an increase in the percentage of contract functions to review from the components' inventories from 50 percent to 80 percent and one additional data element—the review results table. The guidance instructed components to include at a minimum seven elements in their certification letters (see table 2). Overall, we found DOD components generally addressed more of the required elements in their fiscal year 2012 certification letters than they had in fiscal year 2011 (see table 3). Our analysis found that seven components, representing about 15 percent of the total dollars included in the department's inventory of contracted services, addressed all seven elements required, and nearly 80 percent of the components addressed at least five of the seven elements. We found that about one-third of the components did not address the required element to identify the appropriate manpower mix and about half did not address the required element to identify actions taken to ensure appropriate reallocation of resources based on the reviews. While this represents an improvement over the inventory review results reported in the fiscal year 2011 certification letters, we also found several significant limitations with the fiscal year 2012 inventory results reported in the certification letters. For example, the Air Force, which represented about 20 percent of DOD's contract obligations for services in fiscal year 2012, did not submit a certification letter. Air Force officials stated that they focused on completing the fiscal year 2013 inventory review rather than submitting the required fiscal year 2012 certification letter. The Army, which accounted for about 30 percent of DOD's contract obligations reported in DOD's fiscal year 2012 inventory for contracted services, certified in its April 2014 letter that it reviewed more than 80 percent of contracted functions from its inventory, which an Army official told us was based on contract invoice amount.
However, the data supporting the certification letter revealed issues at three Army commands that comprise more than a quarter—or $23.4 billion—of the Army's reported invoiced dollars, indicating that the Army may have overstated the contract functions reviewed. For example, the certification letter did not include review data from the Army Acquisition Support Command because its review was not complete when the Army submitted its certification letter. This command represented $10.7 billion, or 14 percent, of the Army's total invoiced dollars for contracted services reported in fiscal year 2012. Further, the Army Installation Management Command, which accounted for 30 percent of the Army's reported contractor FTEs performing closely associated with inherently governmental functions in fiscal year 2011, transferred responsibility for some of these functions to the Army Materiel Command. However, officials at the Army Materiel Command reported that they did not include these transferred functions in the fiscal year 2012 review, stating that there were too many new contracts to review in one year. The officials added that for this and other reasons, the command's review data for fiscal year 2012 are not complete or accurate.

While 21 of the 32 components certified that they reviewed at least 80 percent of contract functions, we found that components interpreted contract functions differently. DOD's February 2013 guidance for the fiscal year 2012 inventory required components to review at least 80 percent of the functions associated with all contracts, task orders, delivery orders, or interagency acquisition agreements listed in the inventory. The guidance further states that priority shall be given to contracts previously not reviewed or those that may present a higher risk of inappropriate performance.
However, the guidance does not specify how to determine the percentage of contract functions or identify what types of contracts may be at a higher risk of inappropriate performance. As such, some components specified that they reviewed a percentage of product service codes, contract obligations, individual contracts, or contractor FTEs identified in their inventories. In other cases, the letters did not indicate what the component considered a contract function. Certification letters also varied in terms of the information and insights provided on the methodologies used to review the selected contract functions in the inventories, and it was not clear from the letters whether all components considered contractor performance and contract administration when reviewing selected contracts. DOD's February 2013 guidance for the fiscal year 2012 inventory requires components to consider the nature of contract performance and administration, but it does not define what processes may be used to review functions to determine the types of activities performed. For example, one component's review compared inventory data to basic information in the component's contract writing system and relied on the component's acquisition planning process to determine how the contract was performed. At another component, the review included coordination with program managers, contracting officers, contracting officer representatives, and budget officials and consideration of a range of data and documents to identify and understand whether the work performed under the contract included inappropriate functions. One component we interviewed, the Army, provided additional details in its guidance on which officials should participate in the review and how to assess contract activities. The Army requires a checklist at various points in the contract cycle, including contract award and modification, to identify activities performed under the contract and help inform the inventory review process.
The Army requires the reviewer to be a person in the requiring activity who is familiar with how the contract is administered and performed and thoroughly understands the work being performed. Some officials we interviewed expressed confusion over the various functional categories in the review results table and noted that the distinction between inherently governmental, closely associated with inherently governmental, and personal services functions is not always clearly understood. Officials we interviewed at four of the six components provided supplemental inventory review guidance including definitions for some of the functions to help define inappropriate performance or contract activities that require additional management attention. The lack of specific guidance on how to identify or review contract functions may have led to components understating the degree to which contractors were providing services closely associated with inherently governmental functions.

Fifteen of the 32 components identified contractors performing closely associated with inherently governmental functions in the fiscal year 2012 inventory and provided more specific information on contractor FTEs performing these functions than in fiscal year 2011. For example, eight components reported that they had contracts containing these functions in their fiscal year 2011 certification letters without providing specific information on the number of contractor FTEs. In their fiscal year 2012 certification letters, however, these eight components identified the specific number of contractor FTEs and obligations associated with these functions. Another 15 components certified they did not have contractors performing closely associated with inherently governmental functions, while two components' certification letters did not address closely associated with inherently governmental functions. Appendix II provides details on components that identified contractors performing inherently governmental functions.
In May 2013, we found it was difficult to determine how many contractors were performing closely associated with inherently governmental functions based on components' reported methodologies and inventory review results. Further, we found that DOD components may not have accurately identified the extent to which their contractors are performing closely associated with inherently governmental functions during their reviews. Based on our latest review of the fiscal year 2012 inventory review results, it is still not clear whether DOD components fully identified the extent to which their contractors are performing closely associated with inherently governmental functions. Our latest review, similar to our prior work, found that DOD contracts for significant amounts of professional services and administrative and management support services. A significant portion of these contracts was for services that are likely to be closely associated with inherently governmental functions. We identified total obligations for categories of contracted services that often include services closely associated with inherently governmental functions based on DOD components' fiscal year 2012 inventory submission data. We compared those totals to the total obligations certified by DOD components as being for contractors performing closely associated with inherently governmental functions in the fiscal year 2012 certification letters. This analysis found significant gaps between these two categories, suggesting that some of the inventory review processes or methodologies may not be sufficient to accurately identify closely associated with inherently governmental functions. In contrast to the Navy and other defense agencies, even the Army's incomplete review identified that nearly half of its total obligations in these categories were for the performance of closely associated with inherently governmental functions (see figure 2).
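The comparison described above reduces to a simple per-component computation: total obligations in service categories that often include closely associated functions, against obligations actually certified as such. The sketch below illustrates the idea only; the component names and dollar figures are notional assumptions, not the report's data.

```python
# Illustrative sketch of the gap analysis described in the text. The inputs
# (component names and dollar amounts, in billions) are notional assumptions.
def certification_gap(category_obligations, certified_caig):
    """For each component, compute the gap between obligations in categories
    likely to include closely associated with inherently governmental (CAIG)
    functions and the obligations certified as CAIG in the review."""
    results = {}
    for component, total in category_obligations.items():
        certified = certified_caig.get(component, 0.0)
        results[component] = {
            "gap": total - certified,
            "certified_share": (certified / total) if total else 0.0,
        }
    return results

# Notional figures for illustration only (billions of dollars).
category_obligations = {"Component A": 20.0, "Component B": 15.0}
certified_caig = {"Component A": 9.5, "Component B": 0.8}

for name, r in certification_gap(category_obligations, certified_caig).items():
    print(f"{name}: gap of ${r['gap']:.1f}B; {r['certified_share']:.0%} certified as CAIG")
```

A large gap does not prove under-reporting, since not every contract in these categories is closely associated with inherently governmental work, but it flags components whose review methodologies warrant a closer look.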
With regard to identifying contractor FTEs that may be performing inherently governmental functions or unauthorized personal services, only the Army identified such contractors during its fiscal year 2012 inventory review. The Army reported that it had identified 62 contractor FTEs performing inherently governmental functions and five contractor FTEs providing unauthorized personal services, both figures representing significant declines since fiscal year 2011. As noted previously, however, we found the Army's review of its fiscal year 2012 inventory may not have included the minimum requirement of 80 percent of contract functions. While the Air Force did not submit a certification letter in fiscal year 2012, Air Force officials told us that they had incorrectly identified contractors performing inherently governmental functions and unauthorized personal services in their 2011 inventory review. However, neither the Army nor the Air Force provided information as to how they resolved the prior instances in which they found that contractors were performing such functions. We previously recommended in May 2013 that the Secretary of Defense instruct components to provide updated information in certification letters on how they resolved the instances of contractors performing inherently governmental functions or unauthorized personal services identified in prior inventory reviews. DOD partially concurred with this recommendation, stating that DOD would focus on the fiscal year 2012 reporting requirements and that any instances of contractors performing inherently governmental functions or unauthorized personal services that persist from prior inventory reviews would be included and fully documented in the fiscal year 2012 and future inventory review processes.
Further, DOD said it would verify that the certification letters contain a complete and accurate description of actions taken to resolve outstanding issues related to contractors performing these functions prior to closing the review process. Federal internal control standards call for managers to measure and assess performance over time to ensure effectiveness and efficiency of operations and compliance with applicable laws and regulations. Without accurate identification of the functions contractors are performing, DOD cannot be assured that proper oversight is in place, nor can it provide data to ensure that it is meeting statutory requirements to reduce, to the maximum extent practicable, the number of contractors performing closely associated with inherently governmental functions or to assure that contractors are not performing inherently governmental functions.

DOD issued guidance applicable to the components' fiscal year 2013 inventories in March 2014. To assist components' reviews, this guidance provides definitions for inherently governmental, closely associated with inherently governmental, personal services, and critical functions, consistent with the FAR and Office of Federal Procurement Policy (OFPP) guidance. DOD's guidance, however, does not specify the percentage of contract functions to be reviewed or identify the basis for determining that percentage. P&R officials indicated that the intent is for components to review all contract functions; however, some component officials we interviewed said this was not clear to them. The March 2014 guidance also indicates that components should review the nature or way the contract is performed and administered as well as the organizational environment within which it is operating beyond what can be accessed via a review of the information listed within the inventory, but does not provide specific approaches to do so.
A key factor to facilitate the components' review of the functions that contractors perform is the availability of accurate and reliable data. As we reported in May 2014, DOD officials noted that the lack of dedicated resources has been a key factor hindering implementation of its planned ECMRA, the common data system based on the Army's CMRA system. To address this factor and support the data collection efforts, DOD provided funding for six civilian FTEs for the Defense Human Resources Agency, starting in fiscal year 2015. We reported in May 2014 that DOD officials had anticipated these staff would comprise a new support office to coordinate DOD's efforts to define business processes for compiling, reviewing, and using the inventory. This effort has encountered a number of challenges, which officials noted may jeopardize the department's goal to fully implement ECMRA by fiscal year 2016. For example, the effort lacks a formal agreement, including roles and responsibilities, between the Assistant Secretary of Defense for Readiness and Force Management and the Defense Human Resources Agency, the parties responsible for implementing ECMRA and related business processes. As we concluded in our May 2014 report, DOD did not have a comprehensive plan with timeframes and milestones to measure its progress toward developing a common contractor manpower data system and associated business processes.

More recently, a September 17, 2014, memorandum from the Acting Assistant Secretary of Defense for Readiness and Force Management appointed a Strategic Review and Planning Officer as the official responsible for identifying and developing enterprise solutions related to the inventory data collection requirements prescribed by title 10, U.S. Code, section 2330a. The official is authorized to identify, develop, and consider all reasonable options, over both the short and long term, and propose courses of action to P&R by December 1, 2014.
Once a course of action is approved, the memorandum directs the official to develop a detailed implementation plan, but does not provide timeframes for completion. A P&R official told us that, until a decision has been made whether to pursue a new approach or continue forward with implementation of ECMRA, DOD will defer using the additional resources allocated for the Defense Human Resources Agency. This review raises the question of whether DOD will continue to implement a DOD-wide inventory data collection system modeled after the Army's CMRA system or attempt to develop a new system. Until DOD components are able to collect the required data for their inventories, the utility of their inventory reviews for making workforce mix decisions will be hindered.

The military departments have not developed plans or enforcement mechanisms to use the inventory of contracted services to inform strategic workforce planning, workforce mix, and budget decision-making processes, as statutorily required. Despite the lack of specific plans, the military departments have taken some initial steps to use the inventory to inform management decisions such as insourcing and estimating FTEs for budgetary purposes. Disparate offices are responsible for the various decision-making processes at the military departments, and the secretaries of the military departments have not assigned specific responsibility for coordinating among these offices to develop plans to use the inventory to inform such decisions. In part, the absence of accountable officials to integrate the use of the inventory leaves the department at continued risk of not complying with the applicable legislative requirements to use the inventory to support management decisions.

P&R has overall responsibility for developing and implementing DOD's strategic workforce plan to shape and improve DOD's civilian workforce, including an assessment of the appropriate total force mix.
P&R issued guidance that designated responsibility for the development of the strategic workforce plan to the Deputy Assistant Secretary of Defense for Civilian Personnel Policy, but did not require use of the inventory. This guidance predates the statutory requirement to use the inventory to inform strategic workforce planning. For example, the Fiscal Years 2013-2018 Strategic Workforce Plan, the most recent plan available at the time of our review, states that DOD's plans for identifying and assessing workforce mix will leverage the inventory of contracted services, but does not provide any additional details on using the inventory. None of the three military departments has developed a statutorily required plan or enforcement mechanism to use the inventory of contracted services, and they generally have not developed guidance or processes for these purposes (see table 4).

DOD has two department-wide policies for determining workforce mix—DOD Directive 1100.4 and DOD Instruction 1100.22—but neither currently requires the use of the inventory to inform workforce mix planning. DOD Directive 1100.4, dated February 2005, provides general guidance concerning determination of manpower requirements, managing resources, and manpower affordability. According to P&R officials, this directive, which is currently under review, will be revised to explicitly require use of the inventory to inform budgeting and total force management decisions. DOD Instruction 1100.22, dated April 2010, provides manpower mix criteria and guidance for determining how individual positions should be designated based on the work performed. This instruction does not direct the military departments to develop a plan to use the inventory to inform management decisions, as DOD issued it before the enactment of the requirement for developing such plans.

DOD's primary insourcing guidance is reflected in April 4, 2008, and May 28, 2009, memoranda.
These memoranda reiterate statutory requirements by calling for DOD components and the military departments to use the inventory of contracted services to identify functions for possible insourcing and to develop a plan for converting these functions within a reasonable amount of time. Among the military departments, however, only the Army has guidance and a process that requires use of the inventory of contracted services for insourcing. In addition, the military departments have not issued guidance for managing workforce mix that requires the use of the inventory of contracted services (see table 5).

DOD's Financial Management Regulation provides, among other things, guidance to the military departments on budget formulation and presentation; however, the regulation does not require the military departments to use the inventory in formulating and presenting their budgets. At the military department level, the Air Force has issued additional instructions on budget formulation and presentation, but its guidance does not require the use of the inventory. The Comptroller has issued supplemental guidance requiring, among other things, that the military departments and defense components provide information on the number of FTEs as required under 10 U.S.C. § 235, but this guidance does not require reporting the amount of funding requested for contracted services. The Comptroller guidance for budget submissions from all components has remained similar for the past three fiscal years, instructing DOD components to ensure that contractor FTEs reported in the budget exhibit are consistent with those in DOD's inventory of contracted services. Both Navy and Air Force officials reported that they used the inventory of contracted services to estimate the number of contractor FTEs for inclusion in their budget requests.
The Army budget office does not have a process to use the inventory to inform budgeting and could not identify how the Army estimated FTEs in the Army's budget submission (see table 6). Within the military departments, various offices are responsible for conducting the compilation and review of the inventory of contracted services, managing workforce mix decisions, and conducting budgeting (see table 7). Based on our analysis, however, no single office or individual is responsible for leading or coordinating efforts between the various functional areas to develop a plan and enforcement mechanism to use the inventory to inform these processes. During interviews, officials at each of the military departments were uncertain who was responsible for developing a plan and enforcement mechanisms to use the inventory to inform management decisions. For example, the Assistant Secretary of the Army for Acquisition, Logistics, and Technology indicated that the responsibility for developing the Army's plan fell to the manpower community, but a manpower official stated his office is not explicitly tasked with this responsibility. Internal control standards in the federal government state that management should establish an organizational structure, delegate authority for key roles, and assign responsibility to enable an organization to achieve management objectives and to comply with laws. The absence of clearly defined roles and responsibilities for integrating the use of the inventory in these processes, as appropriate, leaves the department at continued risk of not complying with the applicable legislative requirements to leverage the inventory to support management decisions.

In the six years since DOD submitted its initial department-wide inventory of contracted services and conducted the associated reviews, DOD's progress remains uneven and uncertain.
DOD components addressed more of the reporting elements prescribed by P&R and AT&L guidance, but it is less clear that the fiscal year 2012 reviews were any more informative than prior reviews in certain key areas. The Air Force failed to submit an inventory review certification letter, while the Army submitted incomplete review data for several major commands. In addition, DOD components may not have fully identified all instances of contractors performing closely associated with inherently governmental functions. DOD's fiscal year 2013 guidance does not fully address some of the shortcomings our review identified, including how to identify contracts to review or what methodologies or approaches to use to ensure the components' inventory reviews adequately assess contractor activities. As a result, components may not fully identify instances of contractors providing services that are closely associated with inherently governmental functions. Without a thorough review of contractor activities, DOD risks becoming overly reliant on contractors to support core missions.

Further, it is up to the department to field a system capable of providing accurate and reliable data to support these reviews. Continued delays and uncertainties in implementing its planned ECMRA system hinder achievement of this objective. We had previously recommended that DOD develop a plan of action with timeframes and milestones to measure its progress in implementing a common data system, but DOD has yet to do so. DOD identified the lack of dedicated resources as the primary obstacle to resolving technical issues, including help desk support and establishing common processes across the department. DOD allocated fiscal year 2015 funding to support this effort; however, DOD is delaying committing these resources pending the completion of a review to identify and develop appropriate enterprise solutions, including short- and long-term options, no later than December 1, 2014.
Continued delays in developing an implementation plan increase the risk that DOD will be unable to collect the statutorily required data needed to serve as the basis for DOD's inventory review process. Additionally, the military departments generally do not have plans to use the inventories for strategic workforce planning, workforce mix and insourcing decisions, or budget and programming decisions. The same is generally true for the processes that underlie these decisions, with the exception of the Army's efforts to use the inventory and associated review process to help inform workforce mix and insourcing decisions. One factor contributing to this condition is that multiple offices are responsible for performing tasks within their specific areas of responsibility, but no offices or individuals have been specifically tasked to lead or coordinate efforts to facilitate the use of the inventory within each of the military departments. Internal control standards state that management should establish an organizational structure, delegate authority for key roles, and assign responsibility to enable an organization to achieve management objectives and to comply with laws. Entrusting one or more individuals with the responsibility for carrying out these requirements is likely to produce positive results more quickly than if DOD continues to engage in piecemeal and ad hoc efforts within each functional area. The absence of clearly defined roles and responsibilities for integrating the use of the inventory in these processes, as appropriate, leaves the department at continued risk of not complying with the applicable legislative requirements to leverage the inventory to support management decisions.
To better implement the requirements for reviewing the inventory of contracted services, we recommend that the Under Secretary of Defense for Acquisition, Technology and Logistics and the Under Secretary of Defense for Personnel and Readiness work jointly to revise annual inventory review guidance to clearly identify the basis for selecting contracts to review and to provide approaches the components may use to conduct inventory reviews that ensure the nature of how the contract is being performed is adequately considered. If DOD intends for components to review less than 100 percent of their contracts, then the guidance should clearly identify the basis for selecting which contracted functions should be reviewed. To help facilitate the department's stated intent to develop a common data collection system to fully collect statutorily required data, we recommend that the Under Secretary of Defense for Personnel and Readiness approve a plan of action, with timeframes and milestones, for rolling out and supporting a department-wide data collection system as soon as practicable after December 1, 2014. Should a decision be made to use or develop a system other than the e-CMRA system currently being fielded, we recommend that the Under Secretary of Defense for Personnel and Readiness document the rationale for doing so and ensure that the new approach will provide data that satisfy the statutory requirements for the inventory. To help ensure that the inventory of contracted services is integrated into key management decisions as statutorily required, we recommend that the Secretaries of the Army, Navy, and Air Force identify an accountable official within their departments with responsibility for leading and coordinating efforts across their manpower, budgeting, and acquisition functional communities and, as appropriate, revise guidance, develop plans and enforcement mechanisms, and establish processes. We provided a draft of this report to DOD for comment. 
In its written comments, which are reprinted in appendix III, DOD concurred with our recommendations and described the actions it plans to take, though it did not provide timeframes for completing such actions. DOD also provided technical comments, which we incorporated in the report as appropriate. In response to our recommendation to revise its inventory review guidance to provide more clarity on which contracted functions should be reviewed and to provide approaches the components may use to ensure that the nature of how the contract is being performed is considered, the department noted it is currently enhancing its guidance for the fiscal year 2014 inventory of contracted services and intends to have components review and certify 100 percent of the services reported in their respective inventories. DOD did not address whether its guidance will provide approaches components may use to conduct inventory reviews to ensure contract performance is adequately considered. There may be no single approach appropriate for all DOD components; however, DOD could provide a range of suggested approaches to ensure components accurately identify the functions performed. While we appreciate DOD's actions to address the recommendation, the fact that only seven of the components fully addressed each element contained in AT&L and P&R's previous guidance underscores, in our view, the need for more direct involvement by DOD to ensure compliance. In response to our recommendation to approve a plan of action with timeframes and milestones to help facilitate the development of a common data collection system, DOD noted that all DOD components are using ECMRA to facilitate compilation of their respective inventories of contracted services. DOD agreed that if the department decides to move away from ECMRA, the decision will be fully documented, ensuring satisfaction of statutory requirements. 
In response to our recommendation to identify an accountable official within the military departments to help ensure the inventory of contracted services is integrated into key management decisions and to coordinate efforts across manpower, budgeting, and acquisition communities, DOD agreed. DOD also indicated that a cognizant accountable official should be identified at the remaining defense components (e.g., defense agencies, field activities, and combatant commands). While our work in this area focused on the military departments, we agree that it is important for components across DOD to ensure each organization develops a plan and enforcement mechanisms for using the inventory of contracted services to inform management decisions. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Under Secretary of Defense for Acquisition, Technology, and Logistics, the Under Secretary of Defense for Personnel and Readiness, the Secretaries of the Army, Air Force, and Navy, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or dinapolit@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Section 951(a) of the National Defense Authorization Act for Fiscal Year 2014 directs GAO to report, for fiscal years 2014, 2015, and 2016, on DOD's implementation of title 10 U.S. Code section 2330a, subsections (e) and (f). 
To satisfy the mandate for 2014, we assessed DOD's efforts to (1) implement subsection (e) to review contracts and activities in the inventory of contracted services for the fiscal year 2012 inventory and (2) implement subsection (f) to develop plans and processes to inform how the inventory will be used to facilitate strategic workforce planning, workforce mix, and budget decisions. We used data from the fiscal year 2012 inventory as it was the most recent inventory at the time of our review. To assess the extent to which DOD components—to include the three military departments and the defense agencies—implemented the required review of contracts and activities in the inventory of contracted services pursuant to subsection (e) for the fiscal year 2012 inventory, we examined the guidance related to the fiscal year 2012 inventory review process, which the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics (AT&L) and the Acting Under Secretary of Defense for Personnel and Readiness (P&R) issued on February 4, 2013. The February 2013 guidance requires components to certify completion of the review and report on seven elements, including the contract selection criteria and methodologies used to conduct the reviews; the extent to which contractors were found to be performing certain functions, to include inherently governmental and closely associated with inherently governmental functions; and, to the extent necessary, a plan to realign performance of such functions to government performance. We analyzed all unclassified fiscal year 2012 certification letters submitted by 32 components to P&R as of September 2014 to determine if components reported on all seven required elements. We did not analyze any classified certification letters submitted, such as that by the Defense Intelligence Agency. We also reviewed the Office of Federal Procurement Policy's (OFPP) November 5, 2010, guidance for civilian agencies' service contract inventories. 
This guidance directs agencies to give priority consideration to reviewing certain categories of contracted services that the guidance and GAO's prior work have indicated often include closely associated with inherently governmental functions. The guidance identifies 15 product service codes describing these categories of contracted services. We compared the total amount DOD components reported obligating for closely associated with inherently governmental functions with the total amount they reported obligating for the categories of contracted services identified in OFPP's guidance and two additional product service codes identified in GAO's prior work, examining the dollars obligated and contractor full-time equivalents (FTEs) reported for these product service codes. We did not independently assess the accuracy or reliability of the underlying data supporting the components' inventories of contracted services and associated reviews. Our previous work, however, identified data limitations with those DOD components using data from the Federal Procurement Data System-Next Generation (FPDS-NG) as the basis for their inventories. We discuss these limitations in the report, as appropriate. In performing our work to assess the extent to which DOD implemented subsection (e) to review contracts and activities in the inventory of contracted services for the fiscal year 2012 inventory, we interviewed cognizant officials from AT&L; P&R; the Under Secretary of Defense (Comptroller); the Departments of the Army, Navy, and Air Force; and three defense components: the Defense Logistics Agency (DLA), the Defense Threat Reduction Agency (DTRA), and the Office of the Director of Administration and Management (ODA&M). 
We selected the Army, Navy, and Air Force for additional review because they represented about 74 percent of the obligations reported in the inventory, and we selected DLA and DTRA because they reported having the most instances of contractors performing closely associated with inherently governmental functions, as expressed in contractor FTEs. We selected ODA&M because it reported having the fewest contractor FTEs performing closely associated with inherently governmental functions, despite having high obligations for professional services and program management services, two categories of contracted services known to often include closely associated with inherently governmental functions. To assess the extent to which DOD components have developed plans and processes to use the inventory to inform management decisions pursuant to subsection (f), we reviewed defense-wide and military department-specific strategic planning, manpower mix, and budgeting documentation and interviewed officials responsible for developing and using this guidance. To determine whether DOD guidance informing strategic planning, manpower mix, and budgeting calls for the use of the inventory of contracted services in planning processes, we reviewed DOD's Fiscal Year 2013-2018 Strategic Workforce Plan Report and associated guidance for completing the plan, DOD's instruction on manpower mix criteria, DOD's memoranda guiding the conversion of contracted functions to government functions, and guidance issued by the Comptroller that informed the fiscal year 2013, 2014, and 2015 budget submissions. When applicable, we reviewed workforce mix instructions and budgeting regulations. In addition, we reviewed memoranda, slides, and meeting minutes to determine whether processes that the military departments had underway addressed the requirements of title 10 U.S.C. § 2330a(f). 
In performing our work to assess the extent to which DOD components have developed plans and processes to use the inventory to inform management decisions pursuant to subsection (f), we interviewed cognizant officials from AT&L, P&R, the Comptroller, and the three military departments. We discussed identification of a cognizant official for the plans and processes with acquisition, manpower, programming, and budgeting officials. In addition, we evaluated DOD's progress in implementing a common data system since our most recent report in May 2014. To do so, we reviewed existing documentation including memoranda, planning documents, and guidance for establishing a common data system. We also interviewed officials from P&R to discuss progress toward the common system and associated business processes. We conducted this performance audit from May 2014 to November 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on audit objectives.

Appendix II: Comparison of Components' Identification of Contractors Providing Services Closely Associated With Inherently Governmental Functions in their Fiscal Year 2011 and 2012 Certification Letters

The Navy did not identify the number of FTEs, but noted it has 25 contracts that contained these functions. The agency did not identify the number of FTEs in current contracts, but noted it has contracts that contained these functions. The agency did not identify the number of FTEs, but noted that it had contractors performing these functions. The agency did not identify the number of FTEs, but noted that 4.5 percent of its sample of more than 50 percent of contract actions contained these functions. 
The agency did not identify the number of FTEs, but noted that several contracts contained these functions. The components did not identify the number of FTEs, but reported that 24 out of 950 contracts consolidated from the three components had contractors performing these functions. The commands did not identify the number of FTEs, but noted that "some requirements" contained these functions. As of September 2014, the Air Force had yet to provide a certification letter for the fiscal year 2012 inventory identifying contractor FTEs in either category. Further, the fiscal year 2011 data from the Air Force were based on preliminary estimates, and the Air Force did not provide a final certification letter for fiscal year 2011. The Office of the Secretary of Defense, Director of Administration and Management submitted a consolidated review on behalf of the Office of the Secretary of Defense, Washington Headquarters Service, and the Pentagon Force Protection Agency, but did not specifically identify which components reported contractors performing closely associated with inherently governmental functions. In addition, officials responsible for the consolidated review identified an error in the certification letter and updated it to reflect 14.7 contractor FTEs performing closely associated with inherently governmental functions.

In addition to the contact named above, Penny Berrier, Assistant Director; MacKenzie Cooper; Kate Eberle; Kristine Hassinger; John Krump; Caryn E. Kuebler; Jean McSween; Oziel Trevino; and Candice Wright made key contributions to this report.

DOD is the government's largest purchaser of contractor-provided services. 
In 2008, Congress required DOD to compile and review an annual inventory of its contracted services, to include the number of contractors providing services to DOD and the functions these contractors performed, and in 2011, amended this statute to require DOD to plan to use that inventory to inform certain department-wide decision-making processes. The National Defense Authorization Act for Fiscal Year 2014 mandated GAO to report on the required reviews and plans to use these inventories. For this report, GAO assessed the extent to which DOD components (1) reviewed contracts and activities in the fiscal year 2012 inventory of contracted services and (2) developed plans to use the inventory for decision making. GAO reviewed relevant laws and guidance, analyzed certification letters from 32 components, and interviewed DOD acquisition, manpower, programming, and budgeting officials. The Department of Defense (DOD) continues to face challenges in ensuring that it conducts and reports on the results of its required inventory reviews. As of September 2014, 32 of the 33 components that were required to conduct an inventory review certified that they had done so and generally addressed more of the required reporting elements than in fiscal year 2011. However, GAO found limitations with the inventory review results. For example, the Air Force did not submit a fiscal year 2012 inventory certification letter, and the Army's review was incomplete at the time its Secretary signed the certification. Further, components may not have fully identified all instances in which contractors were providing services that are closely associated with inherently governmental functions, a key review objective to help ensure that DOD is not overly reliant on contractors to support core missions. 
DOD's March 2014 guidance, which is applicable to the fiscal year 2013 inventory, does not fully address some of the shortcomings GAO identified, including how to identify contracts for review or approaches to ensure that components adequately assess contractor activities. As a result, components may not fully identify instances of contractors providing services that are closely associated with inherently governmental functions. A key factor hindering the components' inventory reviews is the lack of accurate and reliable data. DOD has not resolved issues with implementing its planned common data system based on the Army's existing system. Further, in September 2014, DOD initiated a new review, due by December 2014, to identify and develop options to collect these data. This review raises the question of whether DOD will continue to implement a common data system modeled after the Army's system or attempt to develop a new system. DOD continues to lack a plan with timeframes and milestones to measure its progress toward implementing a common data system. These factors jeopardize DOD's goal to have all components, by 2016, collect statutorily required contractor manpower data. Further delays in resolving these issues will undermine the inventory's usefulness. The military departments generally have not developed plans to use the inventory of contracted services to facilitate DOD's strategic workforce planning, workforce mix, and budget decision-making processes, as statutorily required. Numerous offices are responsible for the various decision-making processes at the military departments, and the Secretaries of the military departments have not assigned specific responsibility for coordinating among these offices to do so. The absence of officials who are accountable for integrating the use of the inventory leaves the department at continued risk of not complying with the applicable legislative requirements to use the inventory to support management decisions. 
Internal control standards state that management should assign responsibility to enable an organization to achieve management objectives and to comply with laws. GAO recommends that DOD revise inventory guidance to improve the review of contract functions, approve a plan of action with milestones and timeframes to establish a common data system to collect contractor manpower data, and designate a senior management official at the military departments to develop plans to use inventory data to inform management decisions. DOD concurred with GAO's recommendations.
As computer technology has advanced, federal agencies have become dependent on computerized information systems to carry out their operations and to process, maintain, and report essential information. Virtually all federal operations are supported by computer systems and electronic data, and agencies would find it difficult, if not impossible, to carry out their missions, deliver services to the public, and account for their resources without these cyber assets. Information security is thus especially important for federal agencies to ensure the confidentiality, integrity, and availability of their systems and data. Conversely, ineffective information security controls can result in significant risk to a broad array of government operations and assets, as the following examples illustrate: Computer resources could be used for unauthorized purposes or to launch attacks on other computer systems. Sensitive information, such as personally identifiable information, intellectual property, and proprietary business information, could be inappropriately disclosed, browsed, or copied for purposes of identity theft, espionage, or other types of crime. Critical operations, such as those supporting critical infrastructure, national defense, and emergency services, could be disrupted. Data could be added, modified, or deleted for purposes of fraud, subterfuge, or disruption. Threats to systems are evolving and growing. Cyber threats can be unintentional or intentional. Unintentional or non-adversarial threat sources include failures in equipment, environmental controls, or software due to aging, resource depletion, or other circumstances that exceed expected operating parameters. They also include natural disasters and failures of critical infrastructure on which the organization depends but that are outside of its control. 
Intentional or adversarial threats include individuals, groups, entities, or nations that seek to leverage the organization's dependence on cyber resources (i.e., information in electronic form, information and communications technologies, and the communications and information-handling capabilities provided by those technologies). Threats can come from a wide array of sources, including corrupt employees, criminal groups, and terrorists. These threat adversaries vary in terms of their capabilities, their willingness to act, and their motives, which can include seeking monetary gain, or seeking an economic, political, or military advantage. Table 1 describes the sources of cyber-based threats in more detail. Cyber threat adversaries make use of various techniques, tactics, and practices, or exploits, to adversely affect an organization's computers, software, or networks, or to intercept or steal valuable or sensitive information. These exploits are carried out through various conduits, including websites, e-mails, wireless and cellular communications, Internet protocols, portable media, and social media. Further, adversaries can leverage common computer software programs, such as Adobe Acrobat and Microsoft Office, as a means by which to deliver a threat by embedding exploits within software files that can be activated when a user opens a file within its corresponding program. Table 2 provides descriptions of common exploits or techniques, tactics, and practices used by cyber adversaries. Reports of successfully executed cyber exploits illustrate the debilitating effects they can have on the nation's security and economy, and on public health and safety. Further, federal agencies have experienced security breaches in their networks, potentially allowing sensitive information to be compromised and systems, operations, and services to be disrupted. 
These examples illustrate that a broad array of federal information and critical infrastructures are at risk: In August 2015, the Internal Revenue Service (IRS) reported that approximately 390,000 tax accounts were potentially affected by unauthorized third parties gaining access to taxpayer information from the agency’s “Get Transcript” application. According to testimony from the Commissioner of the IRS in June 2015, criminals used taxpayer- specific data acquired from non-IRS sources to gain unauthorized access to information; however, at that time, he reported that approximately 100,000 tax accounts had been affected. These data included Social Security information, dates of birth, and street addresses. In July 2015, the Office of Personnel Management reported that an intrusion into its systems compromised the background investigation files of 21.5 million individuals. This was in addition to a separate but related incident that affected personnel records of about 4 million current and former federal employees, which the agency announced in June 2015. In September 2014, a cyber-intrusion into the United States Postal Service’s information systems may have compromised personally identifiable information for more than 800,000 of its employees. The Federal Information Security Management Act of 2002 (FISMA 2002) was enacted into law to provide a comprehensive framework for ensuring the effectiveness of information security controls over federal information resources. The law required each agency to develop, document, and implement an agency-wide information security program to provide risk-based protections for the information and information systems that support the operations and assets of the agency. 
Such a program includes assessing risk; developing and implementing cost-effective security plans, policies, and procedures; plans for providing adequate information security for networks, facilities, and systems; providing security awareness and specialized training; testing and evaluating the effectiveness of controls; planning, implementing, evaluating, and documenting remedial actions to address information security deficiencies; procedures for detecting, reporting, and responding to security incidents; and ensuring continuity of operations. The act also assigned oversight responsibilities to the Office of Management and Budget (OMB) and gave the National Institute of Standards and Technology (NIST) responsibility for developing standards and guidelines that include minimum information security requirements. The Federal Information Security Modernization Act of 2014 largely supersedes FISMA 2002. This law retains the requirements for agencies to develop, document, and implement an agency-wide information security program, as well as OMB oversight and NIST development of standards and guidelines. Its changes include requiring DHS to assist OMB with providing oversight by administering the implementation of information security policies and practices for information systems. DHS responsibilities include developing and overseeing the implementation of binding operational directives requiring agencies to implement OMB's information security standards and guidelines; operating a federal information security incident center (previously OMB's responsibility), which has been established as the DHS United States Computer Emergency Readiness Team (US-CERT); deploying technology, upon request by an agency, to continuously diagnose and mitigate against cyber threats and vulnerabilities; and conducting targeted operational evaluations, including threat and vulnerability assessments, on agency information systems. 
In January 2008, the President issued National Security Presidential Directive 54/Homeland Security Presidential Directive 23. The directive established the Comprehensive National Cybersecurity Initiative, a set of projects with the objective of safeguarding federal executive branch government information systems by reducing potential vulnerabilities, protecting against intrusion attempts, and anticipating future threats against the federal government's networks. Under the initiative, DHS was to lead several projects to better secure civilian federal government networks, while other agencies, including OMB, the Department of Defense, and the Office of the Director of National Intelligence had key roles in other projects, including monitoring military systems and classified networks, overseeing intelligence community systems and networks, and spearheading advanced technology research and development. The initiative's projects can be grouped into three focus areas: Establishing front lines of defense. This includes projects intended to protect the perimeter of federal networks, such as consolidating connections and deploying intrusion detection and prevention systems. Defending against full spectrum of threats. This includes physical and cyber projects intended to protect national security and intelligence-related information and systems across the federal government. Shaping the future environment. The initiatives in this area are focused on expanding cybersecurity education and research and development efforts for future technologies and cybersecurity strategies. As required by FISMA (both the 2002 and 2014 laws), NIST has developed standards and guidelines for agencies to develop, document and implement their required information security programs, select controls for systems, and conduct risk-based cyber threat mitigation activities. 
For example, NIST’s Special Publication 800-37 recommends cost-effectively reducing information security risks to an acceptable level and ensuring that information security is addressed throughout an information system’s life cycle. In addition, NIST Special Publication 800-94 establishes guidance for federal agencies to use when designing, implementing, and maintaining the systems they deploy to perform intrusion detection and prevention. DHS designated the National Protection and Programs Directorate to lead the national effort to strengthen the security and resilience of the nation’s physical and cyber-critical infrastructure, including supporting federal agencies in securing their information systems and information. Specifically, the directorate is responsible for enhancing the security, resilience, and reliability of federal agencies in the protection of the “.gov” domain of the federal civilian government. Within the National Protection and Programs Directorate, the Office of Cybersecurity and Communications, among other things, operates the National Cybersecurity and Communications Integration Center (NCCIC) that is to serve as a 24/7 cyber monitoring, incident response, and management center and as a national focal point of cyber and communications incident integration. The US-CERT, one of several subcomponents of the NCCIC, is responsible for operating the NCPS, which provides intrusion detection and prevention capabilities to covered federal agencies. The Network Security Deployment (NSD) division of the Office of Cybersecurity and Communications is responsible for developing, deploying, and sustaining NCPS. For example, the division is to deliver NCPS intrusion detection capability directly to federal agencies through Trusted Internet Connection Access Providers or through Internet service providers at Managed Trusted Internet Protocol Service locations. 
NCPS is an integrated system-of-systems that is intended to deliver a range of capabilities, including intrusion detection, intrusion prevention, analytics, and information sharing. The NCPS capabilities, operationally known as the Einstein program, are one of a number of tools and capabilities that assist in federal network defense. Originally created in 2003, NCPS is intended to aid DHS in its ability to help reduce and prevent computer network vulnerabilities across the federal government. Its analysts examine raw and summarized data from a wide variety of information sources to make determinations about potential attacks across the network traffic of participating federal agencies detected by NCPS. Table 3 provides an overview of the enhancements DHS has made to the original iteration of Einstein as well as the corresponding objective of NCPS the functionality supports. NCPS is intended to build successive layers of defense mechanisms into the federal government's information technology infrastructures. When NCPS intrusion detection sensors are deployed at a Trusted Internet Connection location, the system monitors inbound and outbound network traffic, with the goal of allowing US-CERT, using NCPS and its supporting processes, to monitor all traffic passing between the federal civilian networks and the Internet for malicious activity. Figure 1 illustrates how Trusted Internet Connection portals interact with the NCPS intrusion detection sensors and the Internet. For more detailed information about NCPS's development and functionality, see appendix II. As we reported in April 2015, DHS spent over $1.2 billion on the NCPS system through fiscal year 2014. Figure 2 below depicts the funds spent for NCPS over the past 6 budget years. NSD plans to use the fiscal year 2015 funding to sustain currently deployed capabilities and expand the intrusion prevention, information-sharing, and analytics capabilities of NCPS. 
As of April 2015, the projected total life-cycle cost of the program was approximately $5.7 billion through fiscal year 2018. The overarching objectives of NCPS are to provide functionality that supports intrusion detection, intrusion prevention, analytics, and information sharing. While NCPS’s ability to detect and prevent intrusions, analyze network data, and share information is useful, its capabilities are limited. For example, NCPS relies solely on signature-based detection; it does not employ other, more complex methodologies and cannot detect anomalies in certain types of traffic. Further, the intrusion prevention capabilities can currently mitigate threats to only a limited subset of network traffic. Regarding NCPS’s analytics function, DHS has deployed aspects of this capability and plans to develop more complex tools. However, information sharing, the fourth stated objective, has only recently been approved and funded for development; thus, current information-sharing efforts are manual and largely ad hoc. In addition, although DHS has established a variety of NCPS-related metrics, none provide insight into the value derived from the functions of the system. Developing such metrics poses a challenge for the agency, according to DHS officials. Until NCPS’s intended capabilities are more fully developed, DHS will be hampered in its ability to provide effective cybersecurity-related support to federal agencies. NCPS’s intrusion detection capability is intended to provide DHS with the ability to scan network traffic for signs of potentially malicious activity. Effective intrusion detection gives an organization the ability to detect abnormalities within network traffic and can be accomplished through the use of multiple types of intrusion detection methodologies.
In order to more comprehensively and accurately detect malicious activity, NIST recommends using a combination of three detection methodologies: signature-based, anomaly-based, and stateful protocol analysis. Signature-based intrusion detection identifies malicious traffic by comparing current traffic to known patterns of malicious behavior, also referred to as signatures. This method is considered effective at detecting known threats and is the simplest form of intrusion detection, but it can only match against known patterns of malicious traffic. The anomaly-based and stateful protocol analysis methodologies are more complex approaches, which involve comparing current network activity to predefined baselines of “normal behavior” to identify deviations that could be indicative of malicious activity. These approaches are more effective than signature-based detection at identifying previously unknown threats, such as “zero-days,” as well as variants of known threats and threats disguised by the use of evasion techniques. NCPS uses only a signature-based methodology for detecting malicious activity. According to US-CERT officials, NCPS’s intrusion detection capability is supported by 228 intrusion detection sensors placed throughout the .gov network infrastructure. The sensors provide a flow of network traffic to be analyzed. Officials added that there are over 9,000 intrusion detection signatures deployed within NCPS, with approximately 2,300 enabled and being used to evaluate traffic at any given time. A majority of the signatures are available through commercially available products, though a portion is custom developed. According to DHS documentation and NSD officials, NCPS was always intended to be a signature-based intrusion detection system, and thus it does not have the ability to employ multiple intrusion detection methodologies.
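The contrast between these methodologies can be illustrated with a minimal sketch; the signatures, traffic payloads, and baseline samples below are hypothetical illustrations, not actual NCPS rules or data. A signature matcher flags only payloads containing known-bad patterns, while an anomaly detector built on a baseline of normal behavior can flag a previously unseen deviation.

```python
import statistics

# Hypothetical signature set: known-bad byte patterns (not actual NCPS rules).
SIGNATURES = {
    "SIG-001": b"/etc/passwd",  # path-traversal probe
    "SIG-002": b"cmd.exe /c",   # command-injection attempt
}

def match_signatures(payload: bytes) -> list:
    """Signature-based detection: return IDs of known-bad patterns in the payload."""
    return [sid for sid, pattern in SIGNATURES.items() if pattern in payload]

def build_baseline(samples):
    """Anomaly-based detection, step 1: summarize normal behavior (mean, std dev)."""
    return statistics.mean(samples), statistics.pstdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Anomaly-based detection, step 2: flag large deviations from the baseline."""
    mean, stdev = baseline
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# A known attack pattern is flagged by the signature matcher...
hits = match_signatures(b"GET /../../etc/passwd HTTP/1.1")
# ...but a novel ("zero-day") payload matches no signature, the core limitation.
misses = match_signatures(b"GET /novel-exploit HTTP/1.1")

# The anomaly detector, by contrast, needs no prior knowledge of the exploit:
# it flags a request volume far outside the learned baseline.
baseline = build_baseline([100, 102, 98, 101, 99])  # requests/minute, normal days
spike_flagged = is_anomalous(500, baseline)
```

The sketch shows why a signature-only system, such as NCPS as described above, cannot flag traffic it has no pattern for, while a baseline-driven approach can.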
Further, NSD and US-CERT officials stated that NCPS is just one of the many tools available to federal agencies to help enhance their cybersecurity posture. They stated that it is the responsibility of each agency to ensure its networks and information systems are secure, while it is the responsibility of DHS to provide a baseline set of protections and government-wide situational awareness as part of a defense-in-depth information security strategy. By employing only signature-based intrusion detection, NCPS is unable to detect intrusions for which it does not have a valid or active signature deployed. This limits the overall effectiveness of the program. Moreover, given that many federal agencies use commercially available signature-based intrusion detection systems to support their information security efforts, the addition of another signature-based intrusion detection system may do little to provide customer agencies with a baseline set of protections. DHS officials acknowledged that the intrusion detection systems used by many federal agencies likely have more signatures deployed than NCPS. Thus, the agencies’ intrusion detection systems would be able to compare their network traffic against a larger set of potential exploits, such as exploits that US-CERT determined no longer needed to be scanned by NCPS. In other cases, US-CERT officials stated, some agencies do not possess their own robust intrusion detection capability and thus rely more on the intrusion detection functionality provided by NCPS. Regarding zero-day exploits, US-CERT officials stated there is no way to identify them until they are announced. Once they are announced, US-CERT can develop a signature, as was the case with Adobe Flash exploits that were recently publicly announced. While there are sources that can be used to buy zero-day exploits, officials stated that DHS does not pay for them.
Occasionally, US-CERT will receive notifications of exploits from partners before they go public, but these are mostly malware notifications. While we acknowledge the challenge of developing signatures for zero-day exploits, enhancing NCPS’s current intrusion detection approach to include functionality that would support the development of a baseline of network behavioral analysis, as described in NIST Special Publication 800-94, would enhance DHS’s ability to combat such threats. According to NIST, many intrusion detection products have the ability to detect attacks carried out through various types of network traffic, such as traffic related to network browsers, e-mail, and file transfer, as well as traffic related to supervisory control and data acquisition (SCADA) control systems. In addition, intrusion detection systems should also have the ability to detect malicious activity across multiple layers of network protocols, including Internet Protocol version 6 (IPv6). Further, NIST states that some intrusion detection products have the ability to detect characteristics of encrypted traffic (i.e., whether encryption has been applied) but not evaluate the traffic itself. Adversaries often use encryption to mask malicious traffic and facilitate the successful execution of cyber-exploits, such as zero-day attacks. However, NCPS is not currently evaluating all types of network traffic. NSD and US-CERT officials stated there are currently no signatures deployed within NCPS that will detect threats embedded in certain types of network traffic. US-CERT officials stated that they have not deployed signatures related to these additional types of network traffic for various reasons. They stated that NCPS customer departments and agencies have not been clear on the details of the specific types of network traffic present within their organizations or the amount of traffic allowed to pass through their network gateways.
According to an NSD official, they initially collect such data and hold meetings with officials from customer departments and agencies to exchange technical information, but the departments and agencies are responsible for routing network traffic to the NCPS sensors and are not required to keep DHS abreast of changes. In addition, US-CERT officials stated that they have not conducted a detailed analysis of customer departments’ and agencies’ traffic to gain this understanding. Further, US-CERT officials stated that they were not equally concerned with the risk posed by all types of network traffic. Without an ability to analyze all types of traffic, DHS is unable to detect threats embedded in such traffic, increasing the risk that agencies could be negatively affected by those threats. According to NIST, signature-based intrusion detection systems depend on the quality of the signatures contained within them, and thus need to be updated to reflect new vulnerabilities and exploits as they emerge. Organizations can purchase signatures from commercial vendors, custom develop them, or obtain them from open sources. NIST maintains the National Vulnerability Database (NVD), an open source of information that can inform many information security activities, including the development of intrusion detection signatures. Federal agencies are encouraged to use the information contained within the database as part of their information security efforts. In addition, US-CERT has acknowledged the importance of incorporating common vulnerabilities and exposures (CVE) information in information security activities. In April 2015, US-CERT issued an alert stating that cyber threat adversaries continue to exploit unpatched software products from vendors such as Adobe, Microsoft, and Oracle. Vulnerabilities in these products are a common vector for spear phishing attacks.
The alert stated that as many as 85 percent of these attacks are preventable through the implementation of patches. Accordingly, the bulletin contained 30 of the top targeted vulnerabilities and associated CVE information that security officials could use within their organizations. However, the signatures supporting NCPS’s intrusion detection capability only identify a portion of vulnerabilities associated with common software applications from vendors such as Adobe, Microsoft, and Oracle. Specifically, we found that NCPS had limited coverage of vulnerabilities associated with 10 common client and server applications we evaluated. At the time of our review, NCPS intrusion detection capability signatures provided reasonable coverage for 1 vulnerability, partial coverage for 7 vulnerabilities, and no coverage for 2 vulnerabilities. Further, for the 12 advanced persistent threats we evaluated, NCPS’s intrusion detection capability had signatures that at the time of our review provided reasonable coverage for 8 advanced persistent threats, and partial coverage for 4 advanced persistent threats. More specifically, for the five client applications we reviewed (Adobe Acrobat, Flash, Internet Explorer, Java, and Microsoft Office), the NCPS intrusion detection signatures provided some degree of coverage for approximately 6 percent of the total vulnerabilities selected for review (i.e., 29 of 489), with coverage for specific applications ranging from 1.2 to 80 percent of vulnerabilities identified in CVE reports published during 2014. Further, it is unknown how, if at all, US-CERT plans to leverage vulnerability data from other DHS sources to influence the development of intrusion detection signatures. For example, the Federal Network Resilience division is responsible for managing the Continuous Diagnostics and Mitigation program.
The vulnerability information garnered from this program could be used to develop signatures that would target exploits affecting many federal agencies. US-CERT officials stated that they plan to use this information to influence NCPS, but could not provide specific details as to how they plan to accomplish this due to the relative immaturity of the Continuous Diagnostics and Mitigation program. One reason that the signatures did not cover all identified vulnerabilities is that the current tool DHS uses to manage and track the status of intrusion detection signatures deployed within NCPS does not have the ability to capture CVE information. US-CERT officials stated that when they developed the Signature Management System tool, they were not required to create a link between a signature and published CVE data. However, US-CERT has acknowledged this deficiency and stated it is something it plans to address in the future. In addition, US-CERT officials agreed with the results of our analysis of client vulnerabilities, but reiterated that the goal of NCPS was not to protect against all vulnerabilities. US-CERT officials stated that agencies with their own internal intrusion detection systems would likely be able to comprehensively address the common client vulnerabilities we selected. US-CERT officials stated that the overall intent of the system was to protect against nation-state level threat actors, who often leverage “zero-day” exploits that may not have had a known mitigation or specific CVE assigned. Accordingly, officials stated, they must consider input from a variety of classified and unclassified sources, in addition to open source data such as CVEs, when developing their intrusion detection signatures. However, NCPS did not possess intrusion detection signatures that fully addressed all the advanced persistent threats we reviewed, which are often a type of exploit leveraged by nation-state actors.
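The missing signature-to-CVE linkage described above amounts to a simple set comparison, and the coverage figures reported earlier reduce to a proportion. The sketch below uses hypothetical signature records and CVE identifiers, not the Signature Management System's actual schema, to show both.

```python
# Hypothetical signature records; an empty "cves" list mirrors the missing
# signature-to-CVE linkage described above.
signatures = [
    {"id": "SIG-100", "cves": ["CVE-2014-0001"]},
    {"id": "SIG-101", "cves": []},
]
published_cves = {"CVE-2014-0001", "CVE-2014-0002"}

covered = {cve for sig in signatures for cve in sig["cves"]}
uncovered = sorted(published_cves - covered)  # CVEs with no linked signature

def coverage_pct(covered_count, total):
    """Share of published vulnerabilities with at least some signature coverage."""
    return round(100 * covered_count / total, 1)

# 29 of the 489 client-application vulnerabilities reviewed had some coverage.
overall = coverage_pct(29, 489)
```

With such a linkage in place, the gap between an open-source vulnerability feed such as the NVD and the deployed signature set could be computed mechanically rather than by manual review.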
US-CERT officials added that they must consider a variety of factors when deciding which specific signatures to deploy and the length of time they keep the signatures active. For example, the current version of the software managing the intrusion detection function does not allow for custom rules at each sensor. As a result, the signatures deployed must be uniform across all sensors and cannot be tailored to a specific agency. This adds an additional layer of complexity when deciding how long to deploy signatures. For example, a smaller agency may be unaware of a particular threat or associated signature, and thus could benefit from having that signature deployed longer than a larger agency, which may view it as potentially duplicative of signatures employed by its own internal intrusion detection tool. Officials stated that they expect this issue to be addressed when they upgrade to the next version of the software that manages the intrusion detection function. We acknowledge that NCPS’s intrusion detection capabilities draw on many sources of vulnerability information, and that the system should not necessarily duplicate capabilities agencies already possess. However, updating the tool used to manage NCPS signatures to draw on and more clearly link to publicly available, open-source repositories of vulnerability information, such as the NVD, and using data from the Continuous Diagnostics and Mitigation program as they become available as an input into the development and management of signatures could add value by minimizing the risk that known vulnerabilities will be exploited. NCPS’s ability to provide intrusion prevention is another key objective of the system. Intrusion prevention is an additional technique recommended by NIST in support of effective information system monitoring. When fully developed, NCPS will have the ability to proactively mitigate threats across multiple types of network traffic.
This is important because malicious actors can propagate threats across multiple vectors and types of network traffic. NCPS’s intrusion prevention capability provides DHS with the ability to proactively address network-based threats before they can potentially cause harm to federal networks. This is accomplished by monitoring network traffic to and from a customer agency’s network and taking some action to stop traffic (e.g., blocking an e-mail) that has characteristics matching pre-defined indicators of malicious traffic. NCPS has the ability to prevent intrusions in near real time, but is currently only able to proactively mitigate threats across a limited subset of network traffic (i.e., Domain Name System, or DNS, blocking and e-mail filtering) at a selected group of customer agencies. Consequently, there are other types of network traffic (e.g., web content), which are common vectors of attack, not currently being analyzed for potentially malicious content. NSD officials noted that initial capabilities for intrusion prevention were intended to be more robust, but were scaled back due to a change in the program’s approach. Specifically, these officials stated that the original intent of the intrusion prevention deployment was to protect all types of network traffic with classified indicators. Further, the solution was supposed to provide government-furnished equipment to Internet service provider networks as the backbone of the intrusion prevention function of NCPS. However, NSD officials stated that, due to the excessive costs of operating and maintaining the original solution, the agency decided in May 2012 to change approaches. The new approach follows a managed service model, where the Internet service providers would receive classified indicators within their appropriate facilities and manage the prevention for their particular customer agencies.
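The DNS blocking described above can be sketched as a resolver that consults an indicator list before answering a query. The domains and addresses below are hypothetical placeholders drawn from reserved example ranges, not actual indicators.

```python
# Hypothetical indicator list of known-malicious domains.
BLOCKED_DOMAINS = {"malicious.example.net", "c2.example.org"}

SINKHOLE = "192.0.2.1"  # reserved documentation address standing in for a sinkhole

def resolve(domain: str) -> str:
    """Answer a DNS query, redirecting known-bad domains to a sinkhole."""
    if domain in BLOCKED_DOMAINS:
        return SINKHOLE    # block: the client never reaches the malicious host
    return "203.0.113.10"  # placeholder for an ordinary lookup result

blocked = resolve("c2.example.org")
allowed = resolve("agency.example.gov")
```

The same pattern, matching traffic against pre-defined indicators and substituting a safe response, underlies the e-mail filtering capability as well.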
Because of this managed service model approach, NSD officials stated that the first set of prevention capabilities was based in part on the existing solutions provided by commercial service providers under the Defense Industrial Base Opt-In pilot (later renamed the Enhanced Cybersecurity Services). Officials also added that another motivation for the new approach was to create a cybersecurity marketplace where the various Internet service providers would compete with each other to provide better cybersecurity solutions for federal customers. DHS officials stated that they are developing prevention capabilities for other types of network traffic. Specifically, NSD officials stated that they plan to introduce the ability to filter web content by January 1, 2016. A review of a recent monthly report from one of the Internet service providers supporting NCPS intrusion prevention indicated that the contractor has begun work on web content filtering and provided DHS with a draft report on the indicators and overall process. Another key objective of NCPS is to provide DHS with an analytics capability. NIST recommends that organizations take a variety of actions with respect to analytics, including analyzing and correlating audit records across different repositories to gain organization-wide situational awareness, correlating information from nontechnical sources with audit information to enhance organization-wide situational awareness, analyzing the characteristics of malicious code, and employing automated tools to support near real-time analysis. The functionality deployed in support of NCPS analytics capability developed to date is in accordance with recommended standards. For example, the security information and event management solution, which has been operational since February 2012, simplifies cyber analysis by providing a centralized platform in which the log data from similar events can be aggregated, thereby reducing duplication. 
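The aggregation performed by the security information and event management solution can be sketched as collapsing similar log events into single records with counts; the event fields below are hypothetical.

```python
from collections import Counter

# Raw log events as (source address, event type). The repeats stand in for the
# similar events a SIEM aggregates to reduce duplication.
events = [
    ("10.0.0.5", "failed_login"),
    ("10.0.0.5", "failed_login"),
    ("10.0.0.5", "failed_login"),
    ("10.0.0.9", "port_scan"),
]
aggregated = Counter(events)  # one record per distinct event, with a count
```

Collapsing four raw events into two counted records is a toy version of the deduplication described above; at scale, the same idea lets analysts review one record with a count rather than thousands of near-identical log lines.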
The tool also facilitates analysts’ ability to correlate related events that might otherwise go unnoticed and provides visualization capabilities, making it easier to see relationships. Additionally, NSD has established functionality that enables the analysis of the characteristics of malicious code. For example, the Packet Capture tool enables US-CERT analysts to see “inside” a packet and inspect the payload to analyze a specific cyber threat. Further, the Digital Media Analysis Environment (Forensics) and the Advanced Malware Analysis Center provide mechanisms to collect and contain information on cyber threats in a highly secure environment for evaluation by US-CERT analysts. NSD and US-CERT officials stated that DHS initially focused funding and development efforts on analytical functions associated with supporting the intrusion detection and prevention functions of NCPS. However, more complex analytics development is planned for later stages of system development. Specifically, DHS has enhancements planned through fiscal year 2018. These planned enhancements are intended to better facilitate the near real-time analysis of various data streams and advanced malware behavioral analysis, and to conduct forensic analysis in a more collaborative way. Information sharing is a key control recommended by NIST in support of effective information system security. Additionally, the presence of good information sharing, particularly the ability to effectively notify an affected entity of potentially malicious activity, is a key component of effective intrusion detection and prevention, and thus a key objective of NCPS. Further, NIST states that organizations should develop standard operating procedures to ensure that consistent and accurate information is available for reporting and management oversight. Also, US-CERT’s Concept of Operations for NCPS establishes monitoring the status of mitigation actions and strategies as a requirement of the program.
NSD officials stated that the information-sharing capability has only recently been approved and funded for development, and thus current information-sharing efforts are manual and largely ad hoc. DHS first requested funding for the development of information-sharing capabilities in 2010, but NSD officials stated the effort was given a lower priority than the intrusion prevention capability and was not funded to begin planning activities until 2014. As a result, DHS has yet to develop a majority of the planned functionality for the information-sharing capability of NCPS. Though the operational requirements for the NCPS information-sharing functionality were approved in November 2014, DHS did not formally authorize NSD to initiate development of the capability until August 2015. As a result, no substantive actions have yet been taken to develop this capability. Regarding the current information-sharing efforts, officials from the five customer agencies we reviewed stated that DHS is not always effectively communicating its intrusion detection notifications to customer agencies. Specifically, DHS officials provided evidence that they sent 74 incident notifications that they believed were related to NCPS to the five agencies in our review during fiscal year 2014. However, evidence provided by the agencies showed that only 56 of these notifications had been received by the customer agencies. The five impacted agencies and DHS disagreed as to whether the other 20 incident notifications had been sent and received. Specifically, for 18 of these 20 notifications, DHS provided evidence that an e-mail may have been sent, but the agencies had no record of receiving the notifications. For the 2 additional notifications, one customer agency had a record of receiving them; however, DHS had no evidence of transmitting the e-mails.
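The disagreement over sent versus received notifications is essentially a set-reconciliation problem. The sketch below uses hypothetical notification IDs chosen so the set sizes reproduce the counts above.

```python
# Hypothetical IDs: 74 notifications with DHS send records, and the agencies'
# receipt records (56 in common, plus 2 that DHS has no record of sending).
dhs_sent = {f"N{i:03d}" for i in range(1, 75)}
agency_received = {f"N{i:03d}" for i in range(1, 57)} | {"X001", "X002"}

confirmed = dhs_sent & agency_received          # both sides agree: 56
no_receipt_record = dhs_sent - agency_received  # sent, but no agency record: 18
no_send_record = agency_received - dhs_sent     # received, but no DHS record: 2
```

If both parties logged a shared notification identifier, a comparison of this kind could replace the manual dispute over which e-mails were actually sent and received.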
For the 56 NCPS-related notifications that the five agencies acknowledged receiving, the agencies stated that 31 incident notifications were timely and useful, 10 were not timely or useful, 7 were identified by agency officials as false positives, and 7 were not related to an NCPS intrusion detection. Additionally, DHS did not always solicit, and agencies did not always provide, feedback on the notifications. Specifically: Of the 56 incident notifications mentioned above, DHS requested that the impacted agency provide feedback on 36 of them. Of these 36, the agencies stated that they provided feedback on 15 notifications, but did not provide feedback on 21. For an additional 10 notifications, officials from 3 of the 5 agencies stated they provided feedback even though DHS had not explicitly requested follow-up action. For the remaining 10 notifications, DHS did not request feedback and the customer agencies did not provide any. As another channel for sharing information, US-CERT holds weekly calls with representatives of the security operations centers of various federal agencies. These calls provide a forum for the voluntary exchange of a variety of information security information, including NCPS-related information. Officials from the five customer agencies involved in our review said they found value in the information received from these discussions. One reason DHS and agencies do not agree about whether notifications were received may be that DHS does not always explicitly ask for feedback or confirmation of receipt of the notification. Additionally, officials from one customer agency stated that DHS has no way of determining which of its analysts was responsible for transmitting a particular notification, so it is difficult to obtain context after a notification is sent.
US-CERT officials stated that standard operating procedures and a quality control procedure are being developed as part of the implementation of a new version of the incident management database. However, these procedures were not developed during the scope of our review, fiscal year 2014. In August 2015, US-CERT provided us with a draft standard operating procedure related to the incident notification process. The policy provides an overview of the types of questions a US-CERT analyst should ask a customer agency when transmitting a notification. However, the draft policy does not instruct analysts specifically to include a solicitation of feedback within the notification. Further, US-CERT could not provide any information regarding the timetable for when these procedures would take effect. Regarding the usefulness of the notifications, two of the agencies in our review stated that because of the placement of the intrusion detection sensors on their networks, a significant amount of effort was required to evaluate the context of the DHS notifications. Thus, both agencies stated, and DHS agreed, the value of the notifications could be enhanced by giving US-CERT analysts access to the agencies’ network diagrams, which could allow them to identify the specific location of an intrusion. Officials from customer agencies stated that they did not provide feedback for a variety of reasons. For example, one agency stated that due to its federated nature, getting a response from the impacted entity within the agency was a challenge and could only be rectified by reaching out to site owners for every incident notification received. Consequently, officials stated, they typically only reached out when the notification had met the threshold of a security event. Officials from this agency stated that they had instituted a new standard operating procedure that requires the analyst processing an incident notification to reach out to DHS prior to closing it out.
They added that this policy went into effect after fiscal year 2014 and did not affect the data set we reviewed. An additional agency stated that a request for feedback is not always clearly stated within the notifications it receives from US-CERT. Without verifying the receipt of intrusion detection notifications and soliciting feedback on their usefulness, DHS may be hindered in assessing the effectiveness of NCPS’s current information-sharing capabilities. According to NIST, a number of laws—including the Federal Information Security Management Act—cite performance measurement in general, and information security performance measurement in particular, as a requirement. Further, NIST Special Publication 800-55 states that an information security measurement program provides a number of organizational and financial benefits, including increased accountability for information security performance, improved effectiveness of information security activities, demonstrated compliance with laws, and quantified inputs to allocation decisions. Further, effectiveness or efficiency measures are used to monitor whether program-level processes and system-level security controls are implemented correctly, operating as intended, and meeting the desired outcome. Metrics for NCPS, as provided by DHS, do not provide information about how well the system is enhancing government information security or the quality, efficiency, and accuracy of supporting actions. DHS has established three department-wide NCPS-related performance metrics as part of its Performance and Accountability Report: Percentage of traffic monitored for cyber intrusions at civilian federal executive branch agencies: According to Executive Program Management Office and NSD officials, this measure assesses NCPS’s intrusion detection capability by providing information on the scope of coverage for potentially malicious cyber-activity across participating civilian federal government agencies.
During fiscal year 2014, DHS reported that approximately 88.5 percent of the total Internet traffic of 23 civilian, executive branch agencies was monitored by NCPS intrusion detection sensors. Though this metric provides insight into the amount of federal executive-branch traffic for which NCPS is able to provide intrusion detection, it does not provide insight into the quality or efficiency of the intrusion detection function for that traffic. Percentage of incidents detected by US-CERT that targeted agencies are notified of within 30 minutes: According to Executive Program Management Office and NSD officials, this is an additional measure of NCPS’s intrusion detection capability. Specifically, DHS documentation stated that there were 297 cyber incidents identified on federal networks using NCPS’s intrusion detection capability in fiscal year 2014. The average time to notify impacted agencies was 18 minutes, with 87.2 percent (259 of 297) of notifications occurring within 30 minutes. While this metric provides insight into the speed at which DHS could share information related to detected incidents, it does not provide a measure of the accuracy or value of those notifications. Further, of the 24 incident notifications for the five selected agencies that support this metric, DHS could not provide evidence that 12 were sent. Because these notifications were not appropriately shared with the affected agencies, it is unclear how DHS could classify them as incidents. Percentage of known malicious cyber traffic prevented from causing harm at federal agencies: According to Executive Program Management Office and NSD officials, this measure assesses NCPS’s intrusion prevention capability. Specifically, DHS documents stated that each currently deployed indicator of a malicious threat is paired with a countermeasure to prevent the malicious threat from harming those networks.
In fiscal year 2014, 389 indicators were deployed among intrusion prevention sensors. Though this metric would track whether a particular countermeasure was engaging (i.e., if prevention occurred), it does not necessarily evaluate the effectiveness or efficiency of the intrusion prevention capability. DHS officials agreed with this observation and stated that the agency was in the process of retiring this metric and developing a new one that would better measure and evaluate the effectiveness of intrusion prevention. Further, NSD has established key performance parameters that provide an indication of the system’s ability to perform functions supporting NCPS’s objectives. For example, the following measures were developed to track the performance of the intrusion detection function: Detect known cyber events through automated intrusion detection within 1 minute of event occurrence. Provide automated notification within the operations center that a cyber event took place within 1 minute of event detection. Aggregate and correlate detected cyber events for known indicators within 30 minutes of event notification. While these are valuable for determining how NCPS is operating as a system, officials from the Executive Program Management Office and NSD agreed that they do not provide a qualitative or quantitative assessment of the system’s ability to fulfill the aforementioned objectives. Further, as we reported in April 2015, a DHS acquisition official questioned whether the NCPS key performance parameters were defined properly. Regarding the system’s benefits, NSD and US-CERT officials stated that the total number of incident notifications sent to customer agencies does indicate that NCPS is providing value. However, as our analysis of a selected group of customer notifications from fiscal year 2014 indicates, customer agencies do not perceive every notification transmitted as valuable.
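The three intrusion detection key performance parameters above are timing thresholds that can be checked mechanically; the timestamps below, expressed in minutes from event occurrence, are hypothetical.

```python
def meets_detection_kpps(detected_at, notified_at, correlated_at):
    """Check the three timing parameters: detect within 1 minute of the event,
    notify within 1 minute of detection, and correlate within 30 minutes of
    notification."""
    return (detected_at <= 1
            and notified_at - detected_at <= 1
            and correlated_at - notified_at <= 30)

ok = meets_detection_kpps(0.5, 1.2, 20.0)    # all three thresholds met
late = meets_detection_kpps(2.0, 2.5, 10.0)  # detection exceeded 1 minute
```

As the surrounding discussion notes, a check of this kind measures how the system operates, not the value its detections deliver, which is why such parameters cannot substitute for outcome-oriented metrics.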
Without the deployment of comprehensive measures, DHS cannot appropriately articulate the value provided by NCPS. While DHS developed an executive road map for the intrusion detection, prevention, analytics, and information sharing objectives that describes future NCPS capabilities to be developed through fiscal year 2018, it has not defined requirements, as called for by OMB guidance and best practice, for two intrusion detection capabilities to be provided in fiscal year 2016. In addition, although DHS officials stated that they do consider threat information as part of the required risk-based approach for determining future capabilities to protect federal information systems, they do not consider specific vulnerabilities affecting agencies’ networks and systems, as information on these is not currently available. The lack of vulnerability information prevents DHS from taking a full risk-based approach to designing future NCPS intrusion prevention capabilities. OMB’s Capital Programming Guide states that requirements should be developed to support program budgeting activities. The guidance also states that agencies should avoid “specification creep,” where requirements become uncontrolled by defining requirements to meet future potential needs or incorporating emerging technology that would be “nice” to have. Further, a recognized best practice in requirements development from the Software Engineering Institute notes that requirements should be expressed in a way that can be used for design decisions. NSD maintains a road map that is used to track potential additional capabilities for NCPS’s intrusion detection, intrusion prevention, analytics, and information-sharing objectives to be developed in future fiscal years, up to fiscal year 2018. For each NCPS objective, the Executive Road Map identifies the current state of operations (“as-is”) and the desired state of operations (“target”). 
According to NSD officials, this road map facilitates discussions with senior DHS management, and is revised at several points in the fiscal year. The road map identifies technology and techniques that may increase the department's ability to perform activities to support the four objectives. For example, DHS plans to begin work on a "web gateway proxy scan encryption" capability in fiscal year 2016. DHS also plans to seek funding for a wireless network protection capability in fiscal year 2018, which may add an additional type of intrusion detection and prevention technology described in guidance issued by NIST. Requirements have not been fully defined for all items in the road map. Specifically, two capabilities DHS stated will be provided in fiscal year 2016—expanding the intrusion detection capability to identify malware present on customer agency internal networks and identifying malicious traffic entering and exiting cloud-based service provider services—are based on requirements that have not been fully defined. NSD officials stated that these capabilities were based upon the requirement to detect intrusion attempts in near real time across the federal government. They added that identifying malware on customer agency internal networks and malicious traffic entering and exiting cloud-based service providers is a logical expansion of responding to the cyber threat, and the program office needs flexibility to adapt to the threat. However, these capabilities could represent a significant departure from the version of NCPS currently deployed and envisioned in the governance documents.
Specifically, the technical nature of cloud computing—where customer agency data may be stored and accessed by multiple physical sites—and the number of cloud service providers that could be used by customer agencies may require a different infrastructure deployment methodology than the existing NCPS sensor deployments at Internet service providers and at customer agency locations. Further, while the Executive Road Map indicates that NCPS will detect malware on customer agency internal networks using log data from DHS's Continuous Diagnostics and Mitigation program, it is unclear how DHS plans to accomplish this. Until it fully defines requirements for these two capabilities, DHS increases the risk that it will invest in functionality that does not effectively support information security efforts at the customer agencies and across the federal government. The Federal Information Security Modernization Act of 2014 and guidance issued by NIST call for a risk-based approach to protecting federal systems. According to NIST's Guide for Conducting Risk Assessments, information security risk is assessed by considering the threats posed to the federal government, the vulnerabilities (or weaknesses) in information systems, the impact (or harm) that could occur if threats were to exploit those vulnerabilities, and the likelihood that such exploitation would occur. DHS has incorporated selected elements of a risk-based approach when considering the next capabilities of the NCPS intrusion prevention objective. Specifically, NSD coordinated and leveraged threat information from the National Security Agency and the National Cybersecurity and Communications Integration Center, along with information provided by the Internet service providers, to develop a list of countermeasures that DHS believed would be reasonable to implement.
NSD officials stated that decisions regarding existing and upcoming countermeasures were made based on the capabilities of the Internet service providers. Specifically, the e-mail and DNS countermeasures were used as the first countermeasures for the NCPS intrusion prevention capability because they were already deployed at Internet service providers as part of the Enhanced Cybersecurity Services program. However, NSD did not consider and does not currently have access to vulnerability information for the agency information systems it is helping to protect. NSD officials stated that vulnerability data about customer agency information systems and networks are difficult to obtain. For example, agency information security reports required under the Federal Information Security Modernization Act of 2014 do not contain vulnerability information that NSD could use to inform future capabilities. Also, NSD officials stated—and the Executive Road Map confirmed—that DHS's Continuous Diagnostics and Mitigation program may provide additional vulnerability information that could be valuable in determining future capabilities. However, the program is relatively immature, and at the time of our review NSD had not developed processes and procedures on how to use this vulnerability information to inform decisions on future capabilities. Further, DHS also has a separate program to collect vulnerability information on federal executive branch agency systems and networks that could be useful for determining future NCPS intrusion prevention capabilities. In an October 2014 memo, OMB directed DHS to scan Internet-accessible addresses and public-facing segments of federal civilian agency systems for vulnerabilities on an ongoing basis, and report to OMB on the identification and mitigation of vulnerabilities across federal agency information systems.
However, NSD did not provide details—including processes and procedures—of how this information could be used to inform future NCPS intrusion prevention capabilities. Until the department develops processes and procedures for using such vulnerability information, DHS will not be able to adopt an effective risk-based approach for planning future NCPS intrusion prevention capabilities. OMB Memorandum M-08-05 established the requirement for almost all federal executive branch agencies to implement the intrusion detection capabilities (within Einstein 2) of NCPS. In July 2015, the White House noted that deployment of the NCPS intrusion prevention capabilities was to be accelerated, with DHS awarding a contract to provide intrusion prevention services for all federal civilian agencies by the end of 2015. Agencies have had mixed results in adopting NCPS capabilities. According to DHS program documentation, all 23 of the non-defense CFO Act agencies had routed traffic to NCPS intrusion detection sensors. However, NSD documents indicated that only 5 of the 23 agencies were receiving intrusion prevention services. Further, NSD documents showed that for 3 of these 5 agencies, adoption of intrusion prevention services for e-mail was limited—only 1 agency appeared to have fully adopted intrusion prevention for e-mail service, another agency had adopted intrusion prevention for only one part of its network e-mail, and a third agency was just beginning to adopt the e-mail service. Further, four of the five selected agencies in our review reported that not all of their traffic was being sent to NCPS intrusion detection sensors. In addition, two of the selected agencies reported that they had adopted the DNS intrusion prevention service, and only one had completed the adoption process for its e-mail service. See table 4 below for a summary of NCPS intrusion detection and prevention adoption at the selected five agencies.
Officials from NSD, the selected agencies in our review, and the Internet service providers identified several policy and implementation challenges to adopting the NCPS intrusion prevention capabilities, along with efforts to address these challenges.

Approval of memoranda of agreement (MOA): An MOA is required in order to establish NCPS service for an agency. Among other things, the MOA identifies responsibilities for both DHS and the customer agency (including interactions with Internet service providers), as well as points of contact for the respective organizations. Sixteen of the 23 non-defense CFO Act agencies had an MOA in place with DHS to provide intrusion prevention services. Three of the five agencies in our review were in the process of approving an MOA for intrusion prevention services and cited barriers to approving the agreement. Specifically, two of these agencies did not sign an agreement because their Internet service providers had not been capable of providing NCPS intrusion prevention services. NSD officials stated that they are in the process of accelerating the availability of Internet service providers to agencies that are not currently provided NCPS intrusion prevention capabilities. Officials from the third agency stated that there were questions about whether existing law protecting sensitive information in its possession prohibited participation in NCPS intrusion prevention. In July 2015, officials from this agency stated that, working with DHS, it had agreed to adopt some NCPS intrusion prevention capabilities.

Agency capabilities and concerns: NSD officials noted that the ability to meet DHS security requirements (e.g., encrypted tunnels) to use the intrusion prevention capabilities varies from agency to agency. NSD officials also stated that because each agency has a unique network infrastructure, implementation must be specific to that agency.
Further, NSD officials added that agencies are generally concerned about interfering with any mission-critical applications, such as e-mail. Also, while chief information officers usually sign the MOA, NSD officials noted that network operators within the agency can be unaware of the agreement, which can pose a potential barrier to full deployment. To address these issues, NSD staff stated that they work with agencies to tailor implementation and explain details of the prevention capabilities to reassure agencies that business operations will not be impeded. Additionally, officials from one agency in our review that had adopted the DNS intrusion prevention capability initially hesitated to adopt the e-mail capability due to records management concerns. Agency officials stated in July 2015 that they are in the process of working with DHS to adopt the e-mail intrusion prevention capability.

Viability of solution for cloud e-mail: Officials from one agency in our review stated that they obtain e-mail services from cloud providers, and added that they hesitate to participate in the NCPS e-mail intrusion prevention capability because there is currently no solution that is easily implemented. Officials from another agency in the process of signing an MOA stated that they also use cloud service providers for e-mail. This agency will also not be able to implement the e-mail intrusion prevention capability. NSD has noted the challenges associated with implementing a cloud solution, but plans to refine this capability over time. However, as we previously stated, the plans to initiate development efforts on a cloud solution during fiscal year 2016 are not based on fully developed requirements.

Development and operational challenges at Internet service providers: NSD and two of the Internet service providers noted a challenge with designing, developing, and operating a classified infrastructure on unclassified network traffic.
For example, the complex and changing security requirements of one of DHS's partners that provides threat information created delays in the service providers' ability to deliver intrusion prevention capabilities. In addition, obtaining and retaining personnel with appropriate security clearances posed a challenge for the Internet service providers. NSD has acknowledged the inherent complexity of using classified information to address cyber risks in non-classified network traffic and has ongoing efforts to work with the Internet service providers to address this. Further, NCPS faces additional implementation challenges in ensuring that agency traffic is sent to the intrusion detection sensors. Specifically, four of the five agencies in our review cited several challenges in routing all of their traffic through NCPS intrusion detection sensors, including capacity limitations of the sensors, agreements with external business partners that use direct network connections, interagency network connections that do not route through Internet gateways, use of encrypted communications mechanisms, and backup network circuits that are not used regularly. NSD officials stated that agencies are responsible for routing their traffic to the intrusion detection sensors, and DHS does not have a role in that aspect of NCPS implementation. NCPS also faces a challenge in implementing a portion of the intrusion detection capability and all of the intrusion prevention capability when routing traffic through sensors at the Internet service providers. Of the five agencies in our review, four depend on their Internet service provider to receive NCPS intrusion detection services (through the Managed Trusted Internet Protocol Service program) and/or intrusion prevention services.
Two of these four agencies had taken steps to securely route traffic to the sensors, while one agency did not implement an authentication mechanism to ensure that the network routes received by its router were legitimate. The other agency stated that its Internet service provider managed its routing configurations and did not provide evidence for us to verify whether secure routing configurations were in place. This occurred in part because NSD did not provide guidance to customer agencies on how to securely route their information to the Internet service providers. NSD officials stated that providing network routing guidance to customer agencies is not the role of DHS. Rather, they believe that is best handled by the customer agency and its Internet service provider. However, without providing network routing guidance, NSD has no assurance whether the traffic it sees constitutes all, or only a subset, of the traffic the customer agencies intend to send. Further, by not providing routing guidance, NSD has less assurance that customer agency traffic will actually be picked up at the sensors, since the routing may bypass those sensors, reducing the effectiveness of NCPS.

DHS has devoted significant resources to developing and deploying NCPS, with the goal of strengthening agencies' ability to detect and prevent intrusions on their networks, as well as the capability for analyzing network activity and sharing information between DHS and agencies. The system's intrusion detection capabilities are the most fully developed of the four system objectives, and they provide the ability to detect known malicious patterns of activity on agency networks. However, without the ability to effectively detect intrusions across multiple types of traffic or provide other types of detection capabilities, such as anomaly-based detection and stateful protocol analysis, NCPS is limited in its ability to identify potential threats.
In addition, without making use of publicly available, open-source repositories and of vulnerability data available from its Continuous Diagnostics and Mitigation program to enhance the system's signatures, DHS may not be providing the ability to detect attacks that exploit known vulnerabilities. The system's intrusion prevention capability is less fully developed, with limited deployment across different types of network traffic, such as content from websites, limiting its ability to prevent malicious code from penetrating agencies' networks. Further, NCPS's support of a number of analytics capabilities, and ongoing efforts to enhance these, should provide DHS and agencies with an improved ability to analyze potentially malicious traffic in a timely and efficient manner. However, DHS's sharing of information with agencies has not always been effective, with disagreement among agencies about the number of notifications sent and received and their usefulness. Finalizing the incident notification process, to include the solicitation of feedback from customer agencies, could help ensure that DHS is effectively communicating information that helps agencies strengthen their security posture. Another step that could assist in ensuring the effectiveness of NCPS is developing metrics that measure the quality, efficiency, and accuracy of the services it provides. DHS has continued to plan for future capabilities of the system, but without clearly defined requirements, it risks investing in functionality that does not effectively support agency information security. Moreover, to ensure a risk-based approach is being pursued to select future NCPS capabilities, information about vulnerabilities on agency networks could be a valuable input. The effectiveness of NCPS further depends on its adoption by agencies.
While the adoption of the intrusion detection capabilities is widespread among the 23 agencies required to use NCPS, the implementation of intrusion prevention capabilities is more limited due to policy and implementation challenges that DHS is working to overcome. However, addressing a lack of guidance for routing network traffic through NCPS sensors could help better ensure a wider and more effective use of NCPS capabilities.

We recommend the Secretary of Homeland Security direct:

NSD to determine the feasibility of enhancing NCPS's current intrusion detection approach to include functionality that would detect deviations from normal network behavior baselines;

NSD to determine the feasibility of developing enhancements to current intrusion detection capabilities to facilitate the scanning of traffic not currently scanned by NCPS;

US-CERT to update the tool it uses to manage and deploy intrusion detection signatures to include the ability to more clearly link signatures to publicly available, open-source data repositories;

US-CERT to consider the viability of using vulnerability information, such as data from the Continuous Diagnostics and Mitigation program as it becomes available, as an input into the development and management of intrusion detection signatures;

US-CERT to develop a timetable for finalizing the incident notification process, to ensure that customer agencies are being sent notifications of potential incidents, which clearly solicit feedback on the usefulness and timeliness of the notification;

the Office of Cybersecurity and Communications to develop metrics that clearly measure the effectiveness of NCPS's efforts, including the quality, efficiency, and accuracy of supporting actions related to detecting and preventing intrusions, providing analytic services, and sharing cyber-related information;

the Office of Cybersecurity and Communications to develop clearly defined requirements for detecting threats on agency internal networks and at cloud service providers to help better ensure effective support of information security activities;

NSD to develop processes and procedures for using vulnerability information, such as data from the Continuous Diagnostics and Mitigation program as it becomes available, to help ensure DHS is using a risk-based approach for the selection and development of future NCPS intrusion prevention capabilities; and

NSD to work with its customer agencies and the Internet service providers to document secure routing requirements in order to better ensure the complete, safe, and effective routing of information to NCPS sensors.

We provided a draft of this report to the Departments of Homeland Security, Energy, and Veterans Affairs; the General Services Administration; the Nuclear Regulatory Commission; and the National Science Foundation for their review and comment. In written comments signed by the Director, Departmental GAO-OIG Liaison Office, DHS concurred with each of our nine recommendations. DHS also provided details about steps that it plans to take to address eight of the nine recommendations, including estimated time frames for completion. If effectively implemented, these actions should help address the weaknesses we identified in the NCPS program. Regarding our recommendation to develop clearly defined requirements for detecting threats on agency internal networks and at cloud service providers, the Director asked that we consider it resolved and closed because a formal requirements working group and requirements management process had been developed. We will review the evidence and determine if these actions address the recommendation. DHS's written comments are reprinted in appendix III. Officials from DHS also provided technical comments via e-mail, which we incorporated as appropriate.
Officials from the Departments of Energy and Veterans Affairs, General Services Administration, Nuclear Regulatory Commission, and the National Science Foundation stated that they had no comments. We are sending copies of this report to the appropriate congressional committees, the departments and agencies in our review, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Gregory Wilshusen at (202) 512-6244 or wilshuseng@gao.gov or Dr. Nabajyoti Barkakati at (202) 512-4499 or barkakatin@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Our objectives were to determine the extent to which (1) the National Cybersecurity Protection System (NCPS) meets stated objectives, (2) the Department of Homeland Security (DHS) has designed requirements for future stages of the system, and (3) federal agencies have adopted the system. To determine the extent to which NCPS meets stated objectives, we compared four of the overarching capabilities of the system (intrusion detection, intrusion prevention, analytics, and information sharing) to leading federal practices, including the National Institute of Standards and Technology's Special Publication 800-53: Security and Privacy Controls for Federal Information Systems and Organizations; Special Publication 800-55: Performance Measurement Guide for Information Security; and Special Publication 800-94: Guide to Intrusion Detection and Prevention Systems (IDPS). We also examined program information and documents, and interviewed DHS officials within the Office of Cybersecurity and Communications responsible for designing, developing, maintaining, and operating NCPS.
For the information-sharing objective we examined NCPS-related incident notifications DHS stated were sent by the United States Computer Emergency Readiness Team in fiscal year 2014 to five selected Chief Financial Officers Act agencies: the Departments of Energy and Veterans Affairs, the General Services Administration, the National Science Foundation, and the Nuclear Regulatory Commission. These agencies were selected based on information provided by DHS regarding the relative number of NCPS-related incident notifications sent to the agencies (one with a higher number of notifications, two with around the median number of notifications, and two with the fewest notifications) and the NCPS capabilities received. We also interviewed information security staff from each of these agencies and collected information regarding each agency's perceived usefulness and timeliness of the incident notifications, along with any feedback provided in response to the notifications. To evaluate the intrusion detection signatures deployed, we selected 10 vulnerabilities from 2014 that commonly affected client and server applications and determined the extent to which the NCPS signatures provided reasonable coverage for the vulnerabilities the signatures were intended to mitigate. Additionally, we conducted a similar evaluation for the signatures associated with a selection of 12 common advanced persistent threats from 2014. Further, we evaluated the number of intrusion detection signatures DHS had issued for each client vulnerability during fiscal year 2014 and compared them to the number of signatures from publicly available repositories, such as common vulnerabilities and exposures (CVE) published for each corresponding category of vulnerability during the same period. We then determined the percentage of DHS coverage by comparing the number of signatures DHS had that addressed each of the vulnerabilities to the total number of CVEs released in 2014 for that category.
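The coverage computation described above is a per-category ratio of DHS signatures to published CVEs. A minimal sketch with invented counts (the report's actual figures are not reproduced here, and the category names are illustrative):

```python
# Invented per-category counts for illustration only.
cves_published = {"web browser": 200, "pdf reader": 80, "office suite": 120}
dhs_signatures = {"web browser": 20, "pdf reader": 8, "office suite": 0}

for category, total in cves_published.items():
    covered = dhs_signatures.get(category, 0)
    pct = 100 * covered / total if total else 0.0
    print(f"{category}: {covered} signatures for {total} CVEs "
          f"({pct:.1f} percent coverage)")
```

A category with many published CVEs but few or no matching signatures would stand out immediately in such a comparison, which is the pattern the analysis above was designed to surface.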
To determine the extent to which DHS has designed requirements for future stages of the system, we reviewed NCPS program planning documentation and interviewed program officials in order to identify how future capabilities are planned. We compared this information to federal guidance for planning and requirements development found in the Office of Management and Budget’s Capital Programming Guide. In addition, for all new capabilities identified for funding in DHS’s fiscal year 2016 funding request (such as expanding information sharing, streaming and near-real- time analytics, and deploying intrusion detection sensors at Internet service providers’ traffic aggregation sites), we determined if formalized requirements supporting these capabilities had been documented and approved in program documentation. Further, we determined if plans for future capabilities to address NCPS’s intrusion prevention objective were determined using a risk-based approach, including a consideration of threat, vulnerability, impact, and likelihood. To determine the extent to which federal agencies have adopted NCPS, we reviewed policy issued by the Office of Management and Budget and DHS documentation (such as memoranda of agreement) for the 23 non- defense agencies identified in the Chief Financial Officers Act. We also discussed any challenges to adoption with DHS officials. To gain a better understanding of how federal agencies adopt the system, including the amount of traffic and any challenges or limitations associated with adoption, we interviewed officials from the five Chief Financial Officers Act agencies identified previously and reviewed agency network documentation. We also interviewed officials from the three Internet service providers currently participating in NCPS to obtain their perspective on agency adoption of the system. We conducted this performance audit from June 2014 to January 2016 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The National Cybersecurity Protection System (NCPS), operationally known as the Einstein program, is an integrated system-of-systems that is intended to deliver a range of capabilities, including intrusion detection, intrusion prevention, analytics, and information sharing. The sensors deployed to support the 2003 version of NCPS, or Einstein 1, collect network flow records of data entering and exiting participating agencies' networks, which are to be analyzed by U.S. Computer Emergency Readiness Team (US-CERT) analysts and tools to detect certain types of malicious activity. If the system detects malicious activity, US-CERT analysts are to coordinate with the appropriate agencies to support the mitigation of those threats and vulnerabilities. US-CERT also is to use the information from the sensors to create analyses of cross-governmental trends that offer agencies an aggregate picture of external threats against the federal government's networks. In 2009, the Department of Homeland Security (DHS) incorporated network intrusion detection technology into the capabilities of the initial version of the system, enabling NCPS to monitor Einstein 1 network data from participating federal agencies for specific predefined patterns of known malicious activity, referred to as signatures. The NCPS intrusion detection capability, or Einstein 2, is to use signatures derived from numerous sources, such as commercial and public computer security information, incidents reported to US-CERT, information from federal partners, and independent US-CERT in-depth analysis.
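At its core, signature-based detection of the kind Einstein 2 performs is pattern matching over observed traffic. A toy sketch using regular expressions; the patterns and identifiers below are invented for illustration and are not actual NCPS signatures:

```python
import re

# Invented example signatures; real signatures are derived from the sources
# listed above and are far richer than simple byte patterns.
SIGNATURES = {
    "SIG-0001": re.compile(rb"cmd\.exe\s+/c"),              # suspected command injection
    "SIG-0002": re.compile(rb"(?i)eval\(base64_decode\("),  # obfuscated PHP payload
}

def match_signatures(payload: bytes) -> list[str]:
    """Return the identifiers of every signature the payload matches."""
    return [sig_id for sig_id, pattern in SIGNATURES.items()
            if pattern.search(payload)]

alerts = match_signatures(b"GET /?q=cmd.exe /c whoami HTTP/1.1")
print(alerts)  # -> ['SIG-0001']
```

Because matching is limited to known patterns, traffic that deviates from normal behavior but matches no signature passes unnoticed, which is the gap anomaly-based detection is meant to fill.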
When NCPS’s intrusion detection function detects traffic consistent with malicious patterns denoted by a particular signature, it provides US-CERT analysts with a notification. The analyst is then to investigate the detection to determine if it was in fact an incident and provide mitigation support to the affected agency, as appropriate. In 2013, DHS’s Network Security Deployment division (NSD) began deployment of an initial operational capability of the intrusion prevention function, operationally known as Einstein 3A, which is intended to support DHS’s ability to actively defend .gov network traffic. One of the major components supporting the capability is the “Nest,” which is a classified facility located at each of the participating Internet service providers that is responsible for off-ramping (i.e., routing traffic to the Nest from the agency) and on-ramping (i.e., routing traffic from the Nest back to the Internet) .gov traffic. DHS shares specific indicators of malicious activity with Internet service providers, who then configure the indicators into signatures for testing and implementation and match patterns against established indicators based on known or suspected malicious traffic traveling to or from the participating agencies. Table 5 below highlights additional intrusion prevention functions currently available in NCPS. Once fully deployed across the government, NCPS is intended to leverage available information from commercial and government sources to apply in-line protection measures to a wide set of federal network traffic protocols. When a signature detects a known or suspected cyber threat, NCPS is supposed to act on that threat to stop malicious traffic and prevent harm to the intended targets. Figure 3 provides an overview of how NCPS intrusion prevention capability is designed to work. NCPS’s analytic capability is intended to capture, organize, and analyze data collected from NCPS sensors and other data feeds. 
Table 6 below highlights key analytics functions currently available in NCPS. These capabilities are expected to enable US-CERT to fuse information and correlate malicious network activities across participating federal executive branch agencies to achieve situational awareness of high-profile cyber threats. US-CERT is responsible for sharing situational awareness about current and potential cybersecurity threats and vulnerabilities with federal agencies, state and local governments, private sector partners, infrastructure owners and operators, and the public. NCPS’s information-sharing capability is intended to enable enhanced sharing of information between DHS and its partners through real-time or near-real-time response; collaboration and coordination; and analysis of network intrusion attempts, suspicious intrusion activity, and analytical best practices. When fully developed, NCPS information sharing is intended to promote the rapid exchange of appropriate cyber threat and cyber incident information among NCCIC cybersecurity analysts and their cybersecurity partners, at multiple classification levels. Further, the capabilities are intended to reduce the time required to respond to incidents through better coordination and collaboration, and to improve efficiency through more automated information sharing and exposure of analysis capabilities. In addition to the contacts named above, Lon C. Chin, Michael W. Gilmore, Harold Lewis, Christopher Warweg (assistant directors); Andrew Banister, Bradley Becker, Christopher Businsky, Kush K. Malhotra, Lee McCracken, and David Plocher made key contributions to this report.

Cyber-based attacks on federal systems continue to increase. GAO has designated information security as a government-wide high-risk area since 1997. This was expanded to include the protection of critical cyber infrastructure in 2003 and protecting the privacy of personally identifiable information in 2015.
NCPS is intended to provide DHS with capabilities to detect malicious traffic traversing federal agencies' computer networks, prevent intrusions, and support data analytics and information sharing. Senate and House reports accompanying the 2014 Consolidated Appropriations Act included provisions for GAO to review the implementation of NCPS. GAO determined the extent to which (1) the system meets stated objectives, (2) DHS has designed requirements for future stages of the system, and (3) federal agencies have adopted the system. To do this, GAO compared NCPS capabilities to leading practices, examined documentation, and interviewed officials at DHS and five selected agencies. This is a public version of a report that GAO issued in November 2015 with limited distribution. Certain information on technical issues has been omitted from this version. The Department of Homeland Security's (DHS) National Cybersecurity Protection System (NCPS) is partially, but not fully, meeting its stated system objectives: Intrusion detection: NCPS provides DHS with a limited ability to detect potentially malicious activity entering and exiting computer networks at federal agencies. Specifically, NCPS compares network traffic to known patterns of malicious data, or “signatures,” but does not detect deviations from predefined baselines of normal network behavior. In addition, NCPS does not monitor several types of network traffic and its “signatures” do not address threats that exploit many common security vulnerabilities and thus may be less effective. Intrusion prevention: The capability of NCPS to prevent intrusions (e.g., blocking an e-mail determined to be malicious) is limited to the types of network traffic that it monitors. For example, the intrusion prevention function monitors and blocks e-mail. However, it does not address malicious content within web traffic, although DHS plans to deliver this capability in 2016. 
Analytics: NCPS supports a variety of data analytical tools, including a centralized platform for aggregating data and a capability for analyzing the characteristics of malicious code. In addition, DHS has further enhancements to this capability planned through 2018. Information sharing: DHS has yet to develop most of the planned functionality for NCPS's information-sharing capability, and requirements were only recently approved. Moreover, agencies and DHS did not always agree about whether notifications of potentially malicious activity had been sent or received, and agencies had mixed views about the usefulness of these notifications. Further, DHS did not always solicit—and agencies did not always provide—feedback on them. In addition, while DHS has developed metrics for measuring the performance of NCPS, they do not gauge the quality, accuracy, or effectiveness of the system's intrusion detection and prevention capabilities. As a result, DHS is unable to describe the value provided by NCPS. Regarding future stages of the system, DHS has identified needs for selected capabilities. However, it had not defined requirements for two capabilities: to detect (1) malware on customer agency internal networks or (2) threats entering and exiting cloud service providers. DHS also has not considered specific vulnerability information for agency information systems in making risk-based decisions about future intrusion prevention capabilities. Federal agencies have adopted NCPS to varying degrees. The 23 agencies required to implement the intrusion detection capabilities had routed some traffic to NCPS intrusion detection sensors. However, only 5 of the 23 agencies were receiving intrusion prevention services, but DHS was working to overcome policy and implementation challenges. Further, agencies have not taken all the technical steps needed to implement the system, such as ensuring that all network traffic is being routed through NCPS sensors. 
This occurred in part because DHS has not provided network routing guidance to agencies. As a result, DHS has limited assurance regarding the effectiveness of the system. GAO recommends that DHS take nine actions to enhance NCPS's capabilities for meeting its objectives, better define requirements for future capabilities, and develop network routing guidance. DHS concurred with GAO's recommendations.
EAS serves as the nation’s primary alerting system. It provides the President the capability to issue alerts and communicate to the public in response to emergencies. It was built on a structure conceived in the 1950s, when over-the-air broadcasting was the best-available technology for widely disseminating emergency alerts. EAS has been upgraded numerous times since then, including in 2005 to include digital broadcast television as well as satellite radio and television. EAS was further expanded to include Internet-protocol-based television in 2007. FEMA, in partnership with FCC and NOAA, is responsible for operating and maintaining EAS at the federal level. NOAA’s National Weather Service and state and local alerting authorities, in conjunction with local radio and television stations, can also use EAS to disseminate emergency messages, including weather warnings, America’s Missing: Broadcast Emergency Response (AMBER) Alerts, and other public emergency communications, targeted to specific regional and local areas and independent from a presidential alert. A national-level alert is disseminated through Primary Entry Point (PEP) stations, which are usually private or commercial radio stations, although FEMA has also designated some satellite providers as PEP stations, such as SiriusXM Satellite and National Public Radio’s Satellite System News Advisory Channel. PEP stations transmit the alert across the country to radio and television stations that rebroadcast the audio and visual message to other broadcast stations, cable systems, and other EAS participants until all participants have been alerted. This retransmission of alerts from EAS participant to EAS participant is commonly referred to as a “daisy chain” distribution system. While FEMA is responsible for administering EAS at the national level, FCC adopts, administers, and enforces rules governing EAS and the EAS participants.
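The daisy chain can be modeled as a small graph problem: each participant monitors one or more upstream sources and rebroadcasts whatever it receives. The station names and monitoring assignments in this sketch are hypothetical; it simply illustrates how a failure at one upstream point cuts off every participant downstream of it:

```python
def propagate_alert(monitors, origins, failed=frozenset()):
    """Return the set of stations an alert reaches.

    monitors: dict mapping each station to the station(s) it monitors.
    origins:  stations that receive the alert directly (e.g., PEP stations).
    failed:   stations unable to receive or retransmit.
    """
    reached = set(o for o in origins if o not in failed)
    changed = True
    while changed:
        changed = False
        for station, sources in monitors.items():
            if (station not in reached and station not in failed
                    and any(s in reached for s in sources)):
                reached.add(station)
                changed = True
    return reached

# Invented monitoring assignments: KAAA monitors PEP-1, KBBB monitors
# KAAA, and so on down the chain; KDDD depends entirely on PEP-2.
monitors = {
    "KAAA": ["PEP-1"], "KBBB": ["KAAA"], "KCCC": ["KBBB"], "KDDD": ["PEP-2"],
}
# With PEP-2 down, its downstream station never receives the alert.
print(propagate_alert(monitors, {"PEP-1", "PEP-2"}, failed={"PEP-2"}))
```

Assigning each station more than one monitoring source (redundancy) is the structural remedy for exactly this failure mode.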
FCC rules require EAS participants to install FCC-certified equipment and transmit all national-level alerts; EAS participants can also voluntarily transmit alerts generated by the National Weather Service or state and local alerting authorities. EAS participants, through their State Emergency Communications Committee, may maintain state EAS plans that contain procedures for the distribution of national-level alerts as well as other voluntary alerts generated by state and local alerting authorities and the National Weather Service. State EAS plans describe the EAS relay network of each state, including the monitoring assignments of EAS participants for all national-level and other alerts. On November 9, 2011, FEMA conducted the first-ever nationwide test of the national-level EAS in response to our prior reports noting the lack of EAS testing. FEMA conducted the test in conjunction with FCC. In conducting the test, FEMA initiated a national-level alert to be distributed through the EAS daisy chain to EAS participants, which include about 26,000 broadcasters, cable operators, and other EAS participants. To obtain information on the results of the test, FCC directed all EAS participants to report either electronically or via paper report by December 27, 2011, on whether they received and retransmitted the alert. Although December 27, 2011, was the deadline, FCC continued to accept paper reports from EAS participants past the deadline. In addition to EAS, state and local alerting authorities may own and operate other warning systems, such as emergency telephone notification systems, sirens, and electronic highway billboards, to provide public emergency information. Additionally, NOAA provides alerts through the NOAA Weather Radio All Hazards system, which is a network of radio stations broadcasting continuous weather information, including warnings, watches, and forecasts directly from the nearest National Weather Service office.
In 2004, FEMA initiated IPAWS to integrate EAS and other public-alerting systems into a larger, more comprehensive public-alerting system. In June 2006, the President issued Executive Order No. 13407, entitled Public Alert and Warning System, adopting a policy that the United States have a comprehensive, integrated alerting system. The order directs the Secretary of Homeland Security to “ensure an orderly and effective transition” from current capabilities to a more coordinated and integrated system and details the responsibilities of the Secretary in meeting the President’s directive. As shown in table 1, the executive order established 10 responsibilities for the Secretary of Homeland Security. It is FEMA’s intention that IPAWS be the programmatic mechanism to carry out the executive order. In addition, in 2006, the Warning, Alert, Response Network Act (WARN Act) was enacted, which required FCC to adopt relevant technical standards, protocols, procedures, and other technical requirements to enable commercial mobile service providers (e.g., wireless providers) to issue emergency alerts. The act required FCC to establish an advisory panel called the Commercial Mobile Service Alert Advisory Committee to recommend technical specifications and protocols to govern wireless service providers’ participation in emergency alerting. In 2008, following public notice and opportunity for public comment as required by the Administrative Procedure Act, FCC adopted many of the committee’s recommendations for wireless providers to transmit alerts and began developing the Commercial Mobile Alert System (CMAS), in conjunction with FEMA. We previously reported several factors that limited EAS effectiveness and delayed IPAWS implementation. For example, in 2009, we reported that a lack of redundancy and testing and gaps in coverage, including capabilities to reach individuals with disabilities and non-English speakers, significantly limited EAS reliability and efficiency.
We also reported in 2009 that IPAWS program implementation had stalled, as state and local governments were forging ahead with their own alerting systems. We made several recommendations to FEMA to improve program management and enhance transparency about the progress toward achieving an integrated public-alerting system. FEMA implemented all our recommendations, including periodically reporting on the status of implementing IPAWS to congressional committees and subcommittees. Since we reported on these issues in 2009, FEMA has taken actions to improve IPAWS capabilities. In particular, FEMA implemented a federal alert aggregator in 2010, called the IPAWS Open Platform for Emergency Networks, which has increased alerting capabilities for authorities at the federal, state, and local level. The alert aggregator is capable of receiving and authenticating alerts from public-alerting authorities and routing them to various public-alerting systems. As of January 2013, 93 public-alerting authorities, including those in at least 35 states, have gone through the necessary authentication steps with FEMA to use IPAWS, and an additional 110 alerting authorities have applications in process. Authorized public-alerting authorities may use IPAWS-compatible software to compose and transmit alerts via the Internet to the alert aggregator using a common standard, called the Common Alerting Protocol (CAP). According to FEMA, once the alert aggregator verifies the credentials of the message, an alert may be distributed to the public through multiple alerting systems, which make up the components of IPAWS, as follows: EAS. As of January 2012, public-alerting authorities can disseminate CAP-formatted EAS alerts through the alert aggregator to television and radio stations.
As of June 30, 2012, FCC required EAS participants (i.e., radio and television broadcasters, cable operators) to have in place CAP-compatible equipment and monitor the IPAWS EAS feed so they can retrieve and retransmit Internet-based EAS alerts. State and local alerting authorities’ use of IPAWS to send EAS alerts is voluntary and as of January 2013, no public-alerting authorities had used IPAWS to send an EAS alert. However, according to FEMA, state and local alerting authorities had sent 81 EAS test messages via the alert aggregator between January 2012 and January 2013. All-Hazards Emergency Message Collection System (HazCollect). NOAA’s HazCollect system connected to IPAWS in September 2012, and enables federal, state, and local alerting authorities to send non-weather emergency messages through IPAWS to the National Weather Service’s alerting systems, including NOAA Weather Radio’s nationwide network of radio stations. Examples of non-weather emergency message events can include wildfires, hazardous materials releases, terrorist incidents, AMBER alerts, and public health emergencies. According to FEMA, EAS participants generally monitor the NOAA Weather Radio directly for emergency alerts. As a result, IPAWS with HazCollect provides an alternate means for EAS participants to receive non-weather alerts from local alerting authorities, increasing the number of alerting channels and enhancing the likelihood that the public will receive timely alerts. According to FEMA, 22 NOAA Weather Radio messages had been sent via the alert aggregator as of January 2013. CMAS. Starting in April 2012, public-alerting authorities can use IPAWS to transmit alerts via the CMAS interface to disseminate mobile alerts, which are geo-targeted, text-like messages to mobile phones.
These alerts are limited to 90 characters and emit a unique ring tone and vibration cadence, which is intended to, among other things, improve capabilities for notifying individuals with disabilities during an emergency. This new capability is designed to relay presidential (or national-level), AMBER, and imminent threat alerts to mobile phones using cell technology that is not subject to the congestion typically experienced on wireless networks during times of emergency. Most imminent threat alerts are issued by the National Weather Service, which began sending severe weather-related alerts to all regions of the country in June 2012. According to FEMA, as of January 2013, the National Weather Service had sent 2,667 weather alerts via CMAS. An additional 3 imminent threat alerts had been sent from one state related to Hurricane Sandy and 17 AMBER alerts had been sent from the National Center for Missing and Exploited Children. While CMAS became operational in April 2012, participation by wireless carriers is optional under the WARN Act. Nevertheless, according to CTIA—The Wireless Association, all of the major wireless carriers have agreed to participate. Some carriers may still be rolling out CMAS capabilities and not all cell phones are yet capable of receiving alerts, according to CTIA. Some state and local alerting authorities we contacted raised concerns about the degree of granularity for geo-targeting these alerts, which we discuss later in this report. Internet services. As of September 2012, Internet web services (e.g., Google Public Alerts) and software application developers can retrieve and redistribute IPAWS alerts to the public through their own services, such as websites, mobile phone applications, email, and text messaging. To do so, an alert redistribution service must complete a memorandum of agreement with FEMA, which then grants them access to the IPAWS Public Alerts Feed from the alert aggregator. State and local alerting systems. 
According to FEMA, existing state or locally owned and operated public-alerting systems—such as sirens and emergency telephone notification systems—may also be configured to receive alerts from IPAWS. FEMA views the new capabilities for public-alerting authorities to distribute CAP-formatted messages through the federal alert aggregator as an added capability, not a replacement, to the traditional national-level alert (i.e., EAS daisy chain relay distribution system). As a result, FEMA officials said they anticipate maintaining both systems into the foreseeable future as parallel alerting systems, as shown in figure 2. FEMA officials also told us that discussions with the White House are ongoing to determine use of IPAWS during a presidential alert; however, at the time of our report, FEMA officials said a national-level alert would not be disseminated through the federal alert aggregator. In addition to creating the alert aggregator, FEMA has taken other actions to implement the IPAWS program and address directives in Executive Order No. 13407. Specific examples include: Expanded and modernized PEP stations. To increase direct coverage of a presidential alert and address executive order directives to augment infrastructure for the public alert and warning system, FEMA has expanded the number of PEP stations from 34 in 2009 (directly covering about 67 percent of the American population) to 65 in 2012 (directly covering about 85 percent of the American population), according to FEMA officials. FEMA plans to further expand and modernize this network, with the goal of having a total of 77 PEP stations operational by fall 2013, providing direct coverage to over 90 percent of the American population. FEMA officials stated they have also added satellite connectivity in 50 PEP stations, with the goal of a fully operational, dedicated PEP satellite network to all 77 stations by fall 2013.
According to FEMA officials, once operational, this network will be the primary connection between FEMA and the PEP stations in the event of a presidential alert; the traditional telephone-based distribution network will provide a redundant backup connection. Adopted CAP standard. To address directives in the executive order that DHS develop alert standards and protocols, FEMA formally adopted CAP in September 2010. CAP can be used as a single input to activate multiple warning systems, and is capable of geographic targeting and multilingual messaging. According to a survey FEMA conducted of more than 3,300 public-alerting authorities in the United States from January 2010 through December 2011, 64 percent of the sites responding used CAP and had IPAWS-compatible products in place at the time of the survey. Most public-alerting authorities we contacted are moving toward adoption of CAP; however, some are still in the process of implementing new software to interface with IPAWS or are waiting for vendors to provide upgrades to their existing systems. In addition, representatives from the broadcast industry told us, based on experience, that the vast majority of broadcasters are able to receive CAP-formatted alerts, as required by FCC rules. Developed IPAWS training and webinars. Executive Order No. 13407 directs DHS to conduct training for the public alert and warning system. To address this directive, FEMA developed an independent training course for alerting authorities on IPAWS capabilities, which has been available online since December 2011. The goal of the course is to provide public-alerting authorities with increased awareness of the benefits of using IPAWS for public warnings; skills to draft more appropriate, effective, and accessible warning messages; and best practices in the effective use of CAP to reach all members of their communities. In addition, the IPAWS program office conducts monthly webinars for developers and alerting practitioners.
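A CAP message is an XML document with a small set of required elements. The sketch below builds a minimal alert in the shape of the OASIS CAP 1.2 standard; the identifier, sender, event, and area values are invented, and a production message would carry additional elements (e.g., geocodes, effective and expiration times):

```python
import xml.etree.ElementTree as ET

CAP_NS = "urn:oasis:names:tc:emergency:cap:1.2"

def build_cap_alert(identifier, sender, sent, event, headline, area_desc):
    """Build a minimal, illustrative CAP 1.2 alert document."""
    ET.register_namespace("", CAP_NS)
    alert = ET.Element(f"{{{CAP_NS}}}alert")
    # Top-level header elements required by CAP.
    for tag, text in [("identifier", identifier), ("sender", sender),
                      ("sent", sent), ("status", "Actual"),
                      ("msgType", "Alert"), ("scope", "Public")]:
        ET.SubElement(alert, f"{{{CAP_NS}}}{tag}").text = text
    # The <info> block carries the event description and severity.
    info = ET.SubElement(alert, f"{{{CAP_NS}}}info")
    for tag, text in [("category", "Met"), ("event", event),
                      ("urgency", "Immediate"), ("severity", "Severe"),
                      ("certainty", "Observed"), ("headline", headline)]:
        ET.SubElement(info, f"{{{CAP_NS}}}{tag}").text = text
    area = ET.SubElement(info, f"{{{CAP_NS}}}area")
    ET.SubElement(area, f"{{{CAP_NS}}}areaDesc").text = area_desc
    return ET.tostring(alert, encoding="unicode")

xml_doc = build_cap_alert("EXAMPLE-001", "alerts@example.gov",
                          "2013-01-15T10:00:00-05:00",
                          "Flash Flood Warning",
                          "Flash flooding expected along the river basin",
                          "Example County")
print(xml_doc)
```

Because every downstream system (EAS encoders, CMAS gateways, NOAA Weather Radio, sirens) parses the same structured document, a single CAP message can activate multiple warning systems, which is the design point the paragraph above describes.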
Conducted outreach to partners. Since 2009, the IPAWS program office has made efforts to improve communication and outreach to stakeholders at all levels, according to FEMA officials. Executive Order No. 13407 directs FEMA to consult, coordinate, and cooperate with the private sector, as well as provide public education on IPAWS. Some government and private stakeholders told us that FEMA’s communication and coordination efforts have improved significantly since 2009, although improvements could still be made, especially in educating the public, as discussed below. According to FEMA officials, the IPAWS program office works to engage federal entities; state, local, tribal, and territorial alerting authorities; private sector industry; non-profit and advocacy groups; and the American people through working groups and roundtables, conferences, demonstrations, trainings and webinars, Congressional briefings, and the IPAWS Web site, among other mechanisms. For a complete list of actions FEMA has taken to address Executive Order No. 13407, see appendix II. Although FEMA has taken important steps to advance an integrated alerting system, barriers exist that may impede IPAWS implementation at the state and local level. Specifically, public-alerting authorities we contacted, as well as representatives from national trade industry groups, identified five main barriers at the state and local level. These barriers include (1) insufficient guidance on how states should fully implement IPAWS; (2) inability of state and local alerting authorities to test all IPAWS components; (3) CMAS geo-targeting and character limitations; (4) inadequate public outreach on IPAWS capabilities; and (5) limited resources at the federal, state, and local level to fully implement IPAWS. Insufficient guidance to fully implement IPAWS. 
While most state and local alerting authorities we contacted, including representatives from the National Emergency Management Association, said they are moving toward implementing IPAWS, some are reluctant to fully implement the system, citing a need for more information and additional guidance from FEMA. Specifically, while current IPAWS training exists to instruct public-alerting authorities on, among other things, how to draft an appropriate IPAWS alert, state and local alerting authorities we contacted said additional guidance is needed on integrating and operating IPAWS with existing state and local public-alerting systems in their states. For example, officials in one state said that while they are prepared to use IPAWS, they have not yet integrated their state and local alerting systems with IPAWS, citing a need for additional guidance from FEMA and communication within the state to determine what systems and policies should be put in place to integrate IPAWS with public-alerting systems in the state’s 128 counties and cities. Although Executive Order No. 13407 directs DHS to ensure interoperability and the delivery of coordinated public messages through multiple communication pathways, we found that none of our selected states had yet integrated their alerting systems with IPAWS for state or local level alerting, although according to FEMA, the alerting authorities had gone through the necessary steps to become authenticated IPAWS originators. Since IPAWS is still in the early stages of its deployment, officials said that there are no examples of how to effectively implement IPAWS at the state and local level. In commenting on a draft of this report, FEMA officials noted that they are involved in efforts to conduct case studies with public-alerting authorities in Nebraska and Nevada to provide examples of effectively implementing IPAWS at the state level. 
FEMA officials said they are working with state and local alerting authorities, as well as system developers and vendors, to address some notable challenges related to implementing IPAWS, including how states can manage IPAWS capabilities within their respective states. Nevertheless, in the absence of additional FEMA guidance, some states are reluctant to fully implement IPAWS, a reluctance that decreases the capability for an integrated, interoperable, and nationwide alerting system. CMAS geo-targeting and character limitations. CMAS enables government officials to target emergency alerts to specific geographic areas through cell towers (e.g., lower Manhattan), which pushes the information to dedicated receivers in CMAS-enabled mobile devices. State and local alerting authorities we spoke with raised concerns about the possibility of over-alerting the public with mobile alerts, since the alerts may not geo-target the specific area affected. The 90-character message limitation of these alerts was also raised as a challenge by FEMA and other alerting authorities to sending out clear and accurate alerts, as alerts may not contain enough information to be useful. For example, according to officials in one state, the National Weather Service issued a flash flood warning via CMAS that was distributed throughout a large county, which is roughly the size of the state of Connecticut, when only one small area of the county was affected. According to state officials, some citizens were confused when they received this alert as they were not located in the affected area, and there was very little information contained in the 90-character alert to clarify the specific area affected. In addition, an evacuation notice accompanied the flash flood warning, and the local emergency management authority was unprepared when citizens called them for additional information. Officials stated that some citizens might ignore or opt out of future mobile alerts if they received previous alerts that were not applicable to them.
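The 90-character constraint can be checked mechanically. This sketch is illustrative only; the sample message is invented, and real alert originators rewrite messages to fit rather than blindly truncate them:

```python
CMAS_MAX_CHARS = 90  # CMAS mobile alerts were limited to 90 characters

def fits_cmas(message: str) -> bool:
    """True if the message fits within the CMAS character limit."""
    return len(message) <= CMAS_MAX_CHARS

def truncate_for_cmas(message: str, suffix: str = "...") -> str:
    """Cut an over-long alert down to the limit (illustrative only)."""
    if fits_cmas(message):
        return message
    return message[:CMAS_MAX_CHARS - len(suffix)] + suffix

alert = ("Flash Flood Warning for Example County until 3:00 PM. "
         "Evacuate low-lying areas near the river immediately.")
print(len(alert))         # over the 90-character limit
short = truncate_for_cmas(alert)
print(short, len(short))  # exactly 90 characters
```

The example makes the trade-off concrete: squeezing an evacuation notice into 90 characters forces the originator to drop the very detail (which part of the county is affected) whose absence confused recipients in the incident described above.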
The Commercial Mobile Service Alert Advisory Committee, which recommended technical standards and protocols for CMAS in 2007, recommended reviewing and updating its recommendations periodically based on advances in technology and experiences in deployment, especially related to geo-targeting. As previously mentioned, FCC plans to have a federal advisory committee review the CMAS rules, including those related to geo-targeting and character limits. Technological advancements and experiences in using the system since 2008 may warrant a review on a more specific level of geo-targeting and expanded character limits for mobile alerts than was previously possible. Such changes to CMAS could make state and local authorities more likely to use these alerts and the public less likely to opt out of the service. Insufficient public outreach. According to federal, state, and local officials we contacted, the public is generally unaware of IPAWS capabilities, especially alerts sent to mobile phones. Although FEMA officials told us that a training course to educate the public is under development, FEMA has conducted limited outreach to date to inform the general public about IPAWS alerts and capabilities beyond information on the FEMA website. Executive Order No. 13407 directs DHS to provide public education on using, accessing, and responding to the public alert and warning system. Because of limited public outreach, some state and local alerting authorities expressed concern that the public may ignore or opt out of receiving IPAWS alerts, even though these alerts may provide important, life-saving information. While FEMA has made efforts to improve outreach efforts with IPAWS stakeholders since 2009, FEMA officials said they have limited resources and experience in educating the general public on IPAWS.
In previous work, we identified key practices for planning a consumer education campaign, including (1) defining goals and objectives; (2) analyzing the situation; (3) identifying stakeholders; (4) identifying resources; (5) researching target audiences; (6) developing consistent, clear messages; (7) identifying credible messenger(s); (8) designing media mix; and (9) establishing metrics to measure success. Public outreach that includes these key practices could help ensure that the public is better informed about IPAWS capabilities. Limited resources to implement IPAWS. While there is no charge to send messages through IPAWS, there are underlying costs to purchasing the software and equipment needed to integrate with IPAWS, costs that state and local public alerting authorities said can act as a barrier to implementation in difficult financial times. According to the FEMA survey of public alerting authorities, decreased revenues and a lack of grant funding at all levels of government were reported as primary reasons for authorities’ inability to purchase and sustain alerting systems. In addition, the FEMA survey found that while most state-level alerting authorities reported having full-time staff, many local authorities might only have part-time or volunteer staff and very limited budgets. In addition to these barriers, there are some long-standing weaknesses that continue to limit the effectiveness of the national-level EAS since we last reported on this topic in 2009, including a lack of redundancy in how national-level EAS messages are disseminated to the public. FEMA is making progress in increasing redundancy between the FEMA operations center and designated PEP stations through its deployment of a PEP satellite network. However, FEMA continues to rely solely on radio and television broadcast for a national-level EAS alert because the national-level EAS is not currently integrated with IPAWS capabilities.
As a result, FEMA lacks alternative means of reaching EAS participants should a point in the daisy chain distribution system fail. Moreover, large portions of the population would likely not be reached by a national-level alert—specifically all those who are not watching television or listening to the radio at the time of the alert. Executive Order No. 13407 directs DHS to ensure presidential alerting capabilities under all conditions and enable delivery of coordinated messages to the American people through as many communication pathways as practicable. In addition, while Executive Order No. 13407 specifies that the public-alerting system should provide warnings to non-English speakers and individuals with disabilities, it remains difficult for a national-level alert to reach these distinct segments of the population. While the President has never initiated a national-level alert, according to FEMA, such an alert would be provided in English and only through radio and television broadcasts, which may not be accessible to individuals with disabilities. For example, according to the National Council on Disability, most disaster warnings broadcast via radio and television may not be accessible to people with hearing or vision disabilities. IPAWS, which can deliver CAP-formatted messages to specialized alerting devices for individuals with disabilities and in non-English languages, could help address some of these limitations if it were integrated with the national-level EAS. Our analysis of FCC data found that approximately 82 percent of reporting broadcasters (radio and television) and cable operators received the November 2011 nationwide test alert. Although FEMA has been working to implement IPAWS, the November 2011 nationwide EAS test used the traditional national-level alert system (i.e., EAS daisy-chain relay distribution system) and did not include new IPAWS capabilities. Broadcasters’ and cable operators’ reception of the test alert varied widely by state.
As shown in figure 3, the reception of the alert ranged from approximately 6 percent (in Oregon) to 100 percent (in Delaware) among the states. FCC, FEMA, broadcasters, and state alerting authorities in Oregon attributed the low reception rate to the absence of a PEP station in the state at the time of the test. Without a PEP station, broadcasters and cable operators in Oregon were directed to monitor a Portland-based public radio station, which reported receiving poor audio quality of the alert from its designated monitoring source—the National Public Radio satellite network. Once EAS participants received the national-level test alert, they were required to retransmit the audio signal to other EAS participants, as designated in state EAS plans, for the daisy chain distribution system to work. Our analysis of FCC data found that 61 percent of reporting broadcasters and cable operators were able to retransmit the alert to stations that were designated to monitor the retransmitting station. The retransmission rate of the test alert by broadcasters and cable operators also varied widely among the states, ranging from approximately 4 percent (in Oregon) to 88 percent (in New Jersey). FCC does not know what percentage of the American people failed to receive the alert because, officials noted, the nationwide EAS test was designed to assess EAS performance rather than to determine the extent of public receipt of the test. Key reasons for EAS participants’ failure to receive and retransmit the national-level test alert included (1) PEP station reception failure, (2) poor audio quality, (3) shortened test length, (4) outdated monitoring assignments, and (5) equipment failure. PEP station reception failure. FEMA reported that 3 of the 63 PEP stations were unable to receive and retransmit the alert due to technical reasons. 
These PEP stations were located in New Mexico, Alabama, and American Samoa. Failures at those stations significantly contributed to low national-level alert reception rates in those states and that territory. In particular, our analysis of FCC data found that nearly 90 percent of broadcasters in New Mexico, almost 70 percent of broadcasters in Alabama, and 100 percent of broadcasters in American Samoa failed to receive the national-level alert. According to FEMA, connectivity issues with the specialized EAS equipment used at the PEP stations were the reasons for the failure. As previously mentioned, FEMA plans to modernize PEP stations with a dedicated satellite network, and officials expect this dedicated network to provide a more reliable connection to the PEP stations when fully operational by fall 2013. Poor audio quality. FCC also reported that poor audio quality of the national-level alert signal resulted in problems ranging from some broadcasters’ receiving a garbled and degraded audio message to others’ receiving a duplicate alert tone that caused equipment to malfunction. These audio problems resulted in some stations’ being unable to retransmit the test alert. According to FEMA, the reported poor audio quality was due, in part, to a feedback loop that occurred when equipment at a single PEP station rebroadcast the original message back to FEMA. This audio message was then transmitted by FEMA over the original audio message, degrading the audio. Therefore, fewer stations were able to receive, and thus retransmit, the alert to their designated station(s). EAS participants we met with consistently stated that the poor audio quality during the nationwide EAS test was a significant problem. For example, state and local alerting authorities, broadcaster associations, and individual broadcast stations we contacted stated that connectivity and audio problems occurred during the nationwide test. 
Officials from one state broadcasters association said that broadcasters in their state only received 10 seconds of the national-level alert signal with only five or six words of the message and then 20 seconds of dead air for the remainder of the test. They also stated that problems with the audio resulted in the alerts not being retransmitted to other stations in their state. Shortened test length. The nationwide EAS test was originally scheduled to last 3 minutes, but was shortened to 30 seconds. According to an industry trade association, the announcement to change the test length came about 2 weeks prior to the test. Because of the shortened test length, some broadcasters and cable operators were unable to receive or retransmit the national-level alert. According to FEMA, the test was shortened to mitigate concerns from the cable industry that the public who could not hear the audio portion of the test would be unable to tell if the alert was a test or a real alert solely from the television screen display. More specifically, FCC instructed broadcasters to use an on-screen slide just before the test to announce that the following message would be a test and not an actual alert. However, according to officials from an industry trade association, some EAS participants, namely some cable operators, were unable to provide this background screen during the nationwide test. In these cases, since FCC chose to use a live alert code to resemble an actual nationwide test, there was no visual cue that a test was taking place. There was concern that this could adversely affect some segments of the public, especially individuals who were unable to hear the audio portion indicating a test was taking place. According to representatives from industry trade associations, use of a test code for future nationwide EAS tests could help ensure that all segments of the population understand that a nationwide test, rather than an actual national emergency, is taking place. 
Outdated monitoring assignments. FCC noted that some state EAS plans, which designate the monitoring assignments, are outdated, and its review of the EAS test results revealed confusion among some EAS participants about monitoring assignments. We found that as of February 2013, out of 33 state and District of Columbia EAS plans available on FCC’s website, 16 state plans were dated 2009 or earlier, with 3 of these plans dated in the 1990s. Additionally, 18 state plans were not available on FCC’s website, and the link for one state led to information completely unrelated to that state. FEMA reported that if monitoring assignments in the state EAS plans are not followed or the state EAS plans are not up-to-date, EAS participants may not receive and relay the messages. According to FEMA, several EAS participants reported not being able to receive the national-level alert from their assigned sources, and as a result, they were unable to relay the alert. Equipment failures. Because of specific equipment failures, some broadcasters could not receive or retransmit the national-level alert. FCC reported that approximately 5 percent of EAS participants responding to its data collection effort reported that hardware, equipment, or configuration problems precluded them from receiving the national-level alert. At the time of our review, FCC and FEMA had taken limited steps to address problems identified in the nationwide EAS test. According to FEMA officials, the poor audio quality that was experienced during the test is being addressed, in part, with the deployment of a dedicated PEP satellite network, but the remaining issues have yet to be resolved. FEMA officials told us that it will take a combination of FCC rulemaking, developing best practices, and correcting technical issues to address the problems that were identified during the nationwide test, but implementing some of these actions could likely take years. 
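The failure modes above (a silent PEP station, outdated monitoring assignments, equipment failure) all act on the same relay structure: each station receives the alert only if its assigned upstream source receives and retransmits it. A small propagation sketch, with invented station names and monitoring assignments, illustrates how a single upstream failure cascades down the daisy chain:

```python
# Sketch of daisy-chain alert propagation: each station monitors one upstream
# source and retransmits only if it successfully received the alert.
# Station names and monitoring assignments are hypothetical.

def propagate(origin, monitors, failed):
    """origin: the alert source (e.g., the FEMA operations center).
    monitors: {station: upstream station it is assigned to monitor}.
    failed: set of stations unable to receive/retransmit.
    Returns the set of stations that receive the alert."""
    # Invert the monitoring assignments: upstream -> stations monitoring it.
    downstream = {}
    for station, upstream in monitors.items():
        downstream.setdefault(upstream, []).append(station)

    reached = set()
    frontier = [origin]
    while frontier:
        node = frontier.pop()
        for station in downstream.get(node, []):
            if station in failed or station in reached:
                continue  # a failed station silences everything below it
            reached.add(station)
            frontier.append(station)
    return reached

# Hypothetical chain: FEMA -> PEP station -> local stations -> a translator.
monitors = {
    "PEP-1": "FEMA",
    "KAAA": "PEP-1",
    "KBBB": "PEP-1",
    "KCCC": "KAAA",
}

print(propagate("FEMA", monitors, failed=set()))      # all four stations
print(propagate("FEMA", monitors, failed={"PEP-1"}))  # nobody downstream
print(propagate("FEMA", monitors, failed={"KAAA"}))   # KCCC is also cut off
```

The third call shows why outdated monitoring assignments matter: a station pointed at the wrong (or a failed) source is indistinguishable, from the public's perspective, from a station whose own equipment failed.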
According to FCC officials, a working group, in coordination with FEMA, has been examining these issues, but neither agency could identify progress made by the group more than a year after the test. In commenting on a draft of this report, FCC told us it issued its final report on the results of the nationwide EAS test on April 12, 2013. According to FCC officials, one of the reasons for the delay in issuing a final report on the test results was their effort to collect more data from EAS participants. FCC continued to accept paper reports on the test results from EAS participants for about a year after the test was conducted, despite the December 27, 2011, deadline for electronically submitting the test results. EAS participants and state and local alerting authorities we contacted said that they were not aware of FCC taking any actions to address identified issues, and as a result, their ability to make improvements and prepare for future tests is limited. Concerning future tests, FCC rules require a nationwide EAS test to be conducted periodically, but it is uncertain when the next test will occur. FEMA officials told us that they are continuing to work with FCC in determining corrective actions from the test results and will not hold another test until corrective actions are complete. As we have previously reported, regular nationwide EAS testing is essential to ensure that the system will work as intended during an emergency. FCC recognizes that outdated state EAS plans contributed to some of the reception and retransmission problems during the EAS test, and is being more proactive in requesting states to submit updated plans. FCC officials stated that updating state EAS plans would be valuable to ensure that the monitoring assignments for the broadcast stations remain accurate when a national-level alert is activated. However, as of October 2012, FCC had received only 7 of 50 updated state EAS plans. 
Officials stated that they would continue to ask state emergency communications committees to submit updated EAS plans to review, but that FCC has no authority to require the filing of EAS plans. As a result, FCC is unable to fully verify that states are keeping EAS monitoring assignments up to date. In addition, some EAS participants we spoke with are waiting for more guidance from FCC, including anticipated changes in rules governing EAS. For example, FCC officials told us that they plan to issue a notice of proposed rulemaking sometime in 2013 seeking comment on issues identified from the nationwide EAS test. EAS participants and state and local alerting authorities we spoke with stated that there are several actions that FCC, in conjunction with FEMA, could take that would assist EAS participants in preparing, conducting, and reporting on future nationwide EAS test alerts. These actions include (1) issuing an after-action plan to help identify and address problems that occurred during the test, (2) conducting regular and frequent testing of EAS to ensure the system works as intended, and (3) providing guidance to update state EAS plans to incorporate IPAWS (e.g., guidance could be EAS plan templates, best practices, good examples). FEMA has made progress since 2009 in developing a more comprehensive, integrated nationwide public-alerting system. FEMA has improved the capabilities of IPAWS by bringing the IPAWS alert aggregator online and integrating it with multiple alerting systems, including HazCollect and CMAS. However, for IPAWS to become fully operational, several areas of concern need to be addressed. In particular, additional guidance for state and local alerting authorities on specific steps to integrate and test their public-alerting systems with IPAWS components would help to provide assurance on the interoperability and effectiveness of IPAWS and facilitate its implementation. 
Furthermore, without additional guidance on IPAWS implementation and consideration of CMAS rules, state and local alerting authorities we contacted were reluctant to fully integrate their systems with IPAWS and rely on IPAWS as a comprehensive public-alerting system. In addition, a concerted effort to educate state and local governments, the private sector, and the American people on the functions of the public-alerting system is necessary to inform them on how to access, use, and respond to emergency alert messages. Using key practices for conducting a public education campaign—such as defining goals and objectives, identifying stakeholders and resources, and developing clear and consistent messages—could enable FEMA, which has limited experience educating the general public on IPAWS, to more effectively and efficiently inform the American people on how to access and respond to potentially life-saving emergency alerts. FEMA has also expanded the number of PEP stations and enhanced satellite connectivity to improve direct coverage and dependability of the national-level EAS. However, as long as the national-level EAS remains independent from IPAWS, portions of the population, including individuals with disabilities and non-English speakers, will be less likely to receive or fully understand presidential alerts disseminated only through the EAS daisy chain. If integrated, CMAS, in particular, is capable of providing alerts in different formats, including emitting unique ring tone and vibration cadences for those who have hearing or visual impairments, which would increase the likelihood that individuals with disabilities could be informed that a national-level alert is being issued. Furthermore, integrating EAS into IPAWS would provide system redundancy for national-level alerts. FEMA and FCC held the first-ever test of the national-level EAS in November 2011, an important step. 
However, the results of the nationwide EAS test—which a number of EAS participants could not effectively receive or retransmit—show that the reliability of the traditional EAS system remains questionable. At the time of our review, we found that FEMA and FCC had taken limited steps to address problems identified by EAS participants. In addition, some state EAS plans and monitoring assignments are outdated, in part, because state emergency communications committees are waiting for more guidance from FCC, including changes in rules governing EAS. Although states are not required to update and submit state EAS plans, FCC could help facilitate the process by providing additional guidance. Finally, while FCC rules call for periodic nationwide EAS testing, FCC and FEMA currently have not scheduled another nationwide test. Without ongoing, regular nationwide testing of the relay distribution system, there is no assurance the EAS would work should the President need to activate it to communicate with the American people. To ensure that IPAWS is fully functional and capable of distributing alerts through multiple pathways as intended, we recommend that the Secretary of Homeland Security direct the Administrator of FEMA to take the following four actions:

- In conjunction with FCC, establish guidance (e.g., procedures, best practices) that will assist participating state and local alerting authorities to fully implement and test IPAWS components and ensure integration and interoperability.

- In conjunction with FCC and NOAA, conduct coordinated outreach to educate the American public on IPAWS capabilities, especially CMAS.

- Develop a plan to disseminate a national-level alert via IPAWS to increase redundancy and communicate presidential alerts through multiple pathways. 
- In conjunction with FCC, develop and implement a strategy for regularly testing the national-level EAS, including examining the need for a national test code, developing milestones and time frames, improving data collection efforts, and reporting on after-action plans.

To ensure that CMAS is effectively used and that the EAS relay distribution network is capable of reliably communicating national-level alerts, we recommend that the Chairman of FCC, in conjunction with FEMA, take the following two actions:

- Review and update rules governing CMAS, including those related to geo-targeting, character limitations, and testing procedures.

- Provide states with additional guidance (e.g., EAS plan templates) to facilitate completion of updated state EAS plans that include IPAWS-compatible equipment.

We provided a draft of this report to DHS, FCC, and the Department of Commerce for their review and comment. In response, DHS concurred with all of the report’s recommendations to improve IPAWS capabilities. In its written comments, DHS provided examples of actions FEMA will undertake to address the recommendations. For example, DHS noted that FEMA intends to create toolkits for state and local alerting authorities that will include alerting and governance best practices, technology requirements, and operation and usage information on IPAWS. Regarding efforts to improve nationwide EAS testing, DHS indicated that FEMA plans to work with federal partners, including FCC, to create a national test code, develop milestones and time frames for future testing, improve data collection efforts, and report on after-action plans. See appendix III for written comments from DHS. In commenting on the draft report, FCC did not state whether it agreed or disagreed with the report’s recommendations. 
FCC noted that it issued a final report on the results of the nationwide EAS test on April 12, 2013, and we believe the report includes potential actions that could address our recommendations in the future. For example, the April 2013 report includes recommendations for FCC to commence a rulemaking proceeding on state EAS plans and to encourage the groups that typically develop state EAS plans to ensure that the plans contain accurate EAS monitoring assignments. Other recommendations in FCC’s April 2013 report include commencing a rulemaking proceeding to examine equipment-performance issues during activation of a test, and developing a new Nationwide EAS Test Reporting System database to improve filing electronic data from EAS participants. FCC stated that it will conduct a review of CMAS rules, as we recommended in this report, and also noted that it will work with FEMA to develop a strategy for regular testing of EAS. See appendix IV for written comments from FCC. The Department of Commerce provided technical comments from its component agency NOAA, and we incorporated them as appropriate. In the comments, NOAA stated that it believes our report does an accurate job in assessing the nationwide EAS test results and the current state of IPAWS. With respect to our recommendation on conducting outreach, NOAA believes the outreach should be conducted in conjunction with FCC and NOAA, and we made the suggested revision. In addition to written comments, DHS and FCC provided technical comments on the draft report, which we incorporated as appropriate. As agreed upon with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to appropriate congressional committees, the Secretary of Homeland Security, and the Chairman of FCC. In addition, the report is available at no charge on our website at http://www.gao.gov. 
If you or your staff have any questions concerning this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. This report provides information on federal efforts to integrate various public-alerting systems and modernize the Emergency Alert System (EAS). Specifically, the report examines (1) how the capabilities of the Integrated Public Alert and Warning System (IPAWS) have changed since 2009 and what barriers, if any, are affecting its implementation and (2) the results of the nationwide EAS test and federal efforts under way to address identified weaknesses. To obtain information on both objectives of this report, we interviewed officials from the Federal Emergency Management Agency (FEMA), Federal Communications Commission (FCC), Department of Homeland Security, and National Oceanic and Atmospheric Administration (NOAA). We spoke with representatives from national trade industry groups, including the National Emergency Management Association, National Association of Broadcasters, National Cable and Telecommunications Association, CTIA-The Wireless Association, and National Alliance of State Broadcasters Associations, to obtain stakeholders’ perspective on the results of the first nationwide EAS test and federal efforts to implement IPAWS. We also spoke with representatives from the satellite industry (DIRECTV), an EAS equipment manufacturer (Monroe Electronics), and the National Council on Disability to gather their views on IPAWS implementation and the nationwide EAS test. We conducted interviews with selected state and local alerting authorities, state emergency-communication-committee chairs, state broadcasting associations, and selected local broadcasters. 
We nonstatistically selected a sample of six locations—California, Kentucky, Oklahoma, Oregon, Wisconsin, and the District of Columbia—to obtain information from state and local officials on any barriers to implementing IPAWS and potential remedies for addressing any identified barriers, as well as to determine any problems associated with the nationwide EAS test. We selected these states and localities because some had (1) other public-alerting systems in addition to the EAS; (2) alerting systems capable of providing alerts for individuals with disabilities and those with limited English proficiency; or (3) experienced a breakdown of test alert dissemination during the nationwide EAS test. We also selected these states and localities because some had been authenticated as IPAWS-alerting authorities and because they were geographically diverse. To obtain a regional perspective on implementing IPAWS and testing the EAS, we also spoke with officials from FEMA regional offices. Because we conducted targeted interviews, our results are not generalizable to all states and localities. Table 1 provides more detailed information on the states and localities we selected and the entities we interviewed. To obtain information on how the capabilities of IPAWS have changed since 2009 and what barriers, if any, affect its implementation, we also reviewed and analyzed agency documents and literature since 2009. We reviewed documents on IPAWS program planning, including the 2010 IPAWS program management plan, and assessed actions that have been taken to determine if systems and standards are operational. We also attended a number of IPAWS webinars to obtain training and information that are provided to public-alerting authorities. 
We reviewed FEMA’s IPAWS Inventory and Evaluation Assessment Report, which surveyed 3,314 state, territorial, tribal, and local emergency management agencies to analyze gaps between existing public-alerting capabilities and IPAWS and includes recommendations for IPAWS integration. The survey was conducted mostly by telephone with structured questionnaires over a 2-year period from January 2010 through December 2011, and specific procedures were followed to identify emergency management personnel for the sites at each level. We assessed the survey’s methodology and determined that the estimates from it that we cite are sufficiently valid for use in our report. Specifically, we assessed the survey methodology against the Office of Management and Budget’s Standards and Guidelines for Statistical Surveys. We did not otherwise verify, however, the findings and conclusions from the report. To obtain information on the results of the nationwide EAS test and federal efforts to address any identified weaknesses, we reviewed and analyzed agency data and documents. Specifically, we examined FCC’s and FEMA’s preliminary reports on the nationwide EAS test results; FCC orders and rules on EAS; FCC’s website on the nationwide EAS test; FEMA’s EAS Best Practices Guide; and briefing documents from FEMA and NOAA. We analyzed FCC’s data from EAS participants to determine the percentage of radio and television broadcasters and cable operators that received and retransmitted the national-level alert on a statewide basis. We analyzed FCC’s data for 49 states; we did not include Alaska since it was excused from the nationwide test because of severe weather conditions. To determine the reliability of the data used in this report, we reviewed relevant documentation and interviewed agency officials about their processes for reviewing the data and ensuring their accuracy. We determined that FCC data were sufficiently reliable for our review. 
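The statewide reception and retransmission percentages described above amount to a simple tabulation over per-participant test filings. The sketch below shows one way such rates could be derived; the record layout and sample values are invented for illustration, and the real FCC filings contain many more fields:

```python
# Sketch: tabulating statewide reception and retransmission rates from
# per-participant test reports. Records below are hypothetical examples,
# not actual FCC filing data.
from collections import defaultdict

def statewide_rates(reports):
    """reports: iterable of (state, received, retransmitted) tuples.
    Returns {state: (pct_received, pct_retransmitted)}, rounded to
    whole percentages of the reports filed in that state."""
    totals = defaultdict(lambda: [0, 0, 0])  # [filed, received, retransmitted]
    for state, received, retransmitted in reports:
        t = totals[state]
        t[0] += 1
        t[1] += received       # bool counts as 0/1
        t[2] += retransmitted
    return {
        state: (round(100 * rec / filed), round(100 * ret / filed))
        for state, (filed, rec, ret) in totals.items()
    }

# Hypothetical filings from two states.
sample = [
    ("DE", True, True),
    ("DE", True, False),
    ("OR", False, False),
    ("OR", True, False),
    ("OR", False, False),
    ("OR", False, False),
]
print(statewide_rates(sample))  # DE: 100% received; OR: 25% received
```

Note that the denominator here is reports filed, not stations licensed; that distinction is why, as discussed earlier, the test data cannot say what share of the public went unreached.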
We reviewed and analyzed state EAS plans that were posted on FCC’s website to determine if the states’ EAS plans were current. We interviewed FCC officials to confirm that the information on FCC’s website is current. We conducted this performance audit from June 2012 through April 2013, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: FEMA’s Progress Addressing Responsibilities of the Secretary of Homeland Security under Executive Order No. 13407

Status/progress/timeline:

- Issued the IPAWS Inventory and Evaluation Assessment Report in January 2012. This report surveyed and assessed public-alerting authorities in the United States between 2009 and 2011.

- Formally adopted the Commercial Mobile Alerting System (CMAS) Specification in December 2009.

- Formally adopted the Common Alerting Protocol (CAP) standard for IPAWS in September 2010.

- Implemented CMAS. Wireless carriers began issuing geo-targeted CMAS alerts in April 2012; NOAA started sending geo-targeted CMAS messages in June 2012.

- Hosted biannual roundtables for industry experts, federal agencies, and advocacy organizations representing Americans with access and functional needs to discuss emergency alerting.

- Shared with FCC lessons learned and best practices for communicating to Americans with access and functional needs through the EAS. For example, encouraged FCC to consider EAS rule changes or clarifications for broadcasters with regard to (1) display size, color, background contrast, and speed of text crawl during an EAS alert and (2) use of a test code for future nationwide testing of EAS. 
- Expanded the number of primary entry point (PEP) stations to 65 total—31 PEP stations were either modernized or built since 2009. Anticipates a total of 77 PEP stations by fall 2013, directly covering 90 percent of the American people.

- Added satellite connectivity in 50 PEP stations.

- Integrated NOAA alerting systems to allow public-alerting authorities to send non-weather emergency messages through HazCollect; allowed NOAA to send mobile alerts beginning in 2012.

- Released IPAWS online training for public-alerting authorities in December 2011. Hosts monthly webinars for developers and alerting practitioners.

- Conducted two statewide EAS tests in Alaska, in January 2010 and January 2011; conducted the first nationwide EAS test on November 9, 2011.

- Conducted a CMAS test in New York City in December 2011. Conducts a required monthly test of CMAS on the third Wednesday of each month.

- Maintains a public website on IPAWS.

- Participated in federal working groups and roundtables. Participates in industry conferences, demonstrations, and panels.

- Acts as executive agent for EAS, maintaining the PEP stations. Maintains EAS and PEP stations. Deploying a dedicated PEP satellite network.

- Ensure the capability to distribute alerts on the basis of geographic location, risks, or personal user preferences.

In addition to the individual named above, Sally Moino, Assistant Director; Andy Clinton; Jean Cook; Bert Japikse; Delwen Jones; Jennifer Kim; Josh Ormond; Carl Ramirez; Jerry Sandau; and Andrew Stavisky made key contributions to this report.

An effective system to alert the public during emergencies can help reduce property damage and save lives. In 2004, FEMA initiated IPAWS with the goal of integrating the nation's EAS and other public-alerting systems into a comprehensive alerting system. In 2009, GAO reported on long-standing weaknesses with EAS and FEMA's limited progress in implementing IPAWS. 
Subsequently, FEMA and FCC conducted the first-ever nationwide EAS test in November 2011. GAO was asked to review recent efforts to implement IPAWS and improve EAS. GAO examined: (1) how IPAWS capabilities have changed since 2009 and what barriers, if any, affect its implementation and (2) results of the nationwide EAS test and federal efforts to address identified weaknesses. GAO reviewed FEMA, FCC, and other documentation, and interviewed industry stakeholders and alerting authorities from six locations that were selected because they have public-alerting systems in addition to EAS and experienced problems during the nationwide EAS test. Since 2009, the Federal Emergency Management Agency (FEMA) has taken actions to improve the capabilities of the Integrated Public Alert and Warning System (IPAWS) and to increase federal, state, and local capabilities to alert the public, but barriers remain to fully implementing an integrated system. Specifically, IPAWS has the capability to receive and authenticate Internet-based alerts from federal, state, and local public authorities and disseminate them to the public through multiple systems. For example, since January 2012, public-alerting authorities can disseminate Emergency Alert System (EAS) messages through IPAWS to television and radio stations. Beginning in April 2012, alerting authorities have used IPAWS to transmit alerts via the Commercial Mobile Alert System interface to disseminate text-like messages to mobile phones. FEMA also adopted alert standards and increased coordination efforts with multiple stakeholders. Although FEMA has taken important steps to advance an integrated system, state and local alerting authorities we contacted cited a need for more guidance from FEMA on how to integrate and test IPAWS capabilities with their existing alerting systems. 
For example, an official with a state alerting authority said that additional guidance from FEMA is needed to determine what systems and policies should be put in place before integrating and testing IPAWS with other public alerting systems in the state's 128 counties and cities. In the absence of sufficient guidance from FEMA, states we contacted are reluctant to fully implement IPAWS. This reluctance decreases the capability for an integrated, interoperable, and nationwide alerting system. The Federal Communications Commission (FCC) required all EAS participants (e.g., broadcast radio and television, cable operators, satellite radio and television service providers, and wireline video-service providers) to submit a report to FCC by December 27, 2011, on the results of the nationwide EAS test. As of January 2013, 61 percent of broadcasters and cable operators had submitted the required report. Of those, 82 percent reported receiving the nationwide test alert, and 61 percent reported successfully retransmitting the alert to other stations, as required. Broadcasters' and cable operators' reception of the alert varied by state, from 6 percent in Oregon to 100 percent in Delaware. Key reasons for reception or retransmission difficulties included poor audio quality, outdated broadcaster-monitoring assignments, and equipment failure. For example, poor audio quality of the test alert resulted in some broadcasters' receiving a garbled and degraded audio message and others' receiving a duplicate alert that caused equipment to malfunction. According to FEMA officials, the poor audio quality is being addressed, in part, with the deployment of a dedicated satellite network that will become fully operational by fall 2013. However, at the time of our review, FEMA and FCC had taken few steps to address other problems identified in the nationwide test. Furthermore, while FCC rules call for periodic nationwide EAS testing, it is uncertain when the next test will occur. 
Without a strategy for regular nationwide testing of the relay distribution system, including developing milestones and timeframes and reporting on after-action plans, there is no assurance that EAS would work as intended should the President need to activate it to communicate with the American people. GAO recommends that FEMA work in conjunction with FCC to establish guidance for states to fully implement and test IPAWS components and implement a strategy for regular nationwide EAS testing. In response, the Department of Homeland Security (DHS) concurred with GAO's recommendations and provided examples of actions aimed at addressing the recommendations. DHS, FCC, and the Department of Commerce also provided technical comments, which have been incorporated as appropriate.
In January 2001, we reported on Department of Defense management challenges and noted that the Department has had serious weaknesses in its management of logistics functions and, in particular, inventory management. We have identified inventory management as a high-risk area since 1990. Despite years of efforts to resolve its inventory problems, the Department still has spare parts shortages. (See app. I for examples from our reports on management weaknesses related to the Army’s spare parts shortages.) We are also reviewing the Department of Defense’s practice of cannibalizing parts on aircraft; that report will be completed at a later date. In a separate report issued earlier this year, we indicated that current financial information did not show the extent to which funds were used for spare parts. The Department of Defense planned to annually develop detailed financial management information on spare parts funding uses but had not planned to provide it to the Congress. When we recommended that the Secretary of Defense routinely provide this information to the Congress as an integral part of the Department’s annual budget justification, the Department agreed to do so. The Department of Defense submits quarterly reports to the Congress regarding military readiness. The reports describe readiness problems and remedial actions, comprehensive readiness indicators for active components, and unit readiness indicators. The Army’s readiness reports provide assessments of its major systems, which include aircraft. The readiness goal for aircraft is to have 70 to 80 percent of them mission capable. The Apache (AH-64) is the Army’s main attack helicopter and is equipped to destroy, disrupt, or delay enemy forces. Originally produced in fiscal year 1982, it is designed to fight and survive during the day and night and in adverse weather throughout the world. The Blackhawk (UH-60), first fielded in 1978, primarily performs air assault, air cavalry, and medical evacuation missions. 
The Chinook (CH-47), first used in Vietnam in 1962, moves artillery, ammunition, personnel, and supplies on the battlefield. Figure 1 shows the Apache, Blackhawk, and Chinook helicopters. The Army’s spare parts include reparable and consumable parts. Reparable parts are expensive items, such as hydraulic pumps, navigational computers, and landing gear, that can be fixed and used again. The Aviation and Missile Command manages reparable parts. The Corpus Christi Army Depot and contractors repair helicopters and aviation reparable parts. The Defense Logistics Agency provides the Army consumable parts (e.g., nuts, bearings, and fuses), which are used extensively to fix reparable parts and aircraft, and manages a large part of the warehousing and distribution of reparable parts. The Defense Supply Center, Richmond, is the lead center for managing aviation consumable spare parts. Figure 2 shows the process for providing spare parts to Army helicopter units and the repair facilities. Although the Apache, Blackhawk, and Chinook helicopters generally met their mission-capable goals during fiscal years 1999-2000, suggesting that parts shortages did not degrade mission capability, supply availability rates and the cannibalization of parts indicate that spare parts shortages have nonetheless been a problem. These parts shortages created inefficiencies in maintenance processes and procedures that have lowered the morale of maintenance personnel. As shown in figure 3, during fiscal years 1996-2000, the three helicopters we reviewed generally met their mission-capable goals. In fiscal year 1996, the Blackhawk’s mission-capable rate was 79.25 percent, which, according to an Aviation and Missile Command official, was just slightly below its readiness goal of 80 percent. The Command official said the Blackhawk probably missed its mission-capable goal that year for several reasons, including a number of aviation safety action messages that were issued. 
These messages identified maintenance, technical, or general problems for which the safety condition of the aircraft had been determined to be a low to medium risk. The Chinook and the Apache did not meet their mission-capable goal of 75 percent in August and November 1999, respectively, when the entire fleet of helicopters was grounded because of “safety restrictions.” A safety restriction pertains to any defect or hazardous condition that can cause personal injury, death, or damage to aircraft, components, or repair parts for which a medium to high safety risk has been determined. The Chinook was grounded because of a cracked gear in the transmission, which was already in short supply before the safety restriction. The gear changes the direction of power from the engine and reduces the speed that turns the rotor blades (see fig. 4). The Apache helicopters were grounded because of transmission clutch failures. According to an Army official, the clutch engages and disengages the gears in the transmission. Aviation and Missile Command officials also said that the grounding of these helicopters created demands for parts that the wholesale system did not have available. The safety concerns, coupled with the lack of spare parts, contributed to these helicopters’ failure to meet their mission-capable goals. As shown in figure 5, during fiscal years 1999-2000, parts for the Apache and Blackhawk helicopters seldom met the Army’s supply availability goal of 85 percent. The supply availability rate is the percentage of requisitions filled at the wholesale inventory level. The goal is designed to measure the overall effectiveness of the wholesale system. While the Blackhawk met the supply availability goal only twice during the 2-year period, the Apache never met the goal. We identified several reasons for spare parts shortages, which are discussed later. 
To compensate for the lack of spare parts, maintenance personnel cannibalize or substitute parts from one aircraft to another. According to the Army Aviation Maintenance Field Manual 3-04.500, cannibalization is done when, among other things, (1) the aircraft from which the exchanged parts will be used is grounded and awaiting repair parts; (2) needed repair parts are on order before the cannibalization; (3) the parts will return the other aircraft to a mission-capable status; and (4) all possible alternatives (local procurement or manufacturers) have been tried without success. A January 2000 aviation logistics study showed that cannibalization is an accepted maintenance practice at the unit level to return aircraft to mission-capable status. According to a Fort Campbell 101st Airborne Division official, the principal reason for cannibalizations is the nonavailability of serviceable repair parts. The results from our spare parts review showed that cannibalizations at Fort Campbell involved the Apache and Blackhawk main fuel controls, the Blackhawk engines, and the Chinook rotary wing head. The rotary wing head is the main assembly of the rotor system that produces the lift, thrust, and directional control needed for helicopter flight. Figure 6 shows the rotary wing head. Fort Campbell’s contractor maintenance personnel also cannibalized the Apache housing assembly and actuator bracket and the Chinook aircraft access door. The actuator bracket anchors the servocylinder to the aircraft. Although the previous examples show units’ reliance on cannibalization to overcome the unavailability of parts, the practice does not resolve spare parts shortages. According to the Army’s Deputy Chief of Staff for Logistics, supply shortages, which are masked through the extensive use of cannibalizations, are a continuing problem the Army is working to resolve. 
As we testified in May 2001, according to Army officials, only a small portion of Army cannibalizations are reported (only those involving serial-numbered parts). The Army does not track cannibalizations servicewide and does not require subordinate commands to do so. Therefore, the full extent to which this practice is used is unknown. However, the Floyd D. Spence National Defense Authorization Act for Fiscal Year 2001 (P.L. 106-398, sec. 371) requires the Department of Defense to measure, on a quarterly basis, the extent to which units remove usable parts, supplies, or equipment from one vehicle, vessel, or aircraft in order to render a different system operational. The Department is working to establish definitions, standards, and a shared framework for the collection and reporting of data on cannibalization. The first report of these data is targeted for the April-June 2001 Quarterly Readiness Report to the Congress. Although cannibalization may keep aircraft flying, it is not an efficient practice. According to the January 2000 aviation logistics study, this practice doubles the hours dedicated to a single maintenance effort. With limited hours available to conduct repairs and maintenance, this duplication of effort is a significant factor in deciding whether to use the practice. Also, as we testified in May 2001, this practice requires at least twice the maintenance time of normal repairs because it involves removing and installing components on two aircraft instead of one (see fig. 7). Additionally, when a mechanic removes a part from one aircraft to place it on another, the risk of damaging the aircraft or the “good” part in the process is magnified. As we testified in May 2001, evidence suggests that cannibalizations have negatively affected morale because they are sometimes seen as routinely making unrealistic demands on maintenance personnel. 
According to a Fort Campbell official, the added workload of cannibalization detracts from the quality of life of aircraft maintenance soldiers. Also, an Army official said the added workload degrades maintenance soldiers’ morale. Cannibalizations may need to be performed quickly at any time, day or night, to meet operational commitments. In such cases, personnel must continue working until the job is done, regardless of how much time it takes. Further, in August 1999 we reported that the majority of factors cited by military personnel as sources of dissatisfaction and reasons for leaving the military were work-related circumstances such as the lack of parts and materials to successfully complete daily job requirements. Our review showed that the primary reasons for shortages of spare parts for the Apache, Blackhawk, and Chinook helicopters were unanticipated demands for parts and delays in obtaining parts from a contractor. Also, problems concerning the overhaul and maintenance of spare parts created shortages. A contributing factor, which was not identified in our review but which Army and Defense Logistics Agency officials acknowledged, was the difficulty in obtaining parts for these aging helicopters because original manufacturers may no longer be in business. We selected for review 90 spare parts for the Apache (32 parts), Blackhawk (34 parts), and Chinook (24 parts) helicopters. Officials at the units and repair facilities identified these 90 parts as being unavailable to complete repairs. (See app. II for a list of these parts.) Table 1 shows the reasons for the shortages, by helicopter, for the 90 spare parts we reviewed. The major reason for the shortages of the 90 spare parts we reviewed was that demands for parts were not anticipated, owing to unforeseen safety concerns, the recalculation of parts’ useful life, and other sudden increases in demands. 
The Army and the Defense Logistics Agency forecast the demand for parts using past data on the usage of parts, when available. According to an Army document, a demand that was not anticipated results in the need for parts that the Army had not planned for when determining requirements for parts. A June 2000 Army Audit Agency report also cited demands that were not anticipated as a main factor causing parts shortages. Parts identified as causing safety problems resulted in unanticipated demands for spare parts and created shortages. For example, according to a safety message, because a cracked gear in a Chinook transmission was discovered during an overhaul, the entire fleet was grounded in August 1999. According to an Aviation and Missile Command item manager, units sent transmissions suspected of having problems to the Corpus Christi Army Depot for repair. The item manager also said that the safety issue exacerbated an already existing condition because the Command never had enough transmissions on hand to meet the average monthly demand. Causes of this condition were identified as long lead times to (1) award contracts and (2) manufacture and repair transmissions. Since the safety concerns arose, demands have increased significantly, and as of March 2001, 75 transmissions were on back order. Similarly, according to a February 2000 safety message, an engineering analysis indicated that the retirement life of the Apache main rotor blade attach pins (see fig. 8) needed to be reduced because specific pins might not provide the proper fit and would result in significant degradation of the pins’ life due to fatigue. (The pins attach the helicopter blade to the main rotor.) Units were required to inspect all main rotor pins and replace defective pins with new ones that last longer. In June 2000, demands for the pins increased, and the Command’s records show that 81 pins were on back order and 14 pins were on hand to support an average monthly demand of 30 pins. 
The recalculated useful life of parts also resulted in unanticipated demands for parts that the Army had not planned for and created shortages. According to an Aviation and Missile Command team leader of the Apache Systems Engineering Office, the useful life of the Apache’s housing assembly (see fig. 9) and rotor damper (see fig. 10) changed because the Command conducted test flights that recorded the accurate fatigue factors for parts. The official said that recalculating the parts’ useful life based on accurate data instead of estimates reduced their useful life. Aviation and Missile Command records show that in August 2000 the Command had only three usable Apache housing assemblies on hand when the assembly’s useful life was recalculated and reduced from 1,981 to 1,193 hours, about a 40-percent reduction. As a result, repairs had to be made more frequently, and there were more demands for the housing assemblies than were available. Similarly, the Command’s records show that the Command had only seven usable Apache rotor dampers on hand when the damper’s useful life was reduced from 3,710 to 2,057 hours, about a 45-percent reduction. In September 2000, demands for the rotor dampers increased, and the Command’s records show that 53 were on back order while the average monthly demand was 65. Finally, parts that were ordered more frequently than expected caused shortages when increases in demand for the items were not anticipated. For example, field units’ demands for bearings used on the Blackhawk helicopters outpaced the contractor’s production. According to an Aviation and Missile Command item manager, a new bearing was introduced in 1996. The Command’s records show that before the new, improved bearing was introduced, units replaced this bearing every 70 hours. The new bearing lasts 4,000 hours, and the contractor could not produce enough to meet the demand. In March 2000, Army records show that 976 bearings were on back order. 
According to a Command item manager, the increase in demand for the bearings occurred because (1) units were stockpiling the bearings and (2) the parts were being replaced worldwide on all Blackhawk helicopters because they lasted longer. The item manager stated that the units were no longer ordering excessive quantities of bearings and that as of May 2001, there were 300 on hand and 423 on back order. Likewise, the Command experienced a 25-percent increase in demands for Apache fuel boost pumps (see fig. 11). A Command team leader for the Apache airframe was uncertain what caused the surge in demand but noted that it was not unusual for parts to fail because of the aircraft’s age. The Command’s records show that unexpected failures of motors occurred during repairs, which delayed production of fuel boost pumps to meet increased demands. The Command’s records also show that in October 2000, there were 45 back orders and no usable fuel boost pumps on hand to meet an average monthly demand of three. Poor contractor performance and delays in negotiating a contract also resulted in parts shortages. For example, Defense Logistics Agency records show that as a result of a contractor’s late deliveries of Apache shear bolts, the Agency did not have the parts available for Apache users. Agency records show that the contract was terminated and another one was awarded to a different contractor. Also, Army records show that the Command had difficulty negotiating with a sole-source contractor to provide Apache servocylinders at reasonable prices. Because of the time it took the Aviation and Missile Command to award the contract, the parts were not provided to users when needed. Because of a shortage of parts, the Corpus Christi Army Depot experienced problems that prevented it from repairing and overhauling aviation parts in a timely manner. In May 1999, the Corpus Christi Army Depot received a requirement to overhaul 20 Blackhawk T-700 engines (see fig. 12). 
In July 1999, the depot received the fiscal year 2000 requirement to overhaul 30 engines, which increased to 65 in October 1999 and to 80 in December 1999. Because of these increases, the depot did not have enough time to determine the parts needed to support the overhaul requirements, and the parts were not available to complete repairs in a timely manner. The depot also did not have the personnel available to respond quickly to the dramatic increases in overhaul requirements and thus could not repair parts promptly. Further, in June 2000 an Aviation and Missile Command record showed that the average demand for Blackhawk T-700 engines was 7 per month, 66 engines were on back order, and 249 engines needed to be repaired. Another maintenance problem we identified was a shortage of parts used to repair cold section modules, a compressor section in the T-700 engine. The repair of cold section modules was also affected by the need for personnel to support the overhauls of Blackhawk T-700 engines. Aircraft age was not a reason for the 90 spare parts shortages we reviewed. However, Army and Defense Logistics Agency officials informed us that the age of the Apache, the Blackhawk, and the Chinook is a factor contributing to parts shortages for these systems. The aircraft were originally developed in the 1980s, 1970s, and 1960s, respectively, and they are expected to remain useful for a number of years. The Commander of the Army Materiel Command said in 1999 that the Army expects to maintain an upgraded model of the almost 40-year-old Chinook for an additional 30 years. He added that because of the aircraft’s ages, parts consumption increases, inventory is depleted, cannibalization is necessary, and procurement costs of replenishment stocks increase. 
According to the Defense Logistics Agency’s November 2000 Aging Aircraft Program Management Plan, because of the extended age of these systems, the Army is concerned about the degradation of their structural integrity and the hard-to-find structural and electrical parts. Also, according to a report prepared for the Defense Microelectronics Activity, manufacturing sources for spare parts can be diminished because of uneconomical production requirements and the limited availability or increasing cost of items and raw materials used in the manufacturing process. Army and Defense Logistics Agency officials commented, and the plan states, that this issue is serious because the original contractors that produced some spare parts for aging weapon systems may no longer be in business or may have upgraded their production lines to accommodate technologically advanced parts. However, we did not find this factor to be a reason for the shortages of the parts we reviewed. The Army and the Defense Logistics Agency have initiatives under way or planned to revolutionize and integrate logistics processes, upgrade aging aircraft, and improve the supply of aviation parts. The concept for these initiatives generally addresses the reasons we identified for spare parts shortages. The Army has developed a Strategic Logistics Plan intended to integrate the modernization and transformation of logistics processes throughout many organizations. The Army initiatives we identified are linked to the plan’s asset management process, which is designed to match available assets with needs, identify shortages of assets, and interface with government and industry suppliers to buy additional assets. We have previously reported problems with the way the Army has implemented its logistics initiatives and recommended that it develop a management framework for its initiatives, to include a comprehensive strategy and performance plan. 
The Army has actions under way to address the recommendation; therefore, we are not making any additional recommendations at this time. The various Army-wide, Army Materiel Command, and Defense Logistics Agency initiatives are described in the following sections. Among the efforts the Army has under way to improve the availability of spare parts are its Strategic Logistics Plan, Logistics Transformation Plan, Single Stock Fund, Velocity Management, and National Maintenance Program. Under its Strategic Logistics Plan, the Army hopes to change from its current reactive approach to one that is more effective, efficient, and responsive. The initiatives planned or under way that are designed to resolve spare parts shortages are linked to the asset management process under the Army’s planned change in approach. The plan was last updated on May 11, 2000, to show how the Army will achieve its synchronization goals by meeting the requirements of the Government Performance and Results Act (P.L. 103-62 (1993)). The next update is planned for fall 2001 and is to include a timeline with milestones and metrics to track, measure, and better manage the transformation process. In September 1999, we recommended that the Army develop a management framework to include a comprehensive strategy and a performance plan for implementing its initiatives. In March 2000, the Department of Defense issued Defense Reform Initiative 54, which requires each military service to submit an annual logistics transformation plan. The purpose of this plan is to document, on an annual basis, the planned actions and related resources for implementing logistics initiatives, including actions that directly support the Department’s Logistics Strategic Plan. 
Initiative 54 requires that the services’ transformation plans include each of the key management framework elements specified in our prior reports. In response to our previous recommendation, in May 2000 the Army decided to combine preparation of its Strategic Logistics Plan with its response to Defense Reform Initiative 54. In July 2000, the Army developed its Logistics Transformation Plan in response to initiative 54. However, we did not evaluate this plan to determine whether its management framework included a comprehensive strategy and performance plan. Since the Army is taking actions on our previous recommendation to develop a management framework, we are not making new recommendations at this time. We are now reviewing the adequacy of the strategic logistics planning process within the Department of Defense and component commands, and this review will include the services’ logistics transformation plans. This report will be completed later this year. The Army’s single stock fund is a business process reengineering initiative to improve secondary-item logistics and financial processes in the Army Working Capital Fund, Supply Management business area. The fund is aimed at improving the availability of spare parts by, among other things, (1) providing worldwide access to parts down to the installation level, (2) consolidating separate national-level and retail elements into a single fund, and (3) integrating logistics and financial automated information systems. In 1987 the Army began to study its stock fund operations. The Army’s single stock fund program campaign plan was approved by the Vice Chief of Staff in November 1997, and during the first quarter of fiscal year 2002, the Army plans to transfer all stocks, which include wholesale and retail inventories, to single management by the Army Materiel Command. 
In September 1995, the Army established its Velocity Management Program to develop a faster, more flexible, and more efficient logistics pipeline. The program’s goals, concept, and top management support parallel improvement efforts in private sector companies. The program’s overall goal is to eliminate unnecessary steps in the logistics pipeline that delay the flow of supplies through the system. The program consists of Army-wide process improvement teams for the following four areas: the ordering and shipping of supplies, the repair cycle, inventory levels and locations (also known as stockage determination), and financial management. The National Maintenance Program, an Army-wide initiative announced in July 1999, is designed to maximize repair capabilities and optimize the use of available resources at all maintenance levels within the Army. The initiative centralizes the management of all Army sustainment maintenance programs while decentralizing the actual repair of components and parts. The workload will be distributed across depot and installation activities, and repairs will be made based on the national need for an item. Additionally, the Army plans to upgrade its aging aircraft through its Recapitalization Program (a part of the National Maintenance Program), which it will achieve by overhauling aircraft components and upgrading its aircraft. The purpose of this program is to (1) extend aircraft service life; (2) reduce operating and support costs; (3) improve reliability, maintainability, safety, and efficiency; and (4) enhance capability. A limited number of weapon systems will begin this process in fiscal year 2002, with full-scale upgrades beginning in fiscal year 2003. The Apache, Blackhawk, and Chinook helicopters have been identified as candidates for the program. 
The Army Materiel Command has several initiatives under way to help resolve spare parts shortages, including (1) identifying processes for forecasting requirements for spare parts, (2) analyzing the spare parts program to identify issues that affect aviation spare parts shortages, and (3) working with contractors to provide spare parts. These initiatives are separate from those in the Army’s Strategic Logistics Plan. In July 2000, the Army Materiel Command established the Forecasting and Support Techniques Working Group to identify processes for forecasting requirements for spare parts and to develop a plan to resolve any identified problems. The Army uses forecasting to develop quantity and resource requirements for inventory. Its basic principles are to maintain current data on customer demand, lead times for obtaining parts, internal process costs, and stock levels, and to replenish parts in a timely manner. By January 2001, the working group had prioritized several issues for its review. In August 2000, the Army Materiel Command established the Spare Parts Shortages Integrated Process Team to analyze the spare parts program and to focus initially on aviation parts managed by the Aviation and Missile Command. The team identified issues that have affected spare parts shortages, including (1) an increase in demands that led to reduced availability of reparable parts; (2) understated times for administration and production of spare parts, which resulted in the reduced availability of consumable and reparable parts; and (3) changes in requirements as the result of problems with parts that affected aircraft safety and readiness and minimally affected the availability of spare parts. The team recommended that these issues be used to influence the next budget submission. 
The Aviation and Missile Command is attempting to help resolve spare parts shortages by establishing partnerships with key contractors to reduce the time it takes to provide spare parts once a need has been identified. The Command works to ensure that prime contractors remain focused on readiness, lead times, spare parts reliability, and rapid response to customer needs. Among the efforts the Defense Logistics Agency has under way to improve the availability of spare parts are its Aviation Investment Strategy, Aging Aircraft Program, and contracts for consumable parts. The Defense Logistics Agency’s major initiative to resolve aircraft spare parts shortages is its Aviation Investment Strategy. This fiscal year 2000 initiative focuses on replenishing consumable aviation repair parts that have been identified as having availability problems that affect readiness. To achieve this initiative, the Agency plans to invest $17.3 million in aviation spare parts for the Army from fiscal years 2000 through 2003. As of fiscal year 2000, about $4.8 million had been obligated for this purpose. The purpose of the Defense Logistics Agency’s Aging Aircraft Program is to consistently meet the goals for spare parts availability for the Army, Navy, and Air Force aviation weapon systems. The program’s focus will be to (1) provide inventory control point personnel with complete, timely, and accurate information on current and projected parts requirements; (2) reduce customers’ wait times for parts for which sources or production capability no longer exist; and (3) create an efficient and effective program management structure and processes that will achieve the stated program goals. The Aging Aircraft Program Management Plan was issued in November 2000, and the Agency plans to invest about $20 million in this program during 2001-2007. 
The Defense Supply Center Richmond has a 2-year contract with an option for 3 years with one contractor and a 5-year contract with another contractor for consumable Army aircraft spare parts. According to a Defense Supply Center Richmond document, the use of best commercial practices will benefit aircraft users through improved delivery schedules and reduced inventory storage and administrative costs. In written comments on a draft of this report, the Principal Assistant Deputy Under Secretary of Defense for Logistics and Materiel Readiness indicated that the Department of Defense generally concurred with the report. The Department’s comments are reprinted in their entirety in appendix III. To determine the impact spare parts shortages had on three selected Army helicopters, we obtained and reviewed (1) Department of Defense Quarterly Readiness Reports to the Congress for April 1999 through September 2000 and (2) additional readiness data from the Army’s Deputy Chief of Staff for Logistics, Arlington, Virginia. Additionally, we had discussions with officials at the Army Materiel Command, Alexandria, Virginia. We did not independently verify the readiness data. We selected the three helicopters for review because they experienced spare parts shortages during fiscal year 2000. To determine whether the selected helicopters met supply availability goals, we obtained and reviewed the Army Materiel Command’s fiscal year 1999-2000 supply availability rates for the Apache, Blackhawk, and Chinook helicopters. We did not independently verify the supply availability data. To determine why the helicopters experienced spare parts shortages, we interviewed officials at the Army Aviation and Missile Command, Huntsville, Alabama, and reviewed selected Army safety messages from August 1999 through February 2000 to identify the parts that caused the safety concerns. 
To determine the impact of parts shortages on maintenance practices and personnel, we reviewed the Army regulation on materiel policy and retail maintenance operations and an Army study on cannibalizations. We also reviewed our previous work on how cannibalizations adversely affect personnel and maintenance and our report that cited the lack of spare parts as hampering retention of military personnel. Additionally, we interviewed an official at the 101st Airborne Division, Fort Campbell, Kentucky, on the impact cannibalizations had on maintenance. To determine the reasons for the shortages of spare parts for the Apache, Blackhawk, and Chinook helicopters, we obtained computerized lists of spare parts that caused the helicopters to be not mission capable from the Army Aviation and Missile Command from October 1999 through July 2000 and from the Defense Supply Center, Richmond, Virginia, for fiscal year 2000. Also, we visited and obtained lists of spare parts shortages that caused delays in repairing helicopters from (1) Fort Campbell's 101st Airborne Division as of September 13, 2000; (2) Fort Campbell's Aviation Logistics Management Division, DynCorp Aerospace Operations, as of April 18, 2000; and (3) the Corpus Christi Army Depot, Corpus Christi, Texas, as of August 23, 2000. From the lists, we selected all 15 parts from Fort Campbell's 101st Airborne Division and randomly selected 75 spare parts from the other locations for the Apache (32), Blackhawk (34), and Chinook (24) for further review (a total of 90 parts). Because of the size of our sample, we did not project the results of the sample to the universe of all helicopters' parts shortages. Once we identified the 90 spare parts shortages, we provided them to the inventory control points, the Army Aviation and Missile Command, and the Defense Supply Center, Richmond, to obtain their reasons for the shortages along with supporting documentation.
To determine whether the aging of the aircraft contributed to spare parts shortages, we reviewed congressional Army testimony and documentation from the Defense Logistics Agency, Fort Belvoir, Virginia, and interviewed Army and Defense Logistics Agency officials. To determine whether management weaknesses contributed to spare parts shortages, we reviewed our prior reports on Army and Department of Defense inventory and financial management problems. To determine what overall actions are planned or under way to address spare parts shortages for Army aircraft, we visited and obtained documentation and views from program officials at the Army’s Office of the Deputy Chief of Staff, Logistics; the Army Materiel Command; the Army Aviation and Missile Command; the Defense Logistics Agency; and the Defense Supply Center, Richmond. We also compared the reasons for the spare parts shortages we found with the overall initiatives under way or planned to determine whether they were being addressed. We did not review the plans or the specific initiatives. Our review was performed from August 2000 to June 2001 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretaries of Defense and the Army; the Director, Defense Logistics Agency; and the Director, Office of Management and Budget. We will make copies available to other interested parties upon request. Please contact me at (202) 512-8412 if you or your staff have any questions regarding this report. Key contributors to this report were Lawson Gist, Jr.; Jose Watkins; Carleen Bennett; and Nancy Ragsdale. In January 2001, we reported that the Department of Defense had serious weaknesses in its management of logistics functions and, in particular, inventory management. Although not specifically identified with the systems we reviewed, these management weaknesses directly or indirectly contribute to the shortage of spare parts the Army is facing. 
For example:

- We reported in April 1997 that the Army needed to improve its logistics pipeline for aviation parts and reduce logistics costs by incorporating private sector best practices. We found that the Army's repair pipeline was slow, unreliable, and inefficient. One contributing factor was a lack of consumable parts needed to complete repairs.
- We reported in October 1997 that the Army needed to improve its management of the weapon system and equipment modification program to eliminate difficulties in obtaining spare parts. We found that program sponsors had been inconsistent in providing initial spare parts and ensuring spare parts were added to the supply system.
- We reported in June 2000 that the Army needed to strengthen and follow procedures to control shipped items, which include spare parts and other inventory items. We found that the Army did not know the extent to which shipped inventory had been lost or stolen because of weaknesses in its inventory control procedures and financial management practices.

In addition, the Department of Defense's long-standing financial management problems may contribute to the Army's spare parts shortages. As we recently reported, weaknesses in inventory accountability information can affect supply responsiveness. Lacking reliable information, the Department of Defense has little assurance that all items purchased are received and properly recorded. The weaknesses increase the risk that responsible inventory item managers may request funds to obtain additional, unnecessary items that may be on hand but not reported.

The spare parts we reviewed included the following (Blackhawk parts listed first): Actuating cylinder; Bearing ball; Bearing plan; Cable assembly; Circuit card assembly; Connecting link; Digital microcircuit; Fuel tank; G axis seal kit; Gear box assembly; Magnetic compass; Metallic tube; Packing with retain; Pipe hanger; Preformed packing; Pressurizing; Protective dust cap; Repair kit; Sas actuator assembly; Shaft assembly; Shaft fitting; Solid rivet; Tubeless tire; Armored wing assembly; Belt aircraft safety; Electro actuator; Roller bearing; Assembly actuator bracket; Left-hand nacelle; Modification kit; Mounting bracket; Power supply; Servocylinder (listed four times); Shear bolt; Shock strut assembly; Aircraft access door; Annular bearing ball; Control swashplate; Hydraulic cylinder; Shouldered shaft; Time totalizator meter; Cold section module; Engine aircraft; Main fuel control; T-700 engine aircraft; Aircraft engine; Rotary wing head; Cylinder assembly; Flutter dampener; Multimeter; Close tolerance bolt; Plastic spir tubing; Sleeve bushing. There were multiple reasons for parts shortages, but for the purposes of our analysis, we used the most predominant reason.

Related GAO products:

Defense Logistics: Information on Apache Helicopter Support and Readiness (GAO-01-630, July 17, 2001).
Defense Inventory: Opportunities Exist to Expand the Use of Defense Logistics Agency Best Practices (GAO/NSIAD-00-30, Jan. 26, 2000).
Army Logistics: Status of Proposed Support Plan for Apache Helicopter (GAO/NSIAD-99-140, July 1, 1999).
Defense Inventory: Status of Inventory and Purchases and Their Relationship to Current Needs (GAO/NSIAD-99-60, Apr. 16, 1999).
Defense Inventory: DOD Could Improve Total Asset Visibility Initiative With Results Act Framework (GAO/NSIAD-99-40, Apr. 12, 1999).
Major Management Challenges and Program Risks: Department of Defense (GAO/OCG-99-4, Jan. 1, 1999).
Defense Depot Maintenance: Use of Public-Private Partnering Arrangements (GAO/NSIAD-98-91, May 7, 1998).
Inventory Management: DOD Can Build on Progress by Using Best Practices for Reparable Parts (GAO/NSIAD-98-97, Feb. 27, 1998).
Defense Inventory: Management of Surplus Usable Aircraft Parts Can Be Improved (GAO/NSIAD-98-7, Oct. 2, 1997).
Inventory Management: The Army Could Reduce Logistics Costs for Aviation Parts by Adopting Best Practices (GAO/NSIAD-97-82, Apr. 15, 1997).

The military's ability to carry out its mission depends on its having adequate supplies of spare parts on hand for maintenance and repairs. Shortages are a key indicator that the billions of dollars being spent on these parts are not being used effectively, efficiently, and economically. Despite additional funding from Congress, the Army still has concerns about spare parts shortages. Spare parts shortages for the Apache, Blackhawk, and Chinook helicopters have harmed operations and lowered morale among maintenance personnel. Cannibalization of parts from one aircraft to another is an inefficient practice that results in double work for the maintenance personnel, masks parts shortages, and lowers morale. Parts were unavailable for various reasons, including higher-than-expected demand for parts, delays in obtaining parts from contractors, and problems with overhaul and maintenance. Another factor contributing to the shortage was the Army's inability to obtain parts for these aging aircraft from the original manufacturers, which sometimes had gone out of business. The Army and the Defense Logistics Agency have efforts planned or underway to improve the availability of aviation spare parts. Once these initiatives are further along, GAO will review them to determine whether they can be enhanced.
Borrowers obtain residential mortgages through either mortgage lenders or brokers. Mortgage lenders can be federally or state-chartered banks or mortgage lending subsidiaries of these banks or of bank holding companies. Independent lenders, which are neither banks nor affiliates of banks, also may fund home loans to borrowers. Mortgage brokers act as intermediaries between lenders and borrowers and, for a fee, help connect borrowers with various lenders that may provide a wider selection of mortgage products. Federal banking regulators—the Office of the Comptroller of the Currency (OCC), the Board of Governors of the Federal Reserve System (Federal Reserve), the Federal Deposit Insurance Corporation (FDIC), the National Credit Union Administration (NCUA), and the Office of Thrift Supervision (OTS)—have, among other things, responsibility for ensuring the safety and soundness of the institutions they oversee. To pursue this goal, regulators establish capital requirements for banks; conduct on-site examinations and off-site monitoring to assess their financial conditions; and monitor their compliance with applicable banking laws, regulations, and agency guidance. As part of their examinations, for example, regulators review mortgage lending practices, including underwriting, risk-management, and portfolio management practices, and try to determine the amount of risk lenders have assumed. From a safety and soundness perspective, risk involves the potential that anticipated or unanticipated events may have an adverse impact on a bank's capital or earnings. In mortgage lending, regulators pay close attention to credit risk—that is, the concern that borrowers may become delinquent or default on their mortgages and that lenders may not be paid in full for the loans they have issued.
Certain federal consumer protection laws, including the Truth in Lending Act and its implementing regulation, Regulation Z, apply to all mortgage lenders and brokers that close loans in their own names. Each lender’s primary federal supervisory agency has responsibility for enforcing Regulation Z and generally uses examinations and consumer complaint investigations to check for compliance with both the act and its regulation. In addition, the Federal Trade Commission (FTC) is responsible for enforcing certain federal consumer protection laws for brokers and lenders that are not depository institutions, including state-chartered independent mortgage lenders and mortgage lending subsidiaries of financial holding companies. However, FTC is not a supervisory agency. FTC uses a variety of information sources in the enforcement process, including FTC investigations, consumer complaints, and state and federal agencies. State banking and financial regulators are responsible for overseeing independent lenders and mortgage brokers and generally do so through licensing that mandates certain experience, education, and operations requirements to engage in mortgage activities. States also may examine independent lenders and mortgage brokers to ensure compliance with licensing requirements, review their lending and brokerage functions, and look for unfair or unethical business practices. In the event such practices or consumer complaints occur, regulators and attorneys general may pursue actions that include license suspension or revocation, monetary fines, and lawsuits. From 2003 through 2005, AMP lending grew rapidly, with originations increasing threefold from less than 10 percent of residential mortgages to about 30 percent. Most of the originations during this period consisted of interest-only ARMs and payment-option ARMs, and most of this lending occurred in higher-priced regional markets concentrated on the East and West Coasts. 
For example, based on data from mortgage securitizations in 2005, about 47 percent of interest-only ARMs and 58 percent of payment-option ARMs were originated in California, which contained 7 of the 20 highest-priced metropolitan real estate markets in the country. On the East Coast, Virginia, Maryland, New Jersey, and Florida, as well as Washington, D.C., exhibited a high concentration of AMP lending in 2005. Other examples of states with high concentrations of AMP lending include Washington, Nevada, and Arizona. These areas also experienced higher rates of home price appreciation during this period than the rest of the United States. In addition to this growth, the characteristics of AMP borrowers have changed. Historically, AMP borrowers were wealthy and financially sophisticated borrowers who used these specialized products as financial management tools. However, today a wider range of borrowers use AMPs as affordability products to purchase homes that might otherwise be unaffordable using conventional fixed-rate mortgages. Although AMPs have increased affordability for some borrowers, they could lead to increased payments or “payment shock” for borrowers and corresponding credit risk for lenders. Unless the mortgages are refinanced or the properties sold, AMPs eventually reach points when interest-only and deferred payment periods end and higher, fully amortizing payments begin. Regulators and consumer advocates have expressed concern that some borrowers might not be able to afford these higher monthly payments. To illustrate this point, we simulated what would happen to a borrower in 2004 who made minimum monthly payments on a $400,000 payment-option ARM. As figure 1 shows, the borrower could see payments rise from $1,287 to $2,931, or 128 percent, at the end of the 5-year payment-option period.
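The payment-shock arithmetic behind a simulation like this follows from the standard amortization formula. The sketch below reproduces the mechanics only; the two interest rates are assumptions chosen for illustration, since the report does not state the rates used in its figure 1 scenario.

```python
def monthly_payment(principal, annual_rate, months):
    """Fully amortizing monthly payment: P * r / (1 - (1 + r) ** -n)."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

# Illustrative terms for a $400,000 payment-option ARM. Both rates are
# assumptions for this sketch, not figures taken from the report.
principal = 400_000
minimum_rate = 0.0386       # assumed rate behind the minimum monthly payment
fully_indexed_rate = 0.075  # assumed fully indexed rate at reset

# During the 5-year option period the borrower pays only the minimum;
# afterward the loan must fully amortize over the remaining 25 years.
# (Negative amortization would grow the balance and make the reset payment
# even larger; the balance is held constant here for simplicity.)
minimum_payment = principal * minimum_rate / 12
reset_payment = monthly_payment(principal, fully_indexed_rate, 25 * 12)
shock_pct = (reset_payment / minimum_payment - 1) * 100

print(f"minimum ${minimum_payment:,.0f}/mo -> reset ${reset_payment:,.0f}/mo "
      f"(+{shock_pct:.0f}%)")
```

Under these assumed rates the minimum payment comes out near the report's $1,287 figure and roughly doubles at reset, which is the "payment shock" the regulators describe.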
In addition, with a wider range of AMP borrowers now than in the past, those with fewer financial resources or limited equity in their homes might find refinancing their mortgages or selling their homes difficult, particularly if their loans have negatively amortized or their homes have not appreciated in value. Moreover, borrowers who cannot afford the higher payments and become delinquent or default on their mortgages pose credit risks to lenders because these borrowers may not repay their loans in full. Lenders also may have increased risks to themselves and their customers by relaxing underwriting standards and through risk-layering. For example, some lenders combined AMPs with less stringent income and asset verification requirements than traditionally permitted for these products or lent to borrowers with lower credit scores and higher debt-to-income ratios. Although regulatory officials have expressed concerns about AMP risks and underwriting practices, they said that banks and lenders generally have taken steps to manage the resulting credit risk. Federal and state banking regulatory officials and lenders with whom we spoke said most banks have diversified their assets to manage the credit risk of AMPs held in their portfolios or have reduced their risk through loan sales or securitization. In addition, federal regulatory officials told us that while underwriting standards may have loosened over time, lenders have generally attempted to mitigate their risk from AMP lending. For example, OCC and Federal Reserve officials told us that most lenders qualify payment-option ARM borrowers at fully indexed rates, not at introductory interest rates, to help ensure that borrowers have the financial resources to manage future mortgage increases or to pay more on their mortgages than the minimum monthly payment.
OCC officials also said that some lenders may mitigate risk by applying stricter underwriting criteria to AMPs than to traditional mortgages. Although we are encouraged by these existing risk mitigation and management strategies, most AMPs issued between 2003 and 2005 have not yet reset to require fully amortizing payments, and it is too soon to tell how many borrowers will eventually experience payment shock or financial distress. As such, in our report we agreed with federal regulatory officials and industry participants that it was too soon to tell the extent to which AMP risks may result in delinquencies and foreclosures for borrowers and losses for banks that hold AMPs in their portfolios. However, we noted that past experience with these products may not be a good indicator of future AMP performance because the characteristics of AMP borrowers have changed. Regulatory officials and consumer advocates expressed concern that some AMP borrowers may not be well informed about the terms and risks of their complex AMP loans. Obstacles to understanding these products include advertising that may not clearly or effectively convey AMP risks and federal mortgage disclosure requirements that do not require lenders to tailor disclosures to the specific risks AMPs pose to borrowers. Marketing materials that we reviewed indicated that advertising by lenders and brokers may not clearly inform consumers about the potential risks of AMPs. For example, one advertisement we reviewed promoted a low initial interest rate and low monthly mortgage payments without clarifying that the low interest rate would not last the full term of the loan. In other cases, promotional materials emphasized the benefits of AMPs without effectively explaining the associated risks.
Some advertising, for example, emphasized loans with low monthly payment options without effectively disclosing the possibility of interest rate changes or mortgage payment increases. One print advertisement we reviewed for a payment-option ARM emphasized the benefit of a low initial interest rate but noted in small print on its second page that the low initial rate applied only to the first month of the loan and could increase or decrease thereafter. Regulatory officials noted that current Regulation Z requirements address traditional fixed-rate and adjustable-rate products, but not more complex products such as AMPs that feature risks such as negative amortization and payment shock. To better understand the quality of AMP disclosures, we reviewed eight interest-only and payment-option ARM disclosures provided to borrowers by federally regulated lenders. These disclosures were provided to borrowers between 2004 and 2006 by six federally regulated lenders that collectively made over 25 percent of the interest-only and payment-option ARMs produced in 2005. We found that these disclosures addressed current Regulation Z requirements, but some did not provide full and clear explanations of AMP risks such as negative amortization or payment shock. For example, as shown in figure 2, one disclosure simply states that monthly payments could increase or decrease on the basis of interest rate changes, which may be sufficient for a traditional ARM product, but does not inform borrowers about the potential magnitude of the payment change, which may be more relevant for certain AMPs. In addition, most of the disclosures we reviewed did not explain that negative amortization, particularly in a rising interest rate environment, could cause AMP loans to reset more quickly than borrowers anticipated and require higher monthly mortgage payments sooner than expected.
In addition, the AMP disclosures generally did not conform to leading practices in the federal government, such as key “plain English” principles for readability or design. For example, the Securities and Exchange Commission's “A Plain English Handbook: How to Create Clear SEC Disclosure Documents” (1998) offered guidance for developing clearly written investment product disclosures and presenting information in visually effective and readable ways. The sample disclosures we reviewed, however, were generally written in language too complex for many adults to fully understand. Most of the disclosures also used small, hard-to-read typeface, which, when combined with an ineffective use of white space and headings, made them even more difficult to read and buried key information. Federal banking regulators have taken a range of actions—including issuing draft interagency guidance, seeking industry comments, reinforcing messages about AMP risks and guidance principles in many forums, and taking other individual regulatory actions—to respond to concerns about the growth and risks of AMP lending. Federal banking regulators issued draft interagency guidance in December 2005 that recommended prudent underwriting, portfolio and risk management, and information disclosure practices related to AMP lending. The draft guidance calls for lenders to consider the potential impact of payment shock on borrowers' capacity to repay their mortgages and to qualify borrowers on their ability to make fully amortizing payments on the basis of fully indexed interest rates. It also recommends that lenders develop written policies and procedures that describe portfolio limits, mortgage sales and securitization practices, and risk-management expectations.
In addition, to improve consumer understanding of AMPs, the draft guidance suggests that lender communications with borrowers, including advertisements and promotional materials, be consistent with actual product terms, and that institutions avoid practices that might obscure the risks of AMPs to borrowers. When finalized, the guidance will apply to all federally regulated financial institutions. During the public comment period for the guidance, lenders and others suggested in their letters that the stricter underwriting recommendations were overly prescriptive and might put federally and state-regulated banks at a competitive disadvantage because the guidance would not apply to independent mortgage lenders or brokers. Lenders said that this could result in fewer mortgage choices for consumers. Consumer advocates questioned whether the guidance would actually help protect consumers. They noted that guidance might be difficult to enforce because it does not carry the same force as law or regulation. Federal banking regulatory officials are using these comments as they finalize the guidance. Even before drafting the guidance, federal regulatory officials had publicly reinforced their concerns about AMPs in speeches, at conferences, and through the media. According to a Federal Reserve official, these actions have raised awareness of AMP issues and reinforced the message that financial institutions and the general public need to manage risks and understand these products. Some regulatory officials have also taken agency-specific steps to address AMP lending, including reviewing high-risk lending, which would include AMPs, and improving consumer education about AMP risks. For example, FDIC officials told us that they have developed a review program to identify high-risk lending areas and evaluate risk management and underwriting approaches. 
NCUA officials said that they have informally contacted their largest credit unions to assess the extent of AMP lending at these institutions. OTS officials said that they have reviewed OTS's 68 most active AMP lenders to assess and respond to potential AMP lending risks, and OCC officials said that they have begun to review their lenders' AMP promotional and marketing materials to assess how well they inform consumers. In response to concerns about disclosures, Federal Reserve officials told us that they initiated a review of Regulation Z that includes reviewing the disclosures required for all mortgage loans, including AMPs, and have begun taking steps to consider disclosure revisions. During the summer of 2006, the Federal Reserve held hearings across the country on home-equity lending, AMP issues, and the adequacy of consumer disclosures for mortgage products. According to Federal Reserve officials, the Federal Reserve is currently reviewing the hearing transcripts and public comment letters to help develop plans and recommendations for revising Regulation Z. In addition, they said that they are currently revising their consumer handbook on ARM loans, known as the CHARM booklet, to include information about AMPs. Finally, FTC officials said that in May 2006 they sponsored a public workshop that explored consumer protection issues arising from AMP growth in the mortgage marketplace and that they worked with federal banking regulators and other federal departments to create a brochure to assist consumers with mortgage information. State banking and financial regulatory officials from the eight states in our sample expressed concerns about AMP lending in their states; however, most relied on their existing regulatory system of licensing and examining mortgage lenders and brokers to stay abreast of and react to AMP issues.
Most of the officials in our sample expressed concern about AMP lending and the negative effects it could have on consumers, including how well consumers understood complex AMP loans and the potential impact of payment shock, financial difficulties, or default and foreclosure. Other officials expressed concern about whether consumers received complete information about AMPs, saying that federal disclosures were complicated, difficult to comprehend, and often were not very useful to consumers. In addition to these general consumer protection concerns, some state officials spoke about state-specific issues. For example, Ohio officials expressed AMP concerns in the context of larger economic concerns, noting that AMP mortgages were part of wider economic challenges facing the state. Ohio already has high rates of mortgage foreclosures and unemployment that have hurt both Ohio’s consumers and its overall economy. In Nevada, officials worried that lenders and brokers have engaged in practices that sometimes take advantage of senior citizens by offering them AMP loans that they either did not need or could not afford. Most of the state regulatory officials said that they have relied upon state law to license mortgage lenders and brokers and ensure they meet minimum experience and operations standards. Most said they also periodically examine these entities for compliance with state licensing, mortgage lending, and consumer protection laws, including applicable fair advertising requirements. As such, most of the regulatory officials relied on systems already in place to investigate AMP issues or complaints and, when needed, used applicable licensing and consumer protection laws to respond to problems such as unfair and deceptive trade practices. Some state regulatory officials with whom we spoke said they have taken other actions to better understand the issues associated with AMP lending and expand consumer protections. 
For example, some states such as New Jersey and Nevada have gathered data on AMPs to better understand AMP lending and risks. Others, such as New York, plan to use guidance developed by regulatory associations to help oversee AMP lending by independent mortgage lenders and brokers. In summary, it is too soon to tell the extent to which payment shock will produce financial distress for some borrowers and induce defaults that would affect banks that hold AMPs in their portfolios. However, the popularity, complexity, and widespread marketing of AMPs highlight the importance of mortgage disclosures to help borrowers make informed mortgage decisions. As a result, while we commend the Federal Reserve's efforts to review and revise Regulation Z, we recommended in our report that the Board of Governors of the Federal Reserve System consider amending federal mortgage disclosure requirements to improve the clarity and comprehensiveness of AMP disclosures. In response to our recommendation, the Federal Reserve said that it will conduct consumer testing to determine appropriate content and formats and use design consultants to develop model disclosure forms intended to better communicate information. Chairmen of the subcommittees, this completes my prepared statement. I would be pleased to respond to any questions you or other Members may have at this time. For additional information about this testimony, please contact Orice M. Williams on (202) 512-5837 or at williamso@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Karen Tremba, Assistant Director; Tania Calhoun; Bethany Claus Widick; Stefanie Jonkman; Marc Molino; Robert Pollard; Barbara Roesmann; and Steve Ruszczyk.

Alternative mortgage products (AMPs) can make homes more affordable by allowing borrowers to defer repayment of principal or part of the interest for the first few years of the mortgage. Recent growth in AMP lending has heightened the importance of borrowers' understanding and lenders' management of AMP risks. GAO's report discusses the (1) recent trends in the AMP market, (2) potential AMP risks for borrowers and lenders, (3) extent to which mortgage disclosures discuss AMP risks, and (4) federal and selected state regulatory response to AMP risks. GAO used regulatory and industry data to analyze changes in AMP monthly payments under various scenarios; reviewed available studies; and interviewed relevant federal and state regulators and mortgage industry groups, and consumer groups. From 2003 through 2005, AMP originations, comprising mostly interest-only and payment-option adjustable-rate mortgages, grew from less than 10 percent of residential mortgage originations to about 30 percent. They were highly concentrated on the East and West Coasts, especially in California. Federally and state-regulated banks and independent mortgage lenders and brokers market AMPs, which have been used for years as a financial management tool by wealthy and financially sophisticated borrowers. In recent years, however, AMPs have been marketed as an "affordability" product to allow borrowers to purchase homes they otherwise might not be able to afford with a conventional fixed-rate mortgage. Because AMP borrowers can defer repayment of principal, and sometimes part of the interest, for several years, some may eventually face payment increases large enough to be described as "payment shock."
Mortgage statistics show that lenders offered AMPs to less creditworthy and less wealthy borrowers than in the past. Some of these recent borrowers may have more difficulty refinancing or selling their homes to avoid higher monthly payments, particularly if interest rates have risen or if the equity in their homes has fallen because they were making only minimum monthly payments or home values did not increase. As a result, delinquencies and defaults could rise. Federal banking regulators stated that most banks appeared to be managing their credit risk well by diversifying their portfolios or through loan sales or securitizations. However, because the monthly payments for most AMPs originated between 2003 and 2005 have not reset to cover both interest and principal, it is too soon to tell to what extent payment shocks would result in increased delinquencies or foreclosures for borrowers and in losses for banks. Regulators and others are concerned that borrowers may not be well informed about the risks of AMPs, due to their complexity and because promotional materials by some lenders and brokers do not provide balanced information on AMPs' benefits and risks. Although lenders and certain brokers are required to provide borrowers with written disclosures at loan application and closing, federal standards on these disclosures do not currently require specific information on AMPs that could better help borrowers understand key terms and risks. In December 2005, federal banking regulators issued draft interagency guidance on AMP lending that discussed prudent underwriting, portfolio and risk management, and consumer disclosure practices. Some lenders commented that the recommendations were too prescriptive and could limit consumer choices of mortgages. Consumer advocates expressed concerns about the enforceability of these recommendations because they are presented in guidance and not in regulation.
State regulators GAO contacted generally relied on the existing regulatory structure of licensing and examining independent mortgage lenders and brokers to oversee AMP lending. |
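The "payment shock" dynamic described in this summary can be illustrated with a short sketch. The loan amount, interest rate, and reset terms below are illustrative assumptions, not figures from the report; the sketch simply compares an interest-only payment with the fully amortizing payment that begins once the interest-only period ends.

```python
def amortizing_payment(principal, annual_rate, months):
    # Standard fixed-rate amortization formula: P * r / (1 - (1 + r)^-n)
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

# Illustrative assumptions: $300,000 loan at 6%, 30-year term,
# with a 5-year interest-only period.
principal, rate = 300_000, 0.06
interest_only = principal * rate / 12                 # payment in years 1-5
reset = amortizing_payment(principal, rate, 25 * 12)  # amortize over remaining 25 years
shock = reset / interest_only - 1                     # proportional payment increase
```

Under these assumptions the monthly payment jumps from $1,500 to roughly $1,930 at reset, an increase of nearly 30 percent, even before any rate adjustment; payment-option loans that allowed negative amortization could see larger jumps.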
The UN headquarters buildings are in need of renovation. The original UN headquarters complex, located in New York City and constructed between 1949 and 1952, no longer conforms to current safety, fire and building codes and does not meet UN technology or security requirements. Over the last 50 years there have been no major renovations or upgrades to the buildings or their systems. For example, the UN headquarters complex lacks fire sprinklers, has a deteriorating window structure, and is vulnerable to catastrophic electrical failures. In September 2005, the headquarters complex was shut down and the staff sent home because a main breaker for electrical power to the top floors of the Secretariat building failed, causing it to fuse to the electrical panel. This failure could have resulted in a major fire. As host country, the United States financed construction of the original complex by providing the UN with a no-interest loan. The rest of the complex was built between 1960 and 1982 and funded through the UN’s regular budget or private donations. In December 2002, the General Assembly endorsed the CMP to renovate the UN headquarters complex and approved funds to further develop the conceptual designs and cost estimate. In May 2003, we reported that the resulting renovation planning was reasonable, but that additional management controls and oversight were needed. Since our last report, the UN has completed design development of the renovation. In April 2006, the UN appropriated $23.5 million to finance the renovation’s preconstruction phase and committed $77 million to finance the construction of temporary conference space and supplementary office space rental (swing space). However, the General Assembly has not yet decided whether to approve implementation of the CMP. 
In a February 2003 resolution, the General Assembly stressed the importance of oversight in implementing the CMP and requested that all relevant oversight bodies, such as OIOS, initiate immediate oversight activities. In our 2003 report on the CMP, we noted that OIOS assigned one staff member to begin researching the CMP on a part-time basis. OIOS officials also stated that they had requested funding for OIOS to hire contractors to help evaluate the CMP, project management plan, and security upgrades. In July 2005, OIOS reported that it had two auditors reviewing the CMP. OIOS’s authority spans all UN activities under the Secretary-General. OIOS derives its funding from (1) regular budget resources, which are funds from assessed contributions from member states that cover normal, recurrent activities such as the core functions of the UN Secretariat and (2) extrabudgetary resources, which come from the budgets for UN peacekeeping missions financed through assessments from member states, voluntary contributions from member states for a variety of specific projects and activities, and budgets for the voluntarily financed UN funds and programs. Our work on the Oil for Food program demonstrates the weakness inherent in OIOS’s reliance on extrabudgetary resources. OIOS audited some aspects of the Oil for Food program and identified hundreds of weaknesses and irregularities, but it lacked the resources and independence needed to provide full and effective oversight of this large, costly, and complex UN effort. As the program was implemented, the Oil for Food program was further weakened by inadequate attention to internal controls, including establishing clear responsibility and authority and identifying and addressing program risks. In addition to oversight, an effective procurement process is one of the keys to success for any large scale construction project. For more than a decade, experts have called on the UN to correct serious weaknesses in its procurement process. 
In addition, the UN procurement process has been under increasing stress in recent years as procurement spending has more than tripled to keep pace with a rapidly growing peacekeeping program. The UN Department of Management, through its 70-person UN Procurement Service, is ultimately responsible for developing UN procurement policies. Because the UN is a multilateral institution, our oversight authority does not directly extend to the UN, but instead extends through the United States’ membership in the organization. In recognition of this factor, we conduct UN-related work only in response to specific requests from committees with jurisdiction over UN matters. Congressional interest in this area has been high in recent years, and many of our ongoing or recently completed requests are both bicameral and bipartisan in nature. The UN is vulnerable to fraud, waste, abuse, and mismanagement due to a range of weaknesses in existing oversight practices. The General Assembly mandate creating OIOS calls for it to be operationally independent. In addition, international auditing standards state that an internal oversight activity should have sufficient resources to effectively achieve its mandate. In practice, however, OIOS’s independence is impaired by constraints that UN funding arrangements impose. In our April 2006 report concerning OIOS, we recommended that the Secretary of State and the Permanent Representative of the United States to the UN work with member states to support budgetary independence for OIOS. Both the Department of State and OIOS generally agreed with our findings and recommendation. In passing the resolution that established OIOS in August 1994, the General Assembly stated that the office should exercise operational independence and that the Secretary-General, when preparing the budget proposal for OIOS, should take into account the independence of the office. 
The UN mandate for OIOS was followed by a Secretary-General’s bulletin in September 1994 stating that OIOS should discharge its responsibilities without any hindrance or need for prior clearance. In addition, the Institute of Internal Auditors’ (IIA) standards for the professional practice of auditing, which OIOS and its counterparts in other UN organizations formally adopted in 2002, state that audit resources should be appropriate, sufficient, and effectively deployed. These standards also state that an internal audit activity should be free from interference and that internal auditors should avoid conflicts of interest. International auditing standards also state that financial regulations and the rules of an international institution should not restrict an audit organization from fulfilling its mandate. UN funding arrangements severely limit OIOS’s ability to respond to changing circumstances and reallocate its resources among its multiple funding sources, OIOS locations worldwide, or its operating divisions—Internal Audit Divisions I and II; the Investigations Division; and the Monitoring, Evaluation, and Consulting Division—to address changing priorities. In addition, the movement of staff positions or funds between regular and extrabudgetary resources is not allowed. For example, one section in the Internal Audit Division may have exhausted its regular budget travel funds, while another section in the same division may have travel funds available that are financed by extrabudgetary peacekeeping resources. However, OIOS would breach UN financial regulations and rules if it moved resources between the two budgets. Since 1996, an increasing share of OIOS’s total budget has consisted of extrabudgetary resources. OIOS’s combined regular budget and extrabudgetary resources increased in nominal terms from $21.6 million in fiscal biennium 1996-1997 to $85.3 million in fiscal biennium 2006-2007. 
Over that period, OIOS’s extrabudgetary funding increased in nominal terms, from about $6.5 million in fiscal biennium 1996-1997 to about $53.7 million in fiscal biennium 2006-2007. The majority of OIOS’s staff (about 69 percent) is funded with extrabudgetary resources, an increase driven largely by audits and investigations of peacekeeping operations, including issues related to sexual exploitation and abuse. OIOS is dependent on UN funds and programs and other UN entities for resources, access, and reimbursement for the services it provides. These relationships present a conflict of interest because OIOS has oversight authority over these entities, yet it must obtain their permission to examine their operations and receive payment for its services. OIOS negotiates the terms of work and payment for services with the manager of the program it intends to examine, and heads of these entities have the right to deny funding for oversight work proposed by OIOS. By denying OIOS funding, UN entities could avoid OIOS audits or investigations, and high-risk areas could potentially be excluded from timely examination. For example, the practice of allowing the heads of programs the right to deny funding to internal audit activities prevented OIOS from examining high-risk areas in the UN Oil for Food program, where billions of dollars were subsequently found to have been misused. Moreover, in some cases, fund and program managers have disputed fees charged by OIOS for investigative services rendered. For example, 40 percent of the $2 million billed by OIOS after it completed its work is currently in dispute, and since 2001, less than half of the entities have paid OIOS in full for investigative services it has provided. 
According to OIOS officials, the office has no authority to enforce payment for services rendered, and there is no appeal process, no supporting administrative structure, and no adverse impact on an agency that does not pay or pays only a portion of the bill. In our April 2006 report concerning OIOS, we recommended that the Secretary of State and the Permanent Representative of the United States to the UN work with member states to support budgetary independence for OIOS. In commenting on the official draft of that report, OIOS and the Department of State (State) agreed with our overall conclusions and recommendations. OIOS stated that observations made in our report were consistent with OIOS’s internal assessments and external peer reviews. State fully agreed with GAO’s findings that UN member states need to ensure that OIOS has budgetary independence. However, State does not believe that multiple funding sources have impeded OIOS’s budgetary flexibility. We found that current UN financial regulations and rules are very restrictive, severely limiting OIOS’s ability to respond to changing circumstances and to reallocate funds to emerging or high priority areas when they arise. OIOS formally adopted the Institute for Internal Auditors’ (IIA) international standards for the professional practice of internal auditing in 2002. Since then, OIOS has begun to develop and implement the key components of effective oversight. However, the office has yet to fully implement them. Moreover, shortcomings in meeting key components of international auditing standards could undermine the office’s effectiveness in carrying out its functions as the UN’s main internal oversight body. Effective oversight demands reasonable adherence to professional auditing standards. 
In our April 2006 report on OIOS, we also recommended that the Secretary of State and the Permanent Representative of the United States to the UN work with member states to support OIOS’s efforts to more closely adhere to international auditing standards. OIOS has adopted a risk management framework to link the office’s annual work plans to risk-based priorities, but it has not fully implemented this framework. OIOS began implementing a risk management framework in 2001 for prioritizing the allocation of resources to those areas that have the greatest exposure to fraud, waste, and abuse. OIOS’s risk management framework includes plans for organization-wide risk assessments to categorize and prioritize risks facing the organization; it also includes client-level risk assessments to identify and prioritize risk areas facing each entity for which OIOS has oversight authority. Although OIOS’s framework includes plans to perform client-level risk assessments, as of April 2006, out of 25 entities that comprise major elements of its “oversight universe,” only three risk assessments had been completed. As a result, OIOS officials cannot currently provide reasonable assurance that the entities they choose to examine are those that pose the highest risk, nor that their audit coverage of a client focuses on the areas of risk facing that client. OIOS officials told us they plan to assign risk areas more consistently to audits proposed in their annual work plan during the planning phase so that, by 2008, at least 50 percent of their work is based on a systematic risk assessment. Although OIOS’s annual reports contain references to risks facing OIOS and the UN organization, the reports do not provide an overall assessment of the status of these risks or the consequence to the organization if the risks are not addressed. 
For instance, in February 2005, the Independent Inquiry Committee reported that many of the Oil for Food program’s deficiencies, identified through OIOS audits, were not described in the OIOS annual reports submitted to the General Assembly. A senior OIOS official told us that the office does not have an annual report to assess risks and controls and that such an assessment does not belong in OIOS’s annual report in its current form, which focuses largely on the activities of OIOS. The official agreed that OIOS should communicate to senior management on areas where the office has not been able to examine significant risk and control issues, but that the General Assembly would have to determine the appropriate vehicle for such a new reporting requirement. While OIOS officials have stated that the office does not have adequate resources, they do not have a mechanism in place to determine appropriate staffing levels to help justify budget requests, except for peacekeeping oversight services. For peacekeeping audit services, OIOS does have a metric—endorsed by the General Assembly—that provides one professional auditor for every $100 million in the annual peacekeeping budget. Although OIOS has succeeded in justifying increases for peacekeeping oversight services consistent with the large increase in the peacekeeping budget since 1994, it has been difficult to support staff increases in oversight areas that lack a comparable metric, according to OIOS officials. For the CMP, OIOS reported that it had extrabudgetary funds from the CMP for one auditor on a short-term basis, but that the level of funding was not sufficient to provide the oversight coverage intended by the General Assembly. To provide additional oversight coverage, OIOS assigned an additional auditor exclusively to the CMP, using funds from its regular budget. 
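The peacekeeping staffing metric endorsed by the General Assembly lends itself to a one-line formula. This is a sketch: the one-auditor-per-$100-million ratio comes from the testimony, while the example budget figure and the rounding-up convention are illustrative assumptions.

```python
import math

def peacekeeping_auditors(annual_budget_millions):
    # One professional auditor per $100 million of annual peacekeeping
    # budget; rounding up is an assumption, as the metric does not specify.
    return math.ceil(annual_budget_millions / 100)

# Illustrative example: a $5 billion annual peacekeeping budget
staffing = peacekeeping_auditors(5_000)  # 50 auditors
```

As the testimony notes, areas that lack a comparable metric have found it harder to justify staffing increases.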
OIOS staff have opportunities for training and other professional development, but OIOS does not formally require or systematically track staff training to provide reasonable assurance that all staff are maintaining and acquiring professional skills. UN personnel records show that OIOS staff took a total of more than 400 training courses offered by the Office of Human Resources Management in 2005. Further, an OIOS official said that, since 2004, OIOS has subscribed to IIA’s online training service that offers more than 100 courses applicable to auditors. Although OIOS provides these professional development opportunities, it neither formally requires staff training nor systematically tracks it. OIOS policy manuals list no minimum training requirement. OIOS officials said that, although they gather some information on their use of training funds for their annual training report to the UN Office of Human Resources Management, they do not maintain an office-wide database to systematically track all training taken by their staff. In our April 2006 report on OIOS, we also recommended that the Secretary of State and the Permanent Representative of the United States to the UN work with member states to support OIOS’s efforts to more closely adhere to international auditing standards. In commenting on the official draft of that report, OIOS and State agreed with our overall conclusions and recommendations. OIOS stated that observations made in our report were consistent with OIOS’s internal assessments and external peer reviews. While the UN has yet to finalize its CMP procurement strategy, to the extent that it relies on current UN processes, implementation of the planned renovation is vulnerable to the procurement weaknesses that we have identified. 
We have identified problems affecting the UN’s bid protest process, headquarters contract review committee, vendor rosters, procurement workforce, ethics guidance for procurement personnel, and procurement manual. In our April 2006 report on UN procurement, we recommended that the Secretary of State and the Permanent Representative of the United States to the UN work with member states to encourage the UN to establish an independent bid protest mechanism, address problems facing its principal contract-review committee, implement a process to help ensure that it conducts business with qualified vendors, and to take other steps to improve UN procurement. The UN has not established an independent process to consider vendor protests, despite the 1994 recommendation of a high-level panel of international procurement experts that it do so as soon as possible. Such a process would provide reasonable assurance that vendors are treated fairly when bidding and would also help alert senior UN management to situations involving questions about UN compliance. An independent bid protest process is a widely endorsed control mechanism that permits vendors to file complaints with an office or official who is independent of the procurement process. The General Assembly endorsed the principle of independent bid protests in 1994 when it recommended for adoption by member states a model procurement law drafted by the UN Commission on International Trade Law. Several nations, including the United States, provide vendors with an independent process to handle complaints. The UN’s lack of an independent bid protest process limits the transparency of its procurement process by not providing a means for a vendor to protest the outcome of a contract decision to an independent official or office. At present, the UN Procurement Service directs its vendors to file protests to the Procurement Service chief and then to his or her immediate supervisor. 
If handled through an independent process, vendor complaints could alert senior UN officials and UN auditors to the failure of UN procurement staff to comply with stated procedures. As a result of recent findings of impropriety involving the Procurement Service, the United Nations hired a consultant to evaluate the internal controls of its procurement operations. One of the consultant’s conclusions was that the UN needs to establish an independent bid protest process for suspected wrongdoing that would include an independent third-party evaluation as well as arbitration, due process, and formal resolution for all reports. While UN procurement has increased sharply in recent years, the size of the UN’s Headquarters Committee on Contracts and its support staff have remained relatively stable. The committee’s chairman and members told us that the committee does not have the resources to keep up with its expanding workload. The number of contracts reviewed by the committee has increased by almost 60 percent since 2003. The committee members stated that the committee’s increasing workload was the result in part of the complexity of many new contracts and increased scrutiny of proposals in response to recent UN procurement scandals. The committee is charged with evaluating proposed contracts worth more than $200,000 and advising the Department of Management as to whether the contracts are in accordance with UN Financial Regulations and Rules and other UN policies. However, concerns regarding the committee’s structure and workload have led UN auditors to conclude that the committee cannot properly review contract proposals. It may thus recommend contracts for approval that are inappropriate and have not met UN regulations. 
Earlier this year, OIOS reiterated its 2001 recommendation that the UN reduce the committee’s caseload and restructure the committee “to allow competent review of the cases.” The UN does not consistently implement its process for helping to ensure that it is conducting business with qualified vendors. As a result, the UN may be vulnerable to favoring certain vendors or dealing with unqualified vendors. The UN has long had difficulties in maintaining effective rosters of qualified vendors. In 1994, a high-level group of international procurement experts concluded that the UN’s vendor roster was outdated, inaccurate, and inconsistent across all locations. In 2003, an OIOS report found that the Procurement Service’s roster contained questionable vendors. OIOS later concluded that as of 2005 the roster was not fully reliable for identifying qualified vendors that could bid on contracts. While the Procurement Service became a partner in an interagency procurement vendor roster in 2004 to address these concerns, OIOS has found that many vendors that have applied through the interagency procurement vendor roster have not submitted additional documents requested by the Procurement Service to become accredited vendors. The UN has not demonstrated a commitment to improving its professional procurement staff through training, establishment of a career development path, and other key human capital practices critical to attracting, developing, and retaining a qualified professional workforce. Due to significant control weaknesses in the UN’s procurement process, the UN has relied disproportionately on the actions of its staff to safeguard its resources. Recent studies indicate that Procurement Service staff lack knowledge of UN procurement policies. Moreover, most procurement staff lack professional certifications attesting to their procurement education, training, and experience. 
The UN has not established requirements for procurement staff to obtain continuous training, resulting in inconsistent levels of training across the procurement workforce. Furthermore, UN officials acknowledged that the UN has not committed sufficient resources to a comprehensive training and certification program for its procurement staff. In addition, the UN has not established a career path for professional advancement for procurement staff. Doing so could encourage staff to undertake progressive training and work experiences. The UN has been considering the development of specific ethics guidance for procurement officers for almost a decade, in response to General Assembly directives dating back to 1998. While the Procurement Service has drafted such guidance, the UN has made only limited progress toward adopting it. Such guidance would include a declaration of ethics responsibilities for procurement staff and a code of conduct for vendors. The UN has yet to incorporate guidance for construction procurement into its procurement manual. In June 2005, a UN consultant recommended that the UN develop separate guidelines in the manual for the planning and execution of construction projects. These guidelines could be useful in planning and executing CMP procurement. Moreover, the UN has not updated its procurement manual since January 2004 to reflect current UN procurement policy. As a result, procurement staff may not be aware of changes to procurement procedures that the UN has adopted over the past 2 years. A Procurement Service official who helped revise the manual in 2004 stated that the Procurement Service has been unable to allocate resources needed to update the manual since that time. 
In our April 2006 report on UN procurement, we recommended that the Secretary of State and the Permanent Representative of the United States to the UN work with member states to encourage the UN to establish an independent bid protest mechanism, address problems facing its principal contract-review committee, implement a process to help ensure that it conducts business with qualified vendors, and take other steps to improve UN procurement. In commenting on the official draft of this report, the Department of State stated that it welcomed our report and endorsed its recommendations. The UN did not provide us with written comments. To conduct our study of UN oversight, we reviewed relevant UN and OIOS reports, manuals, and program documents, as well as the international auditing standards of the IIA and the International Organization of Supreme Audit Institutions. The IIA standards apply to internal audit activities—not to investigations, monitoring, evaluation, and inspection activities. However, we applied these standards OIOS-wide, as appropriate, in the absence of international standards for non-audit oversight activities. We met with senior State officials in Washington, D.C., and senior officials with the U.S. Missions to the UN in New York, Vienna, and Geneva. At these locations, we also met with the UN Office of Internal Oversight Services management officials and staff; representatives of Secretariat departments and offices as well as the UN funds, programs, and specialized agencies; and the UN external auditors—the Board of Auditors (in New York) and the Joint Inspection Unit (in Geneva). We reviewed relevant OIOS program documents, manuals, and reports. To assess the reliability of OIOS’s funding and staffing data, we reviewed the office’s budget documents and discussed the data with relevant officials. We determined the data were sufficiently reliable for the purposes of this testimony. 
To assess internal controls in the UN procurement process, we used an internal control framework that is widely accepted in the international audit community and has been adopted by leading accountability organizations. We assessed the UN’s control environment for procurement, as well as its control activities, risk assessment process, procurement information processes, and monitoring systems. In doing so, we reviewed documents and information prepared by OIOS, the UN Board of Auditors, the UN Joint Inspection Unit, two consulting firms, the Department of Management’s Procurement Service, the Department of Peacekeeping Operations, and State. We interviewed UN and State officials and conducted structured interviews with the principal procurement officers at each of 19 UN field missions. If implemented, the CMP will be a large and unique endeavor for the UN. Effective internal oversight and management of the procurement process will be necessary for the successful completion of the project. However, weaknesses in internal oversight and procurement could impact implementation of the CMP. Recent UN scandals, particularly in the Oil for Food program, demonstrate the need for significant reforms in these areas. Although OIOS has a mandate establishing it as an independent oversight entity and to conduct oversight of the CMP, the office does not have the budgetary independence it requires to carry out its responsibilities effectively. In addition, OIOS’s shortcomings in meeting key components of international auditing standards could undermine the office’s effectiveness in carrying out its functions as the UN’s main internal oversight body. Effective oversight demands reasonable budgetary independence, sufficient resources, and adherence to professional auditing standards. OIOS is now at a critical point, particularly given the initiatives to strengthen UN oversight launched as a result of the World Summit in the fall of 2005. 
In moving forward, the degree to which the UN and OIOS embrace international auditing standards and practices will demonstrate their commitment to addressing the monumental management and oversight tasks that lie ahead. Failure to address these long-standing concerns would diminish the efficacy and impact of other management reforms to strengthen oversight at the UN. While the UN has yet to finalize its CMP procurement strategy, to the extent that it relies on the current process, we have identified numerous weaknesses with the existing procurement process that could impact implementation of the CMP. Long-standing weaknesses in the UN’s procurement office have left UN procurement funds highly vulnerable to fraud, waste, and abuse. Many of these weaknesses have been identified and documented by outside experts and the UN’s own auditors for more than a decade. Sustained leadership at the UN will be needed to correct these weaknesses and establish a procurement system capable of fully supporting the UN’s expanding needs. This concludes my testimony. I would be pleased to take your questions. Should you have any questions about this testimony, please contact me at (202) 512-9601 or melitot@gao.gov. Other major contributors to this testimony were Phyllis Anderson and Maria Edelstein, Assistant Directors; Joy Labez, Pierre Toureille, Valérie L. Nowak, Jeffrey Baldwin-Bott, Michaela Brown, Joseph Carney, Debbie J. Chung, Kristy Kennedy, Clarette Kim, J.J. Marzullo, and Barbara Shields. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The UN headquarters buildings are in need of renovation. 
The Capital Master Plan is an opportunity for the organization to renovate its headquarters buildings and ensure conformity with current safety, fire, and security requirements. Estimated by the UN to cost about $1.6 billion, the renovation will require a substantial management effort by the UN--including the use of effective internal oversight and procurement practices. Based on recently issued work, GAO (1) examined the extent to which UN funding arrangements for its Office of Internal Oversight Services (OIOS) ensure independent oversight and the consistency of OIOS's practices with key international auditing standards and (2) assessed the UN's procurement processes according to key standards for internal controls. The effective implementation of the planned UN renovation is vulnerable due to a range of weaknesses in existing internal oversight and procurement practices. In particular, UN funding arrangements adversely affect OIOS's budgetary independence and compromise OIOS's ability to investigate high-risk areas. In addition, while the UN has yet to finalize a specific procurement process for the UN Capital Master Plan, to the extent that it relies on UN procurement processes, it remains vulnerable to the numerous procurement weaknesses that GAO has previously identified. First, UN funding arrangements constrain OIOS's ability to operate independently as mandated by the General Assembly and required by international auditing standards. While OIOS is funded by a regular budget and 12 other revenue streams, UN financial regulations and rules severely limit OIOS's ability to respond to changing circumstances and reallocate resources among revenue streams, locations, and operating divisions. Thus, OIOS cannot always direct resources to high-risk areas that may emerge after its budget is approved. Second, OIOS depends on the resources of the funds, programs, and other entities it audits. 
The managers of these programs can deny OIOS permission to perform work or not pay OIOS for services. UN entities could thus avoid OIOS audits or investigations, and high-risk areas can be and have been excluded from timely examination. OIOS has begun to implement key measures for effective oversight, but some of its practices fall short of the applicable international auditing standards it has adopted. OIOS develops an annual work plan, but the risk management framework on which the work plans are based is not fully implemented. OIOS officials report the office does not have adequate resources, but they also lack a mechanism to determine appropriate staffing levels. Furthermore, OIOS has no mandatory training curriculum for staff. While the UN has yet to finalize its Capital Master Plan procurement strategy, to the extent that it relies on the current process, implementation of the Capital Master Plan remains vulnerable to numerous procurement weaknesses. For example, the UN has not established an independent process to consider vendor protests that could alert senior UN officials of failures by procurement staff to comply with stated procedures. Also, the chairman of the UN procurement contract review committee has stated that his committee does not have the resources to keep up with its expanding workload. In addition, the UN does not consistently implement its process for helping to ensure that it is conducting business with qualified vendors. GAO also found that the UN has not demonstrated a commitment to improving its professional procurement staff despite long-standing shortcomings and has yet to complete action on specific ethics guidance for procurement officers. |
Located within the Department of Commerce, USPTO administers U.S. patent and trademark laws while ensuring the creation of valid, prompt, and proper intellectual property rights. According to the Strategic Plan, USPTO’s mission is to ensure that the intellectual property system contributes to a strong global economy, encourages investment in innovation, fosters entrepreneurial spirit, and enhances the quality of life. USPTO also advises the administration on all domestic and global aspects of intellectual property. USPTO management consults with a Patent Public Advisory Committee and a Trademark Public Advisory Committee. These committees are composed of voting members from the private sector and non-voting members from the three unions represented at USPTO—the Patent Office Professional Association and two chapters of the National Treasury Employees Union. The committees not only review USPTO policies, goals, performance, budget, and user fees related to patents and trademarks, but also issue annual reports to the President, the Secretary of Commerce, and the House and Senate Committees on the Judiciary. Fees and volume of patent activity are different for small and large entities. Small entities receive a 50 percent discount on many patent fees. The majority of patent applicants are large entities filing applications for utility patents. USPTO has estimated that in recent years patent applications from large entities have comprised over 60 percent of all patent applications received; small entities have accounted for the remainder. In fiscal year 2001, utility patents represented over 90 percent of all patents granted that year. The number of patent applications filed nearly doubled during fiscal years 1990 through 2001, increasing from about 164,000 to about 326,000, and USPTO’s Corporate, Business, and Strategic Plans projected that the number of applications would increase to between 351,000 and 368,000 in fiscal year 2002.
Moreover, each plan projects that the number of applications will increase in the future—10 percent annually under the Corporate and Business Plans and 5 percent annually for fiscal years 2003 and 2004 and 7 percent annually for fiscal years 2005 through 2007 under the Strategic Plan. The Corporate Plan projected that the number of applications would increase to about 539,000 in fiscal year 2006; the Business and Strategic Plans project that the number of applications filed will increase in fiscal year 2007 to about 593,000 and 454,000, respectively. The lower projection under the Strategic Plan reflects the reduced number of applications expected for fiscal years 2002 and 2003 due, in part, to a slowdown in the economy. For fiscal year 2002, the Business Plan assumed an application growth rate of about 12 percent and the Strategic Plan assumed a growth rate of 3 percent; for fiscal year 2003, the growth rates projected by the Business and Strategic Plans were 10 percent and 5 percent, respectively. The application growth rate is a key factor in projecting business indicators, such as pendency, staffing needs, and funding requirements. For example, if the number of applications decreases, the number of examiners needed to process those applications decreases. (See app. I, p. 20.) The number of patents granted increased by over 90 percent during fiscal years 1990 through 2001, increasing from about 90,000 to about 171,000, and USPTO’s three plans projected that the number would increase to a range of about 167,000 to 171,000 in fiscal year 2002. Furthermore, the three plans project that the number of patents granted will increase in the future. The Corporate Plan projected that the number of patents granted would increase to about 192,000 by fiscal year 2006, and the Business and Strategic Plans project that the number of patents granted will increase in fiscal year 2007 to about 314,000 and 374,000, respectively. (See app. I, p. 21.)
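As a rough check, the application projections cited above are consistent with compounding the plans’ stated annual growth rates from the reported bases. The following sketch is illustrative only; it assumes the Corporate Plan used the upper end (about 368,000) of the plans’ fiscal year 2002 range as its base, which the report does not state explicitly:

```python
# Illustrative check: compound the stated growth rates and compare with
# the projected application counts cited in the report.

fy2001_filings = 326_000  # applications filed in FY 2001 (approximate)

# Strategic Plan: 3% growth in FY 2002, 5% in FY 2003-04, 7% in FY 2005-07.
strategic_fy2007 = fy2001_filings * 1.03 * 1.05**2 * 1.07**3

# Corporate Plan: 10% annually; assumes a FY 2002 base of about 368,000
# (an assumption -- the report gives only the plans' FY 2002 range).
corporate_fy2006 = 368_000 * 1.10**4

print(round(strategic_fy2007, -3))  # ~454,000, the Strategic Plan figure
print(round(corporate_fy2006, -3))  # ~539,000, the Corporate Plan figure
```

Both results land within rounding distance of the report’s figures, suggesting the projections are simple compound-growth extrapolations.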
USPTO’s inventory of unprocessed patent applications increased by nearly 250 percent from fiscal year 1990 to 2001, increasing from about 96,000 to about 332,000, and USPTO’s three plans projected that the inventory would increase to between 393,000 and 512,000 in fiscal year 2002. The Corporate and Business Plans also project increases in the future, while the Strategic Plan projects a decrease. The Corporate Plan projected that the application inventory would increase to almost 1.3 million by the end of fiscal year 2006, and the Business Plan projects that the inventory would increase to about 584,000 through fiscal year 2007. The Strategic Plan, which would speed up some of the proposed changes in the Business Plan and make other fundamental changes, projects that the inventory will decrease to about 144,000 through fiscal year 2007. The decrease projected in the Strategic Plan reflects several changes in assumptions, including fewer new patent applications. (See app. I, p. 22.) Patent pendency increased from 18.3 months to 24.7 months between fiscal years 1990 and 2001. Projections of patent pendency beyond fiscal year 2001 vary widely under USPTO’s three plans. USPTO’s three plans projected that pendency would increase to between 26.1 months and 26.7 months in fiscal year 2002. The Corporate Plan projected that pendency would be 38.6 months in fiscal year 2006, and the Business and Strategic Plans project it will be 25.5 months and 20.3 months, respectively, in fiscal year 2007. According to USPTO officials, pendency time in the Strategic Plan reflects a proposed fundamental redesign of the patent search and examination system. (See app. I, p. 23.) The number of patent examiners on board at the end of the fiscal year increased from 1,699 to 3,061, or about 80 percent, from fiscal year 1990 to 2001. During this period, USPTO annually hired an average of 380 new examiners and lost an average of 236 examiners through attrition. 
Further, USPTO’s Business and Strategic Plans projected that the number of examiners on board at the end of fiscal year 2002 would be 3,435 and 3,595, respectively. Moreover, both plans project increases in the number of examiners through fiscal year 2007—to 5,735 in the Business Plan and to 4,322 in the Strategic Plan. (See app. I, pp. 24-26.) Between fiscal years 1999 and 2001, fee collections increased from $887 million to $1.085 billion and funding requirements (USPTO’s appropriations) increased from $781 million to $1.039 billion. For fiscal year 2002, the Business and Strategic Plans projected fee collections of $1.373 billion (includes $27 million for employee pension and annuitant health benefits proposed by the President) and $1.198 billion, respectively, and both plans projected that funding requirements would be $1.128 billion. Further, fee collections and funding requirements are projected to increase in the future under both plans, but at different rates. Under the Business Plan, fee collections are projected to increase from $1.527 billion in fiscal year 2003 to $2.078 billion in fiscal year 2007, and funding requirements are projected to increase from $1.365 billion to $2.078 billion during the same time period. Under the Strategic Plan, fee collections are projected to increase from $1.527 billion in fiscal year 2003 to $1.823 billion in fiscal year 2007, and funding requirements are projected to increase from $1.365 billion to $1.823 billion during that period. (See app. I, pp. 27-28.) There are a number of differences between USPTO’s Business and Strategic Plans, as shown in the following examples. (See app. I, p. 29.) The patent pendency definition is different under each plan. Under the Business Plan, pendency is measured from the date an application is filed. However, under the Strategic Plan, pendency would be measured from the date an applicant pays the examination fee. 
According to USPTO officials, this definition is different than the definition under the Business Plan because of the proposed fundamental redesign of the patent search and examination system. Also, according to the Strategic Plan, this definition is the same measure—the examination duration period—used by the European Patent Office and the Japan Patent Office. This change in definition is partly responsible for the reduction in pendency under the Strategic Plan. Historically, applicants have paid a single fee that covered filing and examination; under the Strategic Plan there would be separate filing and examination fees. The applicant has two options for paying fees under the Strategic Plan. Under the first option, applicants may elect to pay the patent application fee and examination fee at the same time. Under the second option, applicants may elect to pay the application fee and defer examination and paying the examination fee for up to 18 months. According to USPTO, applicants that take advantage of the deferred examination do so for various reasons, such as to decide the merits of pursuing the patent or to avoid the early expenditure of funds. USPTO estimates that about 9 percent of all applicants will defer examination. The Strategic Plan redefines patent pendency as the examination duration period. As a result, under the first option the pendency measure is the same as under the Business Plan—it begins from the date the patent application is filed. However, under the second option pendency begins when the examination fee is paid. Table 1 shows USPTO’s projections of patent pendency under three scenarios using different assumptions. The first scenario shows the Business Plan’s pendency projections. The second scenario is based upon the Strategic Plan where an applicant pays the filing fee and examination fee at the same time, thus seeking immediate examination. 
The third scenario is based on the Strategic Plan where an applicant pays the filing fee and then defers examination. Regarding the third scenario, the Strategic Plan notes that to determine the average total pendency under the Strategic Plan (from the date an application is filed to issue of a patent or abandonment of the application), 9 months should be added to the plan’s calculation to reflect the estimated average examination deferral period. According to USPTO officials, fewer months should be added in the early years. Table 1 shows for the third scenario that when the deferral time is added, average pendency from filing until patents are granted or applications are abandoned would be longer under the Strategic Plan than under the Business Plan for those applicants who elect to defer examination of their applications. USPTO noted that the fiscal year 2008 difference between the 18 months in the second scenario and the 27 months in the third scenario is a measure of deferred examination. Patent examiners’ responsibility for the search function on most domestic applications would also be eliminated under the Strategic Plan. Instead, with the exception of a new class of applicant—the micro-entity—applicants would arrange for such searches by private organizations, foreign patent offices, or others; USPTO would continue to do searches for micro-entities. This change would allow examiners more time to focus on the examination function. USPTO assumes under the Strategic Plan that a portion of the patent examiners’ time will be refocused from non-examination to examination functions. USPTO officials told us that most of the refocused time would result from eliminating the search function. The detailed action plans supporting the Strategic Plan show that eliminating the search function would increase examiners’ productivity between 5 and 20 percent.
Almost 2,100 more new patent examiners would be hired under the Business Plan than under the Strategic Plan—4,750 versus 2,688. This difference reflects revised assumptions about new hires and the number of examiners expected to leave. Under the Business Plan, USPTO expects to hire 950 examiners and assumes a 10 percent attrition rate each year during fiscal years 2003 through 2007. Under the Strategic Plan, USPTO expects to hire 750 examiners annually for fiscal years 2003 and 2004 and 396 examiners annually for fiscal years 2005 through 2007. USPTO assumes 11 percent and 8 percent attrition rates for fiscal years 2003 and 2004, respectively, and 9 percent attrition annually for fiscal years 2005 through 2007. Fewer examiners would be required under the Strategic Plan because fewer new applications are anticipated and examiners would no longer be required to do the search function for most patent applications. Patent fee restructuring would be implemented in fiscal year 2004 under the Business Plan, and by October 1, 2002, under the Strategic Plan. There would be a one-time surcharge of 19.3 percent on patents and 10.3 percent on trademarks in fiscal year 2003 under the Business Plan, but no surcharge under the Strategic Plan. USPTO officials told us that the restructured fees would need to be put in place earlier than proposed under the Business Plan to compensate for the elimination of the one-time surcharge and the expected decrease in patent applications, and to implement changes proposed to improve quality and reduce pendency. Fee collections and funding requirements projected for fiscal year 2003 in the Business Plan would be the same in the Strategic Plan—$1.527 billion in fee collections and $1.365 billion in funding requirements—but the specifics would change. For example, the Strategic Plan’s patent-funding requirements would increase by about $27 million and trademark-funding requirements would decrease by the same amount. 
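The hiring difference cited above follows directly from the plans’ stated annual hiring assumptions; a quick arithmetic check (illustrative only):

```python
# New-examiner hiring, FY 2003-2007, per the assumptions stated in the report.
business_hires = 950 * 5               # Business Plan: 950 examiners per year
strategic_hires = 750 * 2 + 396 * 3    # Strategic Plan: FY 2003-04, then FY 2005-07

print(business_hires)                    # 4750
print(strategic_hires)                   # 2688
print(business_hires - strategic_hires)  # 2062, i.e., "almost 2,100" more hires
```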
Furthermore, projected funding requirements for fiscal years 2003 through 2007 would total about $539 million less under the Strategic Plan than under the Business Plan— $8.396 billion versus $8.935 billion—as a result of changing assumptions, such as fewer patent applications filed and fewer patent examiners needed. For fiscal years 2004 through 2007, the Business and Strategic Plans both predict that fee collections and funding requirements will equal each other. There would be some significant changes in the patent fee structure under legislation proposed on June 20, 2002. While the proposed filing fee under the Strategic Plan would be lower than the current filing fee, a new examination fee would be added and other fees would be higher. In addition, some new fees would be established for such things as surcharges authorized by the USPTO Director in certain instances. For example, a surcharge could be charged for any patent application whose specification and drawings exceed 50 sheets of paper. (See app. I, p. 30.) Generally, large entities would pay higher fees under the proposed legislation. The current fee structure provides that large entities pay a $740 patent filing fee that covers both the filing and examination of the patent application. While the proposed legislation would have large entities pay a $300 patent filing fee, it would also require applicants that request examination (assumed by USPTO to be 90 percent of the large entity applicants) to pay an additional $1,250 examination fee. Patent issue fees would also be higher under the proposed legislation. Consequently, a large entity that receives a utility patent would incur a fee increase of nearly $1,200, or about 59 percent, over current fees. Furthermore, the three fees to maintain the patent through its useful life would be higher. 
If a large entity maintains the patent through the payment of the three maintenance fees, the total fee increase resulting from the proposed legislation would be nearly $4,100, or about a 51 percent increase over current fees. (See app. I, p. 31.) Small entities also would pay increased fees under the proposed legislation, as shown in table 2. Instead of paying a $370 patent filing fee (50 percent of the $740 fee for large entities) that covers both the filing and examination of the patent application under the current fee structure, small entities would pay a $150 patent filing fee (50 percent of the new $300 fee) under the proposed legislation. However, small entities that request examination (assumed by USPTO to be 90 percent of the small entity applicants) also would have to pay the new $1,250 examination fee; with the exception of the new “micro-entity” category, small entities would not get a discount on the new examination fee. In addition, issue fees for small entities are also higher. As a result, a small entity that receives a utility patent would incur a fee increase of over $1,200, or about 121 percent, over current fees. Furthermore, because maintenance fees are higher, if a small entity maintains the patent through the payment of the three maintenance fees, the total fee increase resulting from the proposed legislation would be nearly $2,700, or about a 67 percent increase over current fees. We provided a copy of our draft report to USPTO for review and comment. USPTO responded that the factual information in our draft report provides a good picture of USPTO’s transition to its new Strategic Plan. USPTO added that the Strategic Plan is USPTO’s road map for creating, over the next 5 years, an agile and productive organization fully worthy of the unique leadership role the American intellectual property system plays in the global economy. 
In addition, USPTO provided technical clarifications and corrections to our draft report, which we incorporated as appropriate. USPTO’s comments are presented in appendix II. To provide information on past and future USPTO operations, including information on the number of patent applications filed, patents granted, inventory of patent applications, patent pendency, patent examiner staffing, and fee collections and funding requirements, we reviewed key USPTO documents, such as its April 2001 Corporate Plan, February 2002 Business Plan, and June 2002 Strategic Plan. We also reviewed various budget documents, performance and accountability reports, planning and other internal documents, and historical data provided by the agency. In addition, we interviewed USPTO senior management and other officials, as well as representatives of the Patent Public Advisory Committee and the Patent Office Professional Association. Recognizing that a detailed examination of the Strategic Plan would be premature until congressional action is taken on the fee legislation proposal and USPTO’s fiscal year 2003 budget request, we agreed to identify some of the differences between the Business and Strategic Plans. We compared selected aspects of those plans, including key assumptions and proposed operating changes. We also discussed with USPTO officials how USPTO develops projections of key business indicators, such as pendency and funding requirements. For example, we obtained information about USPTO’s Patent Production Model, which is a computer-based system that estimates staffing needs, production, pendency, and other key business indicators for managerial decisionmaking. To determine how the current patent-fee structure would change under the proposed fee legislation, we compared current fees with the June 20, 2002, fee legislation proposal. We obtained USPTO officials’ views on the accuracy of our analysis. 
In addition, we reviewed the results of published analyses of the fee proposal by others, including the American Intellectual Property Law Association and the Intellectual Property Owners Association. Although we did not independently verify the data provided by USPTO, to the extent feasible we corroborated it with other agency sources. We performed our work from April 2002 through July 2002 in accordance with generally accepted government auditing standards. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 7 days after the date of this letter. At that time, we will send copies to appropriate House and Senate Committees; the Under Secretary of Commerce for Intellectual Property and Director of the United States Patent and Trademark Office; the Chief Financial Officer and Chief Administrative Officer, USPTO; the Secretary of Commerce; and the Director, Office of Management and Budget. This letter will also be available on GAO’s home page at http://www.gao.gov. If you or your staffs have any questions concerning this report, please call me on (202) 512-6225. Key contributors to this report included John P. Hunt, Jr., Byron S. Galloway, and Don Pless. This appendix contains the information used to brief the staff of Representative Lamar Smith on July 12, 2002, and the staff of the Chairman of the Joint Economic Committee on July 25, 2002.

U.S. Patent and Trademark Office: Information on Past and Future Operations, July 25, 2002

Briefing slides included:
Background (con’t.)
Objectives, Scope, and Methodology (con’t.)
Patent Applications Filed (Fiscal years 1990-2001) and Corporate, Business, and Strategic Plan Projections (Fiscal years 2002-2007)
Patents Granted (Fiscal years 1990-2001) and Corporate, Business, and Strategic Plan Projections (Fiscal years 2002-2007)
End-of-Year Patent Inventory (Fiscal years 1990-2001) and Corporate, Business, and Strategic Plan Projections (Fiscal years 2002-2007)
Total Patent Pendency (Fiscal years 1990-2001) and Corporate, Business, and Strategic Plan Projections (Fiscal years 2002-2007)
Employment of Patent Examiners (Fiscal years 1990-2001) and Business and Strategic Plan Projections (Fiscal years 2002-2007)
Patent Examiners Hired (Fiscal years 1990-2001) and Business and Strategic Plan Projections (Fiscal years 2002-2007)
Examiners Who Left (Fiscal years 1990-2001) and Business and Strategic Plan Projections (Fiscal years 2002-2007)
Fee Collections and Funding Requirements (Fiscal years 1999-2001) and Business Plan Projections (Fiscal years 2002-2007)
Fee Collections and Funding Requirements (Fiscal years 1999-2001) and Strategic Plan Projections (Fiscal years 2002-2007)

Restructured patent fees would be implemented earlier to compensate for the eliminated FY 2003 patent fee surcharge and the expected decrease in patent applications.
New examination fee would be added; 50 percent small entity discount would not apply to examination fee.
Issue fee would be higher.
The combined filing, examination, and issue fees would be higher.
All three maintenance fees would be higher.
New fees would be created, such as a surcharge authorized by the USPTO Director in certain instances.
A new “micro-entity” category would be created, with a discount on the examination fee to be prescribed by the USPTO Director.
Utility patents include chemical, electrical, and mechanical applications. In fiscal year 2001, utility patents represented over 90 percent of all patents granted that year.
How Patent Fee Structure Would Change Under Proposed Legislation (con’t.)

The U.S.
Patent and Trademark Office (USPTO) has a staff of 6,426 and collected $1.1 billion in patent and trademark fees in fiscal year 2001. As the U.S. economy depends increasingly on new innovations, the need to quickly patent or trademark the intellectual property resulting from such innovations becomes more important. Expressing concerns about USPTO's plans for the future, Congress directed USPTO to develop a 5-year plan. In February 2002, USPTO issued its first 5-year plan, called the USPTO Business Plan. Because the Director of USPTO did not believe that the Business Plan went far enough, in June 2002, USPTO produced another 5-year plan, called the 21st Century Strategic Plan. GAO found that patent activity grew substantially from 1990 through 2001. The numbers of patent applications filed and patents granted nearly doubled; the inventory of patent applications nearly tripled; patent pendency increased from slightly over 18 months to nearly 25 months; and the number of patent examiners increased by about 80 percent. Furthermore, in fiscal year 2001, both fee collections and agency funding requirements exceeded $1 billion for the first time in the agency's history. Although both 5-year plans cover the same period, the assumptions and projected results of the Business Plan are different in several ways from the Strategic Plan. The administration's recent legislative proposal to restructure patent fees to implement the Strategic Plan would result in higher fees for the majority of patent applicants--large entities--that receive utility patents and maintain such patents into the future. Consequently, total fees for these applicants would increase by $4,100, or 51 percent. Also, total fees for most small entities would increase by $2,700, or 67 percent, over current fees.
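The percentage increases above can be cross-checked against the dollar figures cited in the report. In this sketch (illustrative only), the current combined fee totals are back-solved from the reported increases rather than taken from USPTO’s actual fee schedule, so treat them as approximations:

```python
# Filing + examination fees: current vs. proposed (figures from the report).
large_filing_exam_increase = (300 + 1_250) - 740   # large entity: $810
small_filing_exam_increase = (150 + 1_250) - 370   # small entity: $1,030

# Back-solve the implied current fee totals from the reported increases.
# (Approximations; the report does not reproduce the full fee schedules.)
large_through_issue = 1_200 / 0.59   # ~$2,030 current filing+exam+issue, large entity
small_through_issue = 1_200 / 1.21   # ~$990, small entity
large_lifetime = 4_100 / 0.51        # ~$8,040 including all maintenance fees
small_lifetime = 2_700 / 0.67        # ~$4,030

print(large_filing_exam_increase, small_filing_exam_increase)
print(round(large_through_issue), round(small_through_issue))
print(round(large_lifetime), round(small_lifetime))
```

The back-solved totals also show that most of the increase through issuance comes from the new $1,250 examination fee, with the remainder attributable to the higher issue fee.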
When AFRICOM was designated fully operational on September 30, 2008, it consolidated the responsibility for DOD activities in Africa that had previously been shared by the U.S. Central, European, and Pacific Commands. AFRICOM’s area of responsibility includes the countries on the African continent, with the exception of Egypt, as well as its island nations. The command’s mission is to work in concert with other U.S. government agencies and international partners to conduct sustained security engagement through military-to-military programs, military-sponsored activities, and other military operations as directed to promote a stable and secure African environment in support of U.S. foreign policy. According to AFRICOM, it received about $340 million in funding in fiscal year 2009. In addition to AFRICOM’s headquarters, the command is supported by military service component commands, a special operations command, and a Horn of Africa task force (see fig. 1). AFRICOM’s Navy Forces and Marine Corps components were designated fully operational on October 1, 2008, and its Air Force, Army, and special operations command components on October 1, 2009. The task force was transferred to AFRICOM on October 1, 2008. All components have begun carrying out activities under AFRICOM. As of June 2010, AFRICOM reported that the command and its components had about 4,400 assigned personnel and forces. About 2,400 of these personnel were based at locations in Europe, and about 2,000 personnel—about 400 staff and about 1,600 forces—were assigned to the command’s Horn of Africa task force at Camp Lemonnier, Djibouti. AFRICOM also stated that there could be between 3,500 and about 5,000 rotational forces deployed during a major exercise. When AFRICOM was established, it inherited the activities previously conducted by its predecessors.
Many of these activities reflect DOD’s shift toward building the security capacity of partner states, a mission area noted in the department’s 2010 Quadrennial Defense Review. Building security capacity furthers the U.S. objective of securing a peaceful and cooperative international order and includes such activities as bilateral and multilateral training and exercises, foreign military sales and financing, officer exchange programs, educational opportunities at professional military schools, technical exchanges, and efforts to assist foreign security forces in building competency and capacity. In particular, AFRICOM’s inherited activities to build partner capacity, some of which involve coordination with State, range from efforts to train African soldiers in conducting peacekeeping operations to assisting African nations in combating terrorism, and they include one of the largest U.S. military activities in Africa, Operation Enduring Freedom–Trans Sahara. The areas of responsibility and examples of activities transferred to AFRICOM from the U.S. Central, European, and Pacific Commands are presented in figure 2. AFRICOM emphasizes that it works in concert with interagency partners, such as USAID, to ensure that its plans and activities directly support U.S. foreign policy objectives. On the African continent, DOD focuses on defense, State plans and implements foreign diplomacy, and USAID leads foreign development, including efforts to support economic growth and humanitarian assistance. DOD issued Joint Publication 3-08 in March 2006 to provide guidance to facilitate coordination between DOD and interagency organizations. The publication acknowledged that the various U.S. government agencies’ differing, and sometimes conflicting, goals, policies, procedures, and decision-making techniques make unity of effort a challenge, but noted that close coordination and cooperation can help overcome challenges. 
The 2008 National Defense Strategy identified AFRICOM as an example of DOD’s efforts toward collaborating with other U.S. government departments and agencies and working to achieve a whole-of-government approach. Additionally, the 2010 Quadrennial Defense Review identified the need to continue improving DOD’s cooperation with other U.S. agencies. In particular, the report stated that DOD will work with the leadership of civilian agencies to support those agencies’ growth and their overseas operations so that the appropriate military and civilian resources are put forth to meet the demands of current contingencies. In our February 2009 report on AFRICOM, we noted that after DOD declared AFRICOM fully operational, concerns about AFRICOM’s mission and activities persisted among various stakeholders. Concerns included areas such as humanitarian assistance and other noncombat activities that involve non-DOD agencies and organizations. The concerns centered on the view that AFRICOM could blur traditional boundaries between diplomacy, development, and defense. In some cases, the apprehensions stemmed from DOD having more resources than other agencies, which could allow it to dominate U.S. activities and relationships in Africa. Among African nations, we found that there was some concern that AFRICOM would be used as an opportunity to increase the number of U.S. troops and military bases in Africa. AFRICOM has created overarching strategic guidance and has led activity planning meetings with stakeholders such as State. However, activities are being implemented even though the detailed supporting plans for conducting many of them have not yet been finalized. Moreover, AFRICOM has postponed time frames for completing several of these supporting plans by about 2 years. Without supporting plans, AFRICOM cannot ensure that the activities of its components are appropriate, comprehensive, complementary, and supportive of its mission.
AFRICOM has published command-level overarching strategic guidance and has led activity planning meetings with its components and interagency partners. Strategic plans are the starting point and underpinning for a system of program goal-setting and performance measurement in the federal government. DOD strategic planning guidance, issued in 2008, requires each geographic combatant command to produce a theater campaign plan and specific posture requirements for its given area of responsibility. In September 2008, AFRICOM published its theater strategy, a 10-year strategy describing the strategic environment in which the command operates. In May 2009, the Secretary of Defense approved AFRICOM’s theater campaign plan, a 5-year plan that describes the command’s theater strategic objectives, establishes priorities to guide the command’s activities, and provides guidance to the command’s staff and components. In its theater campaign plan, AFRICOM outlined priority countries that are of strategic importance, and it identified its theater strategic objectives, such as defeating the al-Qaeda terrorist organization and associated networks in Africa; ensuring that capacity exists to respond to crises; improving security-sector governance and stability; and protecting populations from deadly contagions. AFRICOM officials said that they worked with State and USAID officials to incorporate their perspectives into the theater campaign plan. However, AFRICOM officials observed that the Africa strategies for State and USAID have different timelines from those of AFRICOM, thus posing a challenge for alignment among the command and its interagency partners. For example, AFRICOM’s theater campaign plan covers fiscal years 2010 through 2014, whereas the State/USAID strategic plan spans fiscal years 2007 through 2012. In addition to developing its theater strategy and campaign plan, AFRICOM has also led activity planning meetings for future activities. 
The command has held annual Theater Security Cooperation Conferences, which include officials from AFRICOM, its components, U.S. embassies, and other federal agencies. At these meetings, AFRICOM proposes activities to conduct for the following fiscal year, and it engages with other federal agency officials to coordinate and implement activities. Additionally, for individual activities, AFRICOM may hold multiple planning meetings prior to implementation. For example, for AFRICOM’s Natural Fire 10 pandemic preparedness and response activity, four phases of planning occurred during the year prior to the exercise. These phases included: concept development, in which potential focuses for the exercise were discussed; initial planning, in which the final focus of the exercise and its location were determined; main planning, in which key partners determined the activities that would make up the exercise; and final planning. Similarly, in July 2009, we observed the main planning conference for activities of the Africa Partnership Station’s USS Gunston Hall, which was deployed from March through May 2010. This conference built upon the progress of the initial planning conference, and it was followed by a final planning conference to identify specific details for the activity. During our observation of the main planning conference, we noted that AFRICOM’s Navy component engaged DOD, interagency, and African partners in the coordination of Africa Partnership Station events. Although AFRICOM has developed overarching strategic guidance and led planning meetings, it lacks specific supporting plans on conducting activities, which hinders planning and implementation efforts. As we previously reported, an agency should cascade its goals and objectives throughout the organization and should align performance measures with the objectives from the executive level down to the operational levels. 
While AFRICOM’s theater campaign plan identifies strategic objectives, it does not include detailed information on how to plan, implement, or evaluate specific activities. Rather, the theater campaign plan states that AFRICOM is to create specific supporting plans—(1) component support plans, (2) regional engagement plans, and (3) country work plans—with more detailed information. However, AFRICOM has not yet approved its military service components, special operations command, and task force support plans for use in guiding their activities. Furthermore, the command has not completed its five regional engagement plans or country work plans for Africa (see fig. 3). In reviewing AFRICOM’s theater campaign plan, we found that it provides overarching guidance but does not include specific information such as detailed activity information and the amount of effort focused on specific countries or regions. Rather, AFRICOM’s theater campaign plan states that specific supporting plans will provide this information. To examine how another combatant command approaches planning, we compared AFRICOM’s theater campaign plan to that of the U.S. Southern Command, a more mature DOD geographic combatant command that operates in the Americas and the Caribbean, which, like AFRICOM, also has a focus on building partner capacity and collaborating with interagency partners. While this comparison was not meant to conclude that one combatant command’s approach is superior to the other, our analysis did find differences between the two plans. For example, we noted that AFRICOM’s theater campaign plan identifies only one activity—the African Partners Enlisted Development program—and calls for the establishment of regional engagement plans to focus on activities and programs. In contrast, Southern Command’s theater campaign plan includes detailed information on dozens of its activities, and no supporting regional engagement plans are required. 
Additionally, although AFRICOM’s theater campaign plan identifies priority countries for each of its theater strategic objectives, it calls for supporting regional engagement plans and country work plans to provide additional regional and country information. In contrast, Southern Command’s theater campaign plan specifically details the percentage of engagement effort that will be directed toward each region and country. In essence, it appears that both Southern Command and AFRICOM require that similar types of information on regional efforts and activities be incorporated into plans. The difference is that AFRICOM’s approach requires the completion of supporting plans while Southern Command provides this information in its theater campaign plan. AFRICOM’s specific supporting plans—its components’ support plans and regional engagement plans—have not yet been completed. AFRICOM’s theater campaign plan required that component support plans be completed by each AFRICOM component no later than December 1, 2009, to address activities for fiscal years 2010 through 2012. According to AFRICOM, as of June 2010, four of the six component support plans had been developed and were ready to present to the AFRICOM commander for approval. The Navy’s supporting plan, for example, was developed in November 2009, but had not yet been signed out by the AFRICOM commander. AFRICOM’s theater campaign plan also requires the development of five regional engagement plans—North, East, Central, West, and South—to provide more detailed regional, country, and programmatic guidance. Specifically, AFRICOM’s theater campaign plan states that both it and the regional engagement plans provide the command’s prioritization of time, effort, and resources for all steady-state activities that the command executes. 
The theater campaign plan states that regional engagement plans should contain three elements: (1) regional planning guidance, which highlights key objectives for each region that must nest within the theater security objectives outlined in the theater campaign plan; (2) a 2-year calendar that depicts planned security cooperation engagement activities, month by month and country by country, for the region; and (3) country work plans, which should be developed for each critical partner identified in the theater campaign plan. The country work plans should include a detailed list of activities and events designed to make progress toward objectives for each region within a particular country, and they are required to be aligned with U.S. embassy Mission Strategic and Resource Plans to ensure unity of effort. At the time we completed our audit work, the regional engagement plans had not been approved by the command, and the country work plans were still in the process of being developed. Furthermore, AFRICOM has postponed time frames for completing several of its supporting plans. For example, completion of the regional engagement plans has been repeatedly delayed throughout our review—postponed by about 2 years—from February 2009 to October 2009 to May 2010 to the first quarter of fiscal year 2011. While AFRICOM officials had previously told us that component support plans would be completed by December 2009, officials later stated that they expect the plans to be completed within 60 days of the regional engagement plans. DOD officials told us that AFRICOM held a planners’ conference in April 2010 and that draft plans, such as country work plans, were discussed at this meeting to obtain the components’ input. Moreover, in the absence of plans, DOD officials stated that AFRICOM holds weekly meetings with the components to discuss activities. 
However, by conducting activities without having specific plans in place to guide activity planning and implementation, AFRICOM risks not fully supporting its mission or objectives. Without having approved component support plans and regional engagement plans, AFRICOM and its components cannot be sure that they are conducting activities that align with the command’s priorities. Currently, each of the military service components has established priority countries/areas in Africa, but in some cases they overlap or differ from each other and also differ from the priority countries that AFRICOM has identified. Air Force component officials told us, for example, that they used AFRICOM’s designation of priority countries to inform their initial identification of priority countries, but they also considered where U.S. Europe Command’s Air Force component had prior engagements or existing relationships with Africans. These officials told us that they recently updated their priority countries based on their own objectives. The officials explained that, because the Air Force component has different objectives than AFRICOM’s other military service components and because certain African countries have varying levels of Air Force capabilities, their designated priority countries would not necessarily coincide with those of other military service components. Marine Corps component officials said that their designated priority countries reinforce AFRICOM’s designated “willing and capable” African nations; however, our analysis shows that the priority countries identified by AFRICOM and those identified by its Marine Corps component also do not fully align. Additionally, activities currently conducted by the military service components may overlap with AFRICOM’s Combined Joint Task Force– Horn of Africa’s operating area. 
AFRICOM stated that in the absence of completed supporting plans, it has taken some steps to coordinate activities among its components, including the use of an information database to manage individual activities. AFRICOM stated that use of the database helps ensure a unified effort among the components. While component officials we spoke with said that the database can help them determine whether another AFRICOM component is planning an activity within a similar time frame or with the same African country, they noted that use of the database is preliminary within AFRICOM and that not all component activities may be included in the database. Air Force component officials said that they currently lack visibility and coordination with the other components for the full range of activities, and as a result, they may be unaware of some activities being planned or conducted by other AFRICOM components. Similarly, officials from AFRICOM’s Army component stated that perhaps the greatest challenge to creating positive conditions in Africa is ensuring that U.S. defense efforts remain synchronized; if plans are not coordinated, their efforts could have unintended consequences, such as the potential for Africans to perceive the U.S. military as trying to influence public opinion in a region sensitive to the military’s presence. Until AFRICOM completes specific plans to guide its activity-planning efforts and determines whether priorities are appropriately aligned across the command, it cannot ensure that the efforts of its components are appropriate, complementary, and comprehensive. AFRICOM has yet to make critical decisions about the future of its Horn of Africa task force, including what changes, if any, are needed for the task force or its activities to best support the command. 
In April 2010, we reported that AFRICOM had not decided whether changes are needed to the task force’s mission, structure, and resources to best support the command’s mission of sustained security engagement in Africa. Moreover, AFRICOM has stated that, as the capabilities of its military service components become mature, the command will determine the best course of action for transferring task force activities to the other components as necessary to ensure sustained security engagement with African countries within the task force’s operating area. Some military service component officials said that coordination with the task force can be difficult. For example, Air Force component officials said that it has been challenging to coordinate with the task force because it is unclear how the task force’s roles, responsibilities, and efforts align with those of AFRICOM and the Air Force component. With the exception of the task force, each of AFRICOM’s component commands is located in Europe and does not have assigned forces (see fig. 1). To conduct activities, AFRICOM’s military service components must request forces through a formal Joint Staff process. Force planning currently occurs within the Joint Staff 2 years prior to the designated fiscal year; forces needed for emergent requirements must typically be requested 120 days in advance. AFRICOM officials told us that the command must request forces and equipment for its military service components to carry out any type of activity in Africa—whether it be a large-scale operation or additional personnel needed to travel to the continent to plan a future program. Moreover, they said that AFRICOM does not always receive the forces or equipment it requests for an activity because DOD may have higher-priority needs. 
From AFRICOM’s and some military service components’ perspective, having to formally request forces for all activities may affect AFRICOM’s effectiveness if there are greater DOD priorities. Furthermore, the special operations command component stated that, without assigned forces, it cannot act as a crisis-response force, which is the role of special operations commands in other combatant commands. AFRICOM has occasionally used Combined Joint Task Force–Horn of Africa personnel with appropriate skill sets outside of its operating area and area-of-interest countries, such as in Liberia and Swaziland, and these forces could potentially be leveraged for other activities. Completing an evaluation of the task force in a thorough yet expeditious manner and clearly articulating any needed changes to the task force’s mission, structure, and resources will aid in AFRICOM’s efforts to plan and prioritize the many activities it inherited upon its establishment and ensure that personnel and resources are applied most effectively to enhance U.S. military efforts in Africa. It is unclear whether all of the activities that AFRICOM has inherited or is planning fully align with its mission of sustained security engagement in Africa because, in addition to unfinished strategic plans, AFRICOM is generally not measuring the long-term effects of its activities. Our prior work has highlighted the importance of developing mechanisms to monitor, evaluate, and report on results, and we have previously reported that U.S. agencies cannot be fully assured that they have effectively allocated resources without establishing an assessment process. In addition, according to Standards for Internal Control in the Federal Government, U.S. agencies should monitor and assess the quality of performance over time. 
The lack of clear, measurable goals makes it difficult for program managers and staff to establish linkages between their day-to-day efforts and the agency’s achievement of its intended mission. The Government Performance and Results Act also emphasizes that agencies should measure performance toward the achievement of goals. Moreover, AFRICOM’s theater campaign plan requires assessments of theater security cooperation activities. AFRICOM has developed a tool to measure progress in meeting its strategic objectives. The tool measures objective factors (e.g., number of identified al-Qaeda members in a country), subjective factors (e.g., likelihood of an imminent terrorist attack), and perceptive factors (e.g., the level of protection against terrorism that Africans expect their governments to provide). However, AFRICOM officials told us that this tool is used primarily for strategic planning purposes and not for follow-up on individual activities. Moreover, beyond AFRICOM, our prior work has shown that DOD and State have conducted little monitoring and evaluation of certain security assistance programs. Specifically, DOD and State have not carried out systematic program monitoring of funds for projects that, among other things, train and equip partner nations’ militaries to conduct counterterrorism operations. Instead, reporting has generally consisted of anecdotal information, although DOD has taken initial steps to establish systematic program monitoring. For example, DOD has hired a contractor to identify current project roles, data sources, and ongoing assessment activities in order to develop a framework for assessing projects. However, DOD officials stated that they had not consistently monitored these security assistance projects, and State officials were not involved with or aware of a formal evaluation process. 
Our review of 58 proposals for security assistance projects in African countries from fiscal years 2007 to 2009 revealed that only 15, or 26 percent, of the proposals included a description of how the activities would be monitored over time. In addition, only 10 of the project proposals, or 17 percent, included information related to program objectives or anticipated outcomes. While some activities appear to support AFRICOM’s mission, others may have unintended consequences—which underscores the importance of consistently measuring the long-term effects of the full range of the command’s activities. AFRICOM has stated that a primary purpose of its activities is to build partner capacity. The two activities we reviewed in depth appear to support this mission. First, the Africa Partnership Station initiative builds maritime security capabilities of African partners through ship- and land-based training events focused on areas such as maritime domain awareness, leadership, navigation, maritime law enforcement, search and rescue, civil engineering, and logistics (see app. I). Second, the command’s Natural Fire 10 exercise brought together participants from Burundi, Kenya, Rwanda, Tanzania, and Uganda to build partner capacity in responding to a pandemic influenza outbreak (see app. II). Moreover, State and U.S. embassy officials said that peacekeeping and military-to-military training activities help support embassy goals and U.S. foreign policy objectives in African nations. For example, the U.S. embassy in Algeria stated that AFRICOM’s activities directly support the embassy’s objectives of counterterrorism cooperation and engaging with and modernizing the Algerian military. In addition, a senior official at the U.S. embassy in Mozambique told us that AFRICOM supports the embassy’s goals pertaining to maritime security and professionalizing Mozambique’s military. 
However, based on concerns raised by interagency officials, other activities may not fully align with U.S. foreign policy goals or they may not reflect the most effective use of resources. For example, State officials expressed concern over AFRICOM’s sponsorship of a news Web site about the Maghreb, citing the potential for Africans to perceive the U.S. military as trying to influence public opinion. State had previously told us that countries in the Maghreb are very sensitive to foreign military presence, and if a program is marketed as a U.S. military activity or operation, it may not be well received among these nations. AFRICOM officials said that they had inherited this activity from U.S. European Command and that they have been working closely with State in its implementation. Moreover, DOD officials observed that, with respect to the Maghreb news Web site sponsorship, the intent of the activity is to influence African public sentiment—the same effect for which some State officials have expressed concern. They said that State supports this as a foreign policy goal in Africa, and senior State officials have endorsed the Maghreb news Web site sponsorship activity. Similarly, some officials questioned whether the U.S. military should conduct a musical caravan activity in Senegal, which is intended to promote peace by having local artists provide free concerts throughout the country. State officials noted that the activity has overwhelmed embassy staff, who had to spend significant time ensuring that AFRICOM’s effort was appropriately aligned with embassy goals. AFRICOM officials acknowledged that there have been some concerns with this activity and that it is being reviewed by both the command and State. However, AFRICOM noted that all activities within a country are reviewed and approved by the U.S. embassy before they are executed. However, at the U.S. embassy level, officials also expressed concern about some of AFRICOM’s activities. 
For example, according to one U.S. embassy, AFRICOM’s sociocultural research and advisory teams, which comprise one to five social scientists who conduct research and provide cultural advice to AFRICOM, seem to duplicate other interagency efforts. AFRICOM officials told us that they use the information provided by the teams to help guide operations in Africa and obtain perspectives on cultural sensitivities among the local populations. However, the embassy expressed concern about the U.S. military performing this type of research itself instead of coordinating with interagency partners to gain sociocultural information. Moreover, an internal State memo emphasized the need for close coordination among AFRICOM’s research teams and U.S. embassies. In March 2010, the Secretary of State issued guidance to U.S. embassies in Africa on AFRICOM’s sociocultural research and advisory activities, stating that AFRICOM’s research teams will share their findings with embassy staff and other government counterparts. Finally, State and USAID officials we contacted at one U.S. embassy expressed concern that some of the activities that AFRICOM’s Horn of Africa task force had previously proposed, such as building schools for an African nation, did not appear to fit into a larger strategic framework, and said that they did not believe the task force was monitoring its activities as needed to enable it to demonstrate a link between activities and mission. The embassy officials cited a past example where the task force had proposed drilling a well without considering how its placement could cause conflict in clan relationships or affect pastoral routes. While concerns raised about specific AFRICOM activities may or may not be valid, without conducting long-term assessments of activities, AFRICOM lacks the information needed to evaluate the effects of the full range of its activities, to be able to respond to critics if need be, and to make informed future planning decisions. 
AFRICOM appears to perform some follow-up on activities shortly after their completion, but the command is generally not measuring the effects of activities over the long term. AFRICOM officials we met with while observing the command’s Natural Fire 10 pandemic preparedness and response activity in Uganda told us that the command planned to produce an “after action” report after the activity, but they acknowledged that AFRICOM needs to develop a method to perform longer-term assessments on activities. With respect to the Natural Fire engineering projects, for example, the officials said that AFRICOM does not know whether projects such as reconstructing a school will have a sustainable effect on the community. AFRICOM’s Humanitarian Assistance Branch has developed an assessment tool for Natural Fire that relates to the command’s security objectives, but an official told us that AFRICOM is still determining exactly what will be assessed with respect to the activity. AFRICOM also envisions continuing its work on pandemic response by engaging bilaterally with each of the countries involved in the 2009 Natural Fire exercise. DOD, State, and officials we contacted at several U.S. embassies in Africa also stated that, from their perspectives, AFRICOM is not measuring the long-term effects of its activities in Africa. State officials told us, for example, that AFRICOM’s Military Information Support Teams, which are intended to support State and U.S. embassies by augmenting or broadening existing public-diplomacy efforts, are not assessing the effect of their efforts. In addition, while the Africa Partnership Station activity has been viewed as a successful African partner training platform, concerns were raised that it may have taken on too many training activities—which range from maritime domain awareness to maritime law enforcement to civil engineering to humanitarian assistance efforts. 
With the potential for its mission to become amorphous or lose its effectiveness, it was suggested that the Africa Partnership Station might be more effective if it targeted its resources toward fewer activities. In our April 2010 report on AFRICOM’s Horn of Africa task force, we noted that the task force performs some short-term follow-up on activities, but AFRICOM officials said that the task force has not historically been focused on performing long-term assessments on activities to determine whether the activities are having their intended effects or whether modifications to activities need to be made. In response to our report, the task force acknowledged that it needed to improve its ability to evaluate the effectiveness of its activities. The task force stated that it had taken steps to incorporate measures of performance and effects in its planning process so that it can determine whether its activities are achieving foreign policy goals. The command’s sociocultural research and advisory team in the area is also being used to help assess task force activities, and the task force is beginning to follow up on past activities, such as medical clinics, to determine their effects over time. We commend the task force for these efforts, which could serve as models for implementing long-term activity assessments across AFRICOM. AFRICOM’s limited long-term evaluation of activities to date may result, in part, from the differences in agency cultures among DOD, State, and USAID. Officials from State and USAID told us that their agencies are focused on monitoring and on long-term results, while they viewed DOD as having a tendency to take a short-term approach focused on immediate implementation or results. Similarly, nonprofit-organization officials said that, from their perspective, the U.S. military tends to view development activities on a onetime basis and is not focused on monitoring or measuring the effects of an activity after completion. 
They voiced concern that AFRICOM will not know whether its activities are effective or be in a position to evaluate the quality of the services its activities may be providing. Long-term evaluation can be difficult to achieve but remains nonetheless important for AFRICOM in meeting its mission in Africa. While some activities may promote temporary benefits for the participants, their short-term nature or unintended long-term effects could potentially promote unfavorable views of the U.S. military among partner nations. We previously reported, for example, that AFRICOM’s Horn of Africa task force had built a well for a local African community, but it did not teach the community how to maintain it. AFRICOM officials stated that they recognize the difficulties associated with measuring long-term effects of activities, particularly the ability to link an action to a desired effect. For example, AFRICOM Navy component officials told us that it is difficult to measure the Africa Partnership Station’s return on investment because changes in Africa can be incremental and thus it can be difficult to determine whether the activity caused the change or whether the effects will persist over time. The Navy has been working with the Center for Naval Analyses to assess the Africa Partnership Station. Center for Naval Analyses officials told us that their work has shown that Africa Partnership Station training has been successful in changing African participants’ attitudes toward maritime safety and security activities but that it has been more difficult to show changes in the behavior of participating African nations. Despite the challenges associated with measuring long-term effects, implementing such assessments for all of its activities can help AFRICOM make successful future planning decisions and allocate resources to maximize its effect in Africa. 
Some AFRICOM staff face difficulties in applying funding to activities, which can pose challenges in implementing activities and impede long-term planning efforts. AFRICOM stated that it had access to 15 different funding sources to fund its activities in fiscal year 2009. In addition, AFRICOM reported that it influences other State and USAID funding sources—such as funds for State’s Global Peace Operations Initiative and International Military Education and Training, and USAID’s Pandemic Response Program—but that these funding sources are not managed by the command. We consistently heard from officials at AFRICOM and its components that applying funding to activities was not well understood by staff and that they lacked expertise to effectively carry out this task. For example, Army component officials told us that activities must be designed to meet specific criteria in order to be granted funds and that their staff do not have the skills required to understand the complexities of funding. Similarly, Navy and Air Force component officials said that staff spend substantial amounts of time trying to determine which funding sources can be appropriately applied to which activities. Many different funding sources may be required for small segments of an activity, such as transportation or lodging for participants. Determining which specific funding sources should be used for various activities has sometimes resulted in problems with activities. Officials cited instances in which limited understanding resulted in African nations having their invitations to AFRICOM-sponsored activities rescinded or in activities having to be canceled. In two recent instances, an official said that AFRICOM essentially disinvited two-thirds of the intended participants for activities at the last minute because it was discovered that certain funding sources could not be used to support the participants. 
This caused much embarrassment and frustration for the Africans who had planned to attend the activities. Marine Corps component officials said that difficulties in identifying the appropriate funding source prevented them from responding to African requests for activities, causing the cancellation of some peacekeeping exercises. AFRICOM’s Navy component has also struggled with the application of multiple funding sources to the Africa Partnership Station activity, an official explained, occasionally resulting in delayed submissions of funding packages to U.S. embassies for approval. Table 1 shows eight different funding sources required for theater security cooperation activities associated with the Africa Partnership Station’s 2009 USS Nashville deployment. According to AFRICOM’s Navy component, funding a large activity like the Africa Partnership Station on a 1-year planning horizon has hindered the ability to conduct persistent training efforts. Officials said that funding sources, such as the Combatant Commander Initiative Fund, are only available for a year and must be applied only to new initiatives. Similarly, Global War on Terrorism funds, now known as Overseas Contingency Operations funds, are supplemental appropriations, which officials said do not provide permanency for the activity. Our prior work has encouraged DOD to include known or likely project costs of ongoing operations related to the war on terrorism in DOD’s base budget requests. Navy component officials told us that Africa Partnership Station may get its own funding line for fiscal years 2011 through 2015. If approved by the President, Navy component officials believe the dedicated budget line would help facilitate funding the activity, although AFRICOM added that the Africa Partnership Station will still require several funding sources to support the activity. In its 2010 Quadrennial Defense Review, DOD stated that U.S. 
security assistance efforts are constrained by a complex patchwork of authorities and unwieldy processes. Several AFRICOM and component officials we contacted agreed, with some stating that funding challenges hampered their ability to sustain relationships in Africa. AFRICOM stated that the limitations of current funding sources create a continuing challenge for the command, noting that some funding sources were not designed for the types of activities AFRICOM carries out and thus do not adequately support AFRICOM’s mission of sustained security engagement. Army component officials said that funding sources available for activities tend to be short term and must be used in a finite time frame, which limits long-term planning capabilities and the ability to have a sustained presence in Africa. AFRICOM’s special operations command officials said that the lack of sustainable funding sources has created a short-term, unsustainable approach to the command’s activities, describing their efforts as sporadic connections with African countries with which they should have enduring relationships. Marine Corps component officials described having to ask AFRICOM for funds for activities that fall outside of funding cycles, noting the need for streamlined funding for effective sustained engagement in Africa. Our prior work on security assistance activities also found that the long-term effect of some projects may be at risk because it is uncertain whether funds will be available to sustain the military capabilities that the projects are intended to build. There are limits on the use of U.S. government funds for sustainment of certain security assistance projects, and most participating countries have relatively low incomes and may be unwilling or unable to provide the necessary resources to sustain the projects. 
Moreover, officials told us that the process for submitting proposals for security assistance projects is lengthy, requiring them to begin writing the next fiscal year’s plans before the last year’s are processed, and that the time frames for receiving and applying the funding from the various funding sources needed for the project do not necessarily align with one another. For example, AFRICOM might apply resources from one funding source to deliver a maritime vessel to an African country, but the resources that must be obtained from another funding source to train the recipients on how to use the vessel may fall within a different time frame. DOD guidance emphasizes the need for proper training and staffing to increase effectiveness in budgeting. AFRICOM component officials told us that guidance or training on applying funding sources to activities would be helpful. When we asked about funding expertise within AFRICOM, Air Force component officials said that it is difficult to find assistance at AFRICOM because officials must first be able to identify the appropriate funding source in order to ask the correct AFRICOM staff member about that source. From their perspective, no individual at AFRICOM or its Air Force component command has comprehensive knowledge of all available funding sources for activities. AFRICOM officials said they provide the components guidance on the Combatant Commander Initiative Fund and noted that AFRICOM does not provide the actual funding to the components for many sources they use to fund activities. Additionally, they said that the command is researching funding sources available for activities, which they believe will help AFRICOM better define which sources can be applied to which activities. Our April 2010 report on AFRICOM’s Horn of Africa task force found similar issues among the task force’s budget staff. 
According to task force officials, budget staff must master a steep learning curve to understand the provisions associated with these funding sources because the task force comptroller and deputy comptroller are not financial specialists, generally do not work on military comptroller issues full time, and have short tour lengths. This steep learning curve can result in delays in conducting activities, as task force staff described spending extra time and resources understanding how to apply funding to activities. Moreover, AFRICOM stated that command staffing and tour lengths contribute to the difficulties in learning and maintaining knowledge of funding for task force activities. For example, task force staff had intended to continue providing training for senior enlisted Ethiopian military members through one type of funding source, but they later found that the source did not allow for training of foreign military members. Consequently, the staff had to revise their program from one of training officers to one of providing feedback to Ethiopian instructors. While eventually task force staff may correctly identify funding sources for their activities, their limited skills in applying funding may result in difficulties in implementing activities. We recommended that AFRICOM take actions to ensure that its task force budget personnel have the expertise and knowledge necessary to make timely and accurate funding decisions for activities. DOD concurred with our recommendation and cited some actions it had taken or planned—such as conducting on-the-job training and lengthening some tours for personnel—to augment critical skills among task force personnel. We believe the steps DOD outlined, if implemented in a timely and comprehensive manner, could help increase understanding and expertise associated with applying funding sources to activities within AFRICOM’s Horn of Africa task force. 
However, DOD’s comments were limited to AFRICOM’s task force personnel and do not address the lack of understanding of funding sources throughout the command. Without a greater understanding of how to apply funding to activities, AFRICOM will likely continue to face difficulties in implementing activities—including the potential that activities may be delayed, funds may not be effectively used, and African partner nations may be excluded from participating—as well as institutionalizing knowledge within the command. AFRICOM has made efforts to integrate interagency personnel into its command and collaborate with other federal agencies on activities, but it is not fully engaging interagency partners in planning processes. According to DOD and AFRICOM officials, integrating personnel from other U.S. government agencies into the command is essential to achieving AFRICOM’s mission because it will help AFRICOM develop plans and activities that are more compatible with those agencies. AFRICOM was established with two deputy commanders—a military commander who oversees military operations and a civilian commander for civil-military activities. The civilian commander directs the command’s activities related to areas such as health, humanitarian assistance, disaster response, and peace support operations. According to AFRICOM, this deputy commander—who is currently a State ambassador-level official—also directs outreach, strategic communication, and AFRICOM’s partner-building functions. As of June 2010, AFRICOM reported that it had embedded 27 interagency partners into its headquarters staff, which represents about 2 percent of the total headquarters staff. These officials have been placed in several directorates throughout the command. The interagency staff came from several federal agencies, including the Departments of Energy, Homeland Security, Justice, State, and Treasury; USAID; the Office of the Director of National Intelligence; and the National Security Agency. 
The command also plans to integrate five foreign policy advisors from State later this year, according to officials at AFRICOM and State. Moreover, DOD has signed memorandums of understanding with nine federal agencies to outline conditions on sending interagency partners to AFRICOM. These memorandums cover such topics as the financial reimbursement between DOD and the federal agencies for participating employees, the length of time the interagency partner may reside at AFRICOM, and logistical provisions (housing, office space, etc.). Table 2 compares the reported number of interagency personnel at AFRICOM at the time it reached unified command status with that of June 2010. AFRICOM has had difficulty obtaining interagency officials to work at the command at the numbers desired. In February 2009, we reported that the command initially expected to fill 52 positions with personnel from other government agencies. However, according to DOD and AFRICOM officials, this initial goal was notional and was not based on an analysis of specific skill sets needed to accomplish AFRICOM’s mission. During our current review, command officials told us that there is no target number for interagency personnel, but rather that AFRICOM is trying to determine where in its command organization it could benefit from employing interagency personnel or where interagency partners would prefer to provide personnel. Command officials said that it would be helpful to have additional interagency staff at AFRICOM, but they understand that staffing limitations, resource imbalances, and lack of career progression incentives for embedded staff from other federal agencies may limit the number of personnel who can be brought in from these agencies. AFRICOM has coordinated with other federal agencies. For example, AFRICOM met with representatives from 16 agencies to gain interagency input into its theater campaign plan. 
We spoke with officials from State, USAID, and the Coast Guard who stated that they provided input into several additional strategy documents, including DOD’s Guidance for Employment of the Force and AFRICOM’s posture statement, as well as participated in activity planning meetings. State officials stated that AFRICOM has made improvements in taking their feedback and creating an environment that is conducive to cooperation across agencies. Similarly, USAID officials told us that AFRICOM has improved its coordination with their agency at the USAID headquarters level. Additionally, AFRICOM has created memorandums of understanding with some U.S. embassies, such as between AFRICOM’s Horn of Africa task force and the U.S. embassy in Kenya. This memorandum outlines procedures for conducting activities, actions to be taken by task force personnel in Kenya, and communication policies between the task force and the embassy, among other topics. While AFRICOM has made efforts to work with interagency partners, it is not fully engaging federal partners in activity planning processes in two areas. Our prior work has recommended, and the department generally agreed, that DOD provide specific implementation guidance to combatant commanders on the mechanisms that are needed to facilitate and encourage interagency participation in the development of military plans, develop a process to share planning information with interagency representatives early in the planning process, and develop an approach to overcome differences in planning culture, training, and capacities among the affected agencies. Some interagency officials have stated that AFRICOM (1) is not always involving other federal agencies in the formative stages of activity planning, and (2) does not fully leverage expertise of interagency personnel embedded at AFRICOM. 
While AFRICOM has made progress in coordinating with other federal agencies since its establishment, interagency partners may not be included in the formative stages of activity planning. DOD’s 2010 Quadrennial Defense Review states that the department will continue to advocate for an improved interagency strategic planning process. However, several federal agency officials said that AFRICOM tends to plan activities first and then engage partners, rather than including interagency perspectives during the initial planning efforts. Some U.S. embassy officials described AFRICOM’s annual activity planning meetings, the Theater Security Cooperation Conferences, as useful for bringing together AFRICOM and federal partners to plan for future AFRICOM activities; however, they noted that past meetings have been limited in their effectiveness because AFRICOM set the agenda without interagency input, which they viewed as restricting their role. Additionally, officials said that AFRICOM gave presentations of its planned exercises during one of its annual activity planning conferences, but there was not meaningful discussion with interagency partners on the most appropriate activities to conduct. One official described the embassies’ role at the conference as telling AFRICOM which proposed activities the embassies could not accommodate due to limited resources. Some federal officials suggested that interagency collaboration could be improved at AFRICOM’s annual activity planning conferences if State took a lead role, although limited State resources would make this unlikely. In general, both State and AFRICOM told us that funding shortages prevent some State officials from participating at AFRICOM planning events. Nonetheless, some State officials noted that AFRICOM could better align its activities with U.S. 
foreign policy goals and reduce the potential to burden U.S. embassy staff in carrying out activities if AFRICOM would involve interagency partners earlier in the planning process. From its perspective, AFRICOM said that State has had significant influence in its planning processes, noting that State’s deputy chiefs of mission, as well as USAID mission directors, were provided time to present information on their respective countries at the November 2009 Theater Security Cooperation Conference and that State officials are involved in other AFRICOM activity planning events throughout the year. Following AFRICOM’s most recent Theater Security Cooperation Conference, federal officials stated that the command’s integration of interagency perspectives had improved from previous conferences. The officials commented that AFRICOM officials appeared genuinely interested in learning about foreign policy and political issues in African countries from U.S. embassy officials and that the emphasis of many command presentations appeared to convey AFRICOM’s role as supporting U.S. embassies and furthering U.S. foreign policy goals. During our observations of an Africa Partnership Station planning conference in July 2009, AFRICOM and its Navy component officials acknowledged that they needed to improve communications among AFRICOM, its Navy component, and the U.S. embassies; since that time, we found that AFRICOM has taken some steps to address the problems. At that conference, an official at the U.S. embassy in Ghana stated that details of a previous USS Nashville port visit were not provided to the embassy prior to the ship’s arrival. Rather, when the ship arrived and the Navy component prepared to provide training, it was discovered that the proposed training did not meet the needs of the Ghanaian Navy. As a result, the U.S. 
embassy was required to work with AFRICOM’s Navy component to quickly put together a new training plan so that the Ghanaian Navy could receive more relevant training. According to a State official, AFRICOM should work on communicating the Africa Partnership Station’s mission in advance of its deployment because it is too late to conduct strategic communications once a ship is already in port. In response to concerns raised at the conference, AFRICOM has implemented a pilot program to help embassy public affairs offices generate public awareness of maritime security issues regarding 2010 Africa Partnership Station activities. As of February 2010, funding for the program had been provided to U.S. embassies in Gabon, Ghana, Senegal, and Mozambique. Conversely, our observation of the Natural Fire 10 pandemic preparedness and response exercise in Uganda illustrated that early and continuous interagency involvement can lead to a successful outcome. Prior to the initial planning for Natural Fire 10, DOD and USAID signed an interagency agreement to streamline collaboration in enhancing African military capacity to respond to an influenza pandemic. When AFRICOM began planning Natural Fire 10, it included USAID in the initial discussions to consider the feasibility of focusing a portion of the exercise on pandemic planning and response, as outlined in the interagency agreement. USAID also funded civilian participation in that portion of the exercise. In addition, State and U.S. embassy officials were included at all Natural Fire 10 planning conferences prior to the exercise. Furthermore, an embedded USAID official at AFRICOM told us that the pandemic focus of that portion of the Natural Fire 10 exercise was unique because it was designed more like a USAID activity than a DOD activity, having a longer-term focus to allow AFRICOM to sustain and expand the program over time. 
By working with interagency partners throughout the planning process, AFRICOM was able to sponsor an activity that was well received by its interagency partners. Interagency personnel embedded into AFRICOM’s organization may not be fully leveraged for their expertise, which can make it more difficult for some interagency personnel to contribute to the command’s work. Our prior work has noted that having a strategy for defining organizational roles and responsibilities and coordination mechanisms can help national security agencies clarify who will lead or participate in activities, organize their joint and individual efforts, and facilitate decision making. Although AFRICOM has included information on interagency collaboration in its theater campaign plan and created an interagency board to facilitate collaboration, an embedded interagency official stated that AFRICOM employs a hierarchical rather than collaborative approach to decision making. The command’s Army component echoed this sentiment, stating that coordination in developing strategies is less collaborative than coordination on specific activities. This approach differs markedly from USAID and State’s planning approaches, which officials described as focusing on brainstorming with all relevant personnel or on the long-term results of the activities. Additionally, an embedded official from another federal agency told us that while AFRICOM officials bring some issues to interagency personnel at the command to obtain their perspectives, more often interagency staff must insert themselves into relevant meetings to affect decision making. For example, a USAID official formerly embedded at AFRICOM said that USAID embedded officials have to ask how they can help the command, even though he believed that the military officials should be asking how AFRICOM can provide support to USAID, as the command has stated that it is in a supporting role to USAID on development activities. 
Furthermore, some embedded interagency personnel said that coordination is problematic when activity planning takes place directly at AFRICOM’s military service component commands and not at AFRICOM headquarters, as there are few embedded interagency staff members in the military service components. State echoed this remark, noting that from its perspective, planning and decision making at the command’s military service components is separate from that at AFRICOM headquarters, which creates difficulties for coordination with interagency partners. As a result, many activities could have undergone substantial planning at the component level before interagency perspectives are sought. Moreover, some interagency personnel embedded at AFRICOM have said that they may not be fully leveraged for their expertise. AFRICOM officials told us that it is a challenge to determine where in the command to include the interagency personnel. For example, an official from the Transportation Security Administration decided on his own which directorate to work in when he joined the command because AFRICOM had not identified a directorate for him. Another embedded interagency staff member stated that AFRICOM initially placed him in a directorate unrelated to his skill set, and he initiated a transfer to another directorate that would better enable him to share his expertise. In addition, Coast Guard officials stated that AFRICOM does not fully understand the roles and responsibilities of the Coast Guard and what knowledge and expertise it could provide the command. The officials cited an example of AFRICOM’s Navy component performing law enforcement training instead of allowing the Coast Guard to take the lead on providing this training to African forces. Difficulties in leveraging interagency partners are not unique to AFRICOM. 
As we have previously reported, organizational differences—including differences in agencies’ structures, planning processes, and funding sources—can hinder interagency collaboration, potentially wasting scarce funds and limiting the effectiveness of federal efforts. Notwithstanding these difficulties, interagency collaboration can be successful—for example, observers have cited the U.S. Southern Command as having mature interagency planning processes and coordinating mechanisms. Southern Command has also identified civilian federal agencies as leads for each of its theater security objectives, furthering the early involvement of interagency partners. A senior State official said that AFRICOM’s understanding of the roles of interagency partners might be improved if additional staff from other federal agencies were embedded at the command. However, several embedded interagency staff said that there is little incentive to take a position at AFRICOM because it will not enhance one’s career position upon return to the original agency after the rotation. Additionally, staffing shortages at other federal agencies reduce agencies’ abilities to send additional staff to AFRICOM. In February 2009, we reported that State officials told us that they would not likely be able to provide active employees to fill the positions requested by AFRICOM because they were already facing a 25 percent shortfall in mid-level personnel—although AFRICOM and State officials said that five State foreign policy advisors are expected to arrive at the command later this year. Despite challenges, AFRICOM has made some efforts that could improve interagency collaboration within the command, such as expanding its interagency orientation process and including opportunities for interagency input into daily command meetings. 
In addition, AFRICOM said that its Deputy to the Commander for Civil-Military Affairs, a senior State official, is in charge of outreach for the command and sometimes chairs command staff meetings. In fall 2009, the command conducted an assessment of the embedded interagency process to analyze successes and identify lessons learned, including recommendations on how to integrate interagency personnel into command planning and operations. AFRICOM identified five key observations based on its assessment: (1) embedded staff want to ensure they can accomplish their own objectives and not merely perform duties that a DOD employee could perform; (2) interagency personnel arrive at AFRICOM with the expectation that they will help achieve not only command goals and objectives but also U.S. government goals, yet they feel that DOD employees do not expect embedded personnel to develop new programs; (3) embedded interagency personnel need to understand the function, operation, and role of a military command and how it differs from other federal government agencies; (4) the military planning process is more structured than the planning approaches of other government agencies; and (5) embedded personnel experience an overwhelming adjustment to military culture. The assessment identified several recommendations and suggestions, such as developing a training and orientation program for embedded interagency personnel. In July 2010, AFRICOM stated that it had established an interagency command collaborative forum to assess, prioritize, and implement the recommendations from the study. Fully leveraging its embedded interagency partners can help AFRICOM contribute to a unified U.S. government approach to activity planning and implementation in Africa. 
AFRICOM emphasizes the importance of collaborating with its interagency partners and building cultural awareness; however, the command has sometimes experienced difficulty implementing activities because some personnel have limited knowledge about working with U.S. embassies and about cultural issues in Africa. The training or guidance available to augment personnel expertise in these areas is limited. Some AFRICOM personnel have limited knowledge of working with U.S. embassies and of African culture, which can decrease the effectiveness of implementing activities. AFRICOM emphasizes that it works closely with the U.S. embassies and chiefs of mission to ensure that its activities are consistent with U.S. foreign policy and contribute to unity of effort among the interagency. While many U.S. embassies told us that the command has made efforts to coordinate with them, some AFRICOM staff’s knowledge of how to work with U.S. embassies is limited. USAID officials told us that while AFRICOM has made improvements coordinating with their agency at the headquarters level, most USAID planning efforts occur at U.S. embassies in country and that AFRICOM has not fully integrated its staff into the planning process at the country level. Moreover, in our prior work on AFRICOM’s Horn of Africa task force, we reported that task force personnel did not always understand embassy procedures for interacting with African partner nations. For example, task force personnel would, at times, approach the Djiboutian government ministries directly with concepts for activities rather than follow the established procedure of having the U.S. embassy in Djibouti initiate the contact. Additionally, in our prior work on the Trans-Sahara Counterterrorism Partnership activity, we noted that disagreements about whether State should have authority over DOD personnel temporarily assigned to conduct activities affected implementation of DOD’s activities in Niger and Chad. 
In commenting on that report, DOD stated that it believed sufficient guidance existed that defined the authorities of DOD’s combatant commander and State’s chief of mission but noted that issuing joint guidance reflecting the implications of the shift to a greater DOD emphasis and support in shape and deter operations would be helpful to both the combatant commander and chief of mission in the Trans-Sahara Counterterrorism Partnership region. A senior State official formerly stationed at AFRICOM told us that command and control responsibilities in Africa are improving but that issues still exist. He cited a recent example in which the U.S. ambassador to Liberia maintained that the embassy should have authority over DOD personnel carrying out security sector reform activities in the country, while AFRICOM argued that it needed shared authority over these personnel. A shared authority agreement was eventually reached for DOD personnel who would reside in Liberia on a semipermanent basis. Some AFRICOM personnel’s limited knowledge of working with U.S. embassy staff can impose burdens on embassies because, as officials stated throughout our review, the embassies are short-staffed. The Department of State Inspector General released a report in August 2009 stating, in part, that the embassies in Africa are understaffed and that the U.S. military is filling a void created by a lack of embassy resources for traditional development and public diplomacy. AFRICOM’s requests for information and assistance with activities take embassy staff away from their assigned duties to focus on command priorities. For example, a U.S. embassy official in Uganda stated that AFRICOM personnel arrived in country with the expectation that the embassy would take care of basic cultural and logistical issues for them. AFRICOM is trying to increase its presence in U.S. embassies and send planning teams prior to activity implementation in order to alleviate the burden it has placed on U.S. 
embassies. According to command officials, AFRICOM inherited 12 offices at U.S. embassies in Africa, and as of June 2010, it had added 5 offices, bringing its total U.S. embassy presence to 17. Command officials told us that they plan to have a total of 28 offices in U.S. embassies, which would give AFRICOM a presence in just over half of the 53 countries in its area of responsibility. Additionally, at an Africa Partnership Station planning conference, we observed Navy component officials request guidance from and offer suggestions on how to ease the administrative burden the activity may place on U.S. embassy personnel. AFRICOM has also begun to send reservists to African countries to help with coordination prior to an Africa Partnership Station ship visit. By providing more assistance to the embassies, AFRICOM can potentially ease the burden placed on them as command staff work to increase their understanding of engaging with the embassies and partner nations. Cultural awareness is a core competency for AFRICOM, but the limited knowledge of some AFRICOM and military service component staff about African cultural issues occasionally leads to difficulties in building relationships with African nations. For example, as we reported in our prior work on AFRICOM’s Horn of Africa task force, task force personnel did not always understand cultural issues, such as the time required to conduct activities in African villages or local religious customs. In one case, the task force distributed used clothing to local Djibouti villagers during Ramadan, which offended the Muslim population. In another case, according to a U.S. embassy official, AFRICOM’s task force provided 3 days’ notice that it would conduct a medical clinic in a remote village in Djibouti. However, because the villagers are nomads, it was difficult to get participants with that short amount of notice. 
Moreover, a Ghanaian military participant involved with the Africa Partnership Station said that AFRICOM’s tendency to generalize its programs across Africa is not effective, as each country is different and requires an individualized approach. A better understanding of African cultural issues would likely help AFRICOM improve relationships with African nations. For example, as we previously reported, a U.S. embassy official in Tanzania said that AFRICOM’s task force team members had become proficient in Swahili, thus helping them to develop relationships. Getting to know the language, culture, and the people in the region, the embassy official said, has contributed to the success in developing a Tanzanian-American partnership in a region where extremists are known to operate. In addition, an internal State memo described AFRICOM’s sociocultural research and advisory teams as intending to provide personnel with the necessary background to work more effectively on the ground and to interact in a more respectful and collaborative manner with local populations. While a U.S. embassy had voiced concern about the teams appearing to duplicate interagency efforts, the State memo stressed the need for coordination with embassy and USAID personnel, including the sharing of information obtained in the field. In general, a more widespread and robust understanding of African culture could help personnel avoid fostering unfavorable views of AFRICOM among Africans and straining relations between African nations and the U.S. government. We found that AFRICOM personnel and forces deploying for activities receive some training on working with interagency partners and on African cultural awareness—and that efforts are under way to increase training for some personnel—but our review of training presentations indicated that they were insufficient to adequately build the skills of AFRICOM’s staff. Moreover, AFRICOM does not monitor training or require that it be completed. 
We have previously reported that collaborative approaches to national security require a well-trained workforce with the skills and experience to integrate the government’s diverse capabilities and resources, and that increased training opportunities and strategic workforce planning efforts could facilitate federal agencies’ ability to fully participate in interagency collaboration activities. AFRICOM officials told us that current training for personnel includes Web courses, seminars led by DOD’s Africa Center for Strategic Studies, and guest-speaker programs. In addition, there are theater entry training requirements for personnel deploying to Africa, such as medical and cultural awareness Web-based training. Officials said, however, that while training is encouraged, it is not required, and that the command does not currently monitor the completion of training courses. We asked to review the training presentations provided to incoming AFRICOM staff. Our review of the 10 training presentations that the command provided to us found that they did not contain cultural awareness information. However, AFRICOM stated that 2 hours of Africa cultural awareness training are provided to new command staff during the first day of training, though we were not given documentation of this training. Additionally, our review found that 7 of the 10 training presentations did not contain interagency information. The remaining 3 presentations provided an overview of AFRICOM partners, including international government organizations, nongovernmental organizations, and other federal government agencies; identified the interagency partners at the command; and provided more detailed information on one specific federal agency. While these training presentations offered some suggestions for planning and cooperative opportunities with other federal agencies, we found that they were brief and lacked specific guidance on how to involve interagency partners. 
Furthermore, because the presentations are provided during the beginning of tours, when personnel are also learning about their new assignments and daily operations, it is unlikely that they provide for comprehensive, effective training. AFRICOM issued joint training guidance in December 2009 that included as a training goal the need to work with other federal agencies, but the guidance lacks specific actions to reach this goal as well as measures to evaluate progress and effects. Moreover, the guidance states that AFRICOM will develop predeployment guidance for personnel, but we noted that no time frames were provided for when the guidance will be issued. In our prior work on AFRICOM’s Horn of Africa task force, we reported that the task force’s training on working with U.S. embassies was not shared with all staff, and cultural awareness training was limited. We recommended, and DOD agreed, that AFRICOM develop comprehensive training guidance or a program that augments assigned personnel’s understanding of African cultural awareness and working with interagency partners. Since our report, AFRICOM has taken some steps to increase training opportunities for task force personnel. For example, we reviewed an extensive briefing on East African culture that the task force said is now being provided to all incoming task force personnel. In addition, the task force stated that its sociocultural research and advisory teams provide some task force personnel with cultural and political training when needed, including training for some personnel prior to deployment. Finally, the task force said that online training on cultural awareness is now available to all task force personnel, and that it intends to make this training mandatory in the future. Formal training is important because it would help institutionalize practices in the command. 
Officials from AFRICOM’s Army, Marine Corps, and Air Force components and task force all voiced a preference for more cultural training and capabilities, with Army officials noting that staff do not have sufficient understanding of the size, diversity, and unique problems confronting the different regions of Africa. In addition, during our observation of Natural Fire 10, an Air Force official told us that his team received no training on Ugandan culture prior to its deployment. An AFRICOM official told us it would be beneficial to have increased sociocultural training at the command’s headquarters as well as a database to monitor training completion. AFRICOM’s Air Force component officials told us that they have begun working with the Air Force Culture and Language Center to develop Web-based African cultural awareness training for Air Force personnel deploying on AFRICOM activities, but officials noted that AFRICOM had not provided any cultural awareness training to the Air Force. Several officials from other federal agencies suggested possible courses that might be cost-effective or easy for AFRICOM to implement, such as a State online course focused on working with U.S. embassies, curricula at the Foreign Service Institute that prepare U.S. embassy personnel, or training similar to that provided to Peace Corps volunteers. State also recommended that AFRICOM develop best practices for working more effectively and efficiently with other agencies to ensure that any lessons learned are institutionalized within the command. In June 2010, AFRICOM held a symposium to discuss how to augment language, regional expertise, and cultural competence capabilities. The command identified some options under consideration to improve capabilities, including possibly establishing an office to develop training initiatives, holding an annual symposium, and developing a newsletter with articles by personnel about their deployment experiences. 
These considerations reflect the command’s recognition that it needs to improve its personnel’s expertise. However, until AFRICOM develops, requires, and monitors training for all of its personnel on working with interagency partners and understanding African cultural issues, it continues to risk being unable to fully leverage resources with U.S. embassy personnel, build relationships with African nations, and effectively carry out activities. Building the capacity of partner nations to secure and defend themselves has become a key focus of DOD, and AFRICOM’s focus on supporting security and stability in Africa has the potential to advance this effort. Despite initial concerns among stakeholders about the potential U.S. militarization of foreign policy or increasing the U.S. military footprint on the continent, AFRICOM has made progress in developing overarching strategies and trying to engage interagency partners. Moreover, since our April 2010 report on AFRICOM’s task force, efforts have been made to begin to evaluate some task force activities in the Horn of Africa. However, AFRICOM still faces challenges that could limit its effectiveness. Until the command completes supporting plans to guide activity planning and implementation and begins consistently conducting long-term assessments of activities, it cannot ensure that the actions it is taking on the continent best support DOD and U.S. foreign policy objectives. On a broader level, without plans and assessments, AFRICOM lacks the critical information it needs to make successful future planning decisions and to allocate resources to maximize its effect in Africa. Moreover, while many U.S. 
embassies and federal partners now believe that AFRICOM has the potential to make positive contributions in Africa, until the command more fully incorporates interagency partners into its activity planning process, AFRICOM continues to risk the perception—or worse, the possibility—of conducting activities that may counter U.S. foreign policy interests or lead to unintended consequences. Finally, assigning more than 4,000 personnel and forces to AFRICOM and its components illustrates DOD’s commitment to conducting activities in Africa. Developing a well-trained workforce that understands the complexities associated with working on the continent can advance the department’s efforts to foster stability and security through improved relationships with African nations. To more effectively plan, prioritize, and implement activities in a collaborative interagency environment that aligns with both the command’s mission of sustained security engagement and U.S. foreign policy goals; make effective use of resources in a fiscally constrained environment; and take steps to institutionalize its processes and procedures, we recommend that the Secretary of Defense direct the Commander, AFRICOM, to take the following five actions: (1) synchronize activities among AFRICOM’s components by expediting the completion of its regional engagement plans, country work plans, and component support plans, and develop a process whereby plans are reviewed on a recurring basis to ensure that efforts across the command are complementary, comprehensive, and supportive of AFRICOM’s mission; (2) conduct long-term assessments of the full range of its activities to determine whether the activities are having their intended effects and supporting AFRICOM’s mission; (3) take actions to ensure that budget staff within its military service components, special operations command, task force, and Offices of Security Cooperation within U.S. embassies in Africa have the expertise and knowledge necessary to make timely and accurate funding decisions for activities (these actions could include some combination of training, staffing changes, and/or comprehensive guidance on applying funding sources to activities); (4) fully integrate interagency personnel and partners into the formative stages of the command’s activity planning processes to better leverage interagency expertise; and (5) in consultation with State and USAID, develop a comprehensive training program, with a means to monitor completion, for staff and forces involved in AFRICOM activities on working with interagency partners and U.S. embassies on activities and cultural issues related to Africa. In its written comments on a draft of this report, DOD concurred with all of our recommendations and cited some actions that it was taking to address the issues we identified in this report. DOD’s comments are reprinted in appendix IV. Technical comments were provided separately by DOD, State, and the U.S. Coast Guard and incorporated as appropriate. USAID chose not to provide any comments. DOD concurred with our first recommendation that AFRICOM synchronize activities among AFRICOM’s components by expediting the completion of its supporting plans and developing a process whereby plans are reviewed on a recurring basis. In its response, the department stated that, in the absence of supporting plans, AFRICOM conducts weekly meetings at which its components and the Horn of Africa task force discuss the status of current activities and future events. The department added that AFRICOM uses an information database to manage events conducted by the command and its components. We noted these efforts in our report, and we agree that it is a good practice for AFRICOM to coordinate with its components through weekly meetings and an information database. 
However, as our report states, component officials have noted that within AFRICOM the use of the database is preliminary, that the database may not include all component activities, and that coordinating defense efforts in Africa remains a challenge. Furthermore, DOD stated in its response that regional engagement plans and component support plans are in the final stages of review and approval by AFRICOM’s leadership, and will be used by the staff and components to guide and synchronize activities even though the plans have not been formally approved. The department added that country work plans are being developed for the command’s critical partners as identified in the theater campaign plan. However, the department’s response did not include a specific time frame for completion of AFRICOM’s plans. Such a time frame is critical, given that AFRICOM has repeatedly postponed the completion of several of its supporting plans. Until AFRICOM finalizes and approves its plans, AFRICOM risks conducting activities that do not fully support its mission and may hinder a unity of effort among its components. DOD also concurred with our second recommendation that AFRICOM conduct long-term assessments of the full range of its activities. The department stated that its Horn of Africa task force is now required to report on the effectiveness of its activities—which we note in our report. Moreover, the department stated that all AFRICOM operations and planning orders now include tasks to staff and components to develop metrics and indicators and to conduct assessments; however, we were not provided copies of these documents during our review. If these actions are implemented in a comprehensive manner such that they require long-term evaluation of all AFRICOM activities, they have the potential to provide the command with valuable information on whether its activities are having their intended effects or whether modifications are needed. 
Completing thorough long-term assessments of its activities will aid in the command’s efforts to make successful future planning decisions and allocate resources to maximize its effect in Africa. DOD also concurred with our third recommendation that AFRICOM take actions to ensure that its components’ and Offices of Security Cooperation’s budget personnel have the appropriate expertise and knowledge to make timely and accurate funding decisions for activities. DOD fully agreed with us regarding the need to improve the use of security cooperation tools through training, staff changes, and better guidance. DOD further stated that while AFRICOM has Title 10 authorities to conduct traditional military activities and operations, the activities that are most important to the department in Africa center around building institutional and operational security capacity and that most of the authorities and funding for these activities belong to State Department programs under Title 22 authorities. In our report, we acknowledge AFRICOM’s reports of having access to several funding sources, as well as influence over some State and USAID funding sources, and that many different funding sources may be required for an activity. We also note in our report that DOD, in its 2010 Quadrennial Defense Review, stated that U.S. security assistance efforts are constrained by a complex patchwork of authorities. We maintain that, given the challenges associated with applying various funding sources to activities in Africa, AFRICOM should identify and complete specific actions—such as training, staffing changes, and/or comprehensive guidance—to increase understanding among its budget staff and institutionalize knowledge throughout the command. DOD also concurred with our fourth recommendation that AFRICOM fully integrate interagency personnel and partners into the formative stages of the command’s activity planning processes to better leverage interagency expertise. 
The department noted that AFRICOM is unique in that, in addition to a military deputy commander, it has a Deputy Commander for Civil-Military Activities—a senior Foreign Service Officer of ambassadorial level who helps ensure that policy/program development and implementation include interagency partners and are consistent with U.S. foreign policy. In our report, we highlighted the civilian deputy as a positive example of AFRICOM’s efforts to integrate interagency personnel into the command. DOD also noted that it continues to pursue qualified interagency representatives to work in management and staff positions at AFRICOM, will work with its partners to prepare personnel for assignment in a military organization, and encourages interagency partners to fill vacant positions and reward their detailees for taking assignments at AFRICOM. Our review highlights some efforts AFRICOM has taken to integrate its interagency partners into command planning and operations—such as developing a training and orientation program for embedded interagency personnel. We also state in our report that staffing shortages at other federal agencies reduce those agencies’ ability to send additional staff to AFRICOM. DOD’s response does not indicate how AFRICOM intends to better integrate interagency personnel into the formative stages of activity planning, which would help AFRICOM better leverage interagency expertise and promote a U.S. government unity of effort in Africa. Finally, DOD concurred with our fifth recommendation that AFRICOM develop a comprehensive training program on working with interagency partners and African cultural issues. DOD noted that AFRICOM has developed cultural awareness training for all incoming headquarters personnel, which is mandatory and tracked. We include in our report that AFRICOM told us it allots 2 hours to Africa cultural awareness during the first day of training for new command staff. 
However, since presentations are given at the beginning of tours, when personnel are also learning about their new assignments and daily operations, we believe that it is unlikely that this constitutes comprehensive, effective training. The department also stated that AFRICOM’s Horn of Africa task force personnel receive Web-based and in-country training as part of newcomers’ orientation. As we note in our report, we reviewed the task force’s briefing on East African culture and found it to be extensive and a positive step toward training personnel. Furthermore, DOD stated that key personnel attend training for working with embassies; however, the department did not identify which personnel attend the training and what opportunities are available for those who do not attend it. Additionally, DOD did not address how AFRICOM would mandate staff participation in any training it develops. Until AFRICOM provides training or guidance to its staff on working with interagency partners and cultural issues in Africa, the command risks being unable to fully leverage resources with U.S. embassy personnel, build relationships with African nations, and effectively carry out activities. We are sending copies of this report to the Secretary of Defense; the Secretary of Homeland Security; the Secretary of State; and the Administrator, United States Agency for International Development. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-3489 or at pendletonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Led by Africa Command’s (AFRICOM) Navy component, the mission of the Africa Partnership Station is to build maritime safety and security capabilities with African nations. 
Training is typically conducted aboard a ship, moving between ports to offer training at sea and ashore with African partners. Africa Partnership Station training events focus on a broad range of areas, including maritime domain awareness, leadership, navigation, maritime law enforcement, search and rescue, civil engineering, and logistics. Crew members also participate in humanitarian assistance efforts focusing on health care, education, and other projects in local communities, which may involve participation by other federal agencies including the Department of State (State) and U.S. Agency for International Development (USAID). AFRICOM’s Navy component coordinates with other AFRICOM components to conduct Africa Partnership Station activities, including the Marine Corps component and the Combined Joint Task Force–Horn of Africa; interagency partners such as the U.S. Coast Guard, State, and USAID; and participants from over 22 countries from Europe, Africa, and South America. Figure 7 shows a few of the Africa Partnership Station activities. The Africa Partnership Station activity began under U.S. European Command and was transferred to AFRICOM upon reaching full operational capacity. As of May 2010, there have been 14 Africa Partnership Station deployments, including a deployment of vessels from the Netherlands and Belgium. Table 3 identifies Africa Partnership Station ships, deployment dates, and countries visited. In July 2009, we observed the main planning conference for the USS Gunston Hall, which was scheduled to conduct Africa Partnership Station activities from February through May 2010. After an initial diversion to Haiti for disaster relief support, the USS Gunston Hall arrived in West and Central Africa in March 2010. The Africa Partnership Station deployment used a “hub” approach, such that the USS Gunston Hall conducted operations out of ports in two countries—Ghana and Senegal. 
Members from various African nations were brought to these two hubs to receive training. Specific Africa Partnership Station activities on the USS Gunston Hall included maritime workshops and seminars on small boat operations, maritime law enforcement boarding, maritime domain awareness, and fisheries management and maritime meteorology. Additional activities included a maritime safety and security forum with key maritime stakeholders; military-to-military training led by AFRICOM’s Marine Corps component; a strategic communications forum; medical outreach to local clinics conducted by a 20-person medical team, which reported seeing over 3,000 patients; several performances by the U.S. Sixth Fleet’s five-piece brass band; delivery of humanitarian assistance supplies; and several construction/refurbishing projects at local schools and clinics. Natural Fire 10 was an exercise led by U.S. Africa Command’s (AFRICOM) Army component to train U.S. forces and build the capacity of East African forces to provide humanitarian aid and disaster response. Natural Fire began under U.S. Central Command and was transferred to AFRICOM upon its establishment. Prior to 2009, three previous Natural Fire exercises had been carried out. Natural Fire 10, which was conducted in October 2009 at various sites in Uganda, focused on disaster response to an outbreak of pandemic influenza. AFRICOM officials told us that Natural Fire 10 included approximately 550 U.S. personnel and 650 participants from five East African countries: Burundi, Kenya, Tanzania, Rwanda, and Uganda. The exercise consisted of three parts: Field exercises: a 7-day military-to-military activity which included exercising forces on convoy and humanitarian civic assistance operations; weapons handling and helicopter familiarization; weapons fire; hand-to-hand combat; crowd and riot control; and entry control point and vehicle checkpoints. 
Tabletop: focused on strengthening the capacity of five East African militaries to prepare for and respond to a potential pandemic outbreak in their countries. The exercise consisted of 2 days of academic sessions, during which officials from various organizations gave presentations about pandemic preparedness and response. The academic sessions were followed by 2 days of pandemic scenarios for which participants were divided into three groups—civil authorities, military, and international community—to develop and act out their responses. Humanitarian civic assistance: included medical assistance events, dental assistance events, and engineering projects such as school and hospital reconstruction. In addition to the efforts by AFRICOM’s Army component, other components also contributed to Natural Fire 10. Specifically, the Navy component oversaw construction of the camp hosting the field exercise and led humanitarian civic assistance engineering projects. The Air Force component led the medical programs. The Marine Corps component supported weapons training during the field exercise. AFRICOM’s Horn of Africa task force oversaw photography and public affairs. Additionally, interagency partners and international organizations were involved in the tabletop portion of the exercise. For example, the U.S. Agency for International Development partnered with AFRICOM in developing the pandemic influenza focus for the tabletop activity, and international organizations such as the United Nations, World Health Organization, and International Red Cross led academic training sessions. In conducting our work, we reviewed a wide range of Department of Defense (DOD), command, and other guidance, including DOD strategies; U.S. Africa Command (AFRICOM) theater strategy, theater campaign plan, and 2009 and 2010 posture statements; and AFRICOM’s military service component and task force’s priorities, draft strategic plans (if available), and engagement plans. 
We met with AFRICOM officials in Stuttgart, Germany, in June 2009 and held follow-up meetings in December 2009. We also met with officials at the European headquarters of AFRICOM’s military service components (Army Africa, Naval Forces Africa, Air Force Africa, and Marine Corps Africa) and special operations command in June and July 2009. In July 2009 we also observed the main planning conference for the Africa Partnership Station, a maritime safety and security activity led by Navy Africa and sponsored by AFRICOM. We traveled to Uganda, Ethiopia, and Djibouti in October 2009 to observe U.S. military operations, interview officials at the Combined Joint Task Force–Horn of Africa, and meet with U.S. embassy officials. We chose to visit Uganda to observe the AFRICOM-sponsored, U.S. Army Africa–led Natural Fire 10 exercise, AFRICOM’s largest exercise in Africa for 2009; Ethiopia, due to its proximity to Djibouti and large amount of task force civil-affairs team activity proposals; and Djibouti, due to the location of the task force at Camp Lemonnier. As part of our review of AFRICOM’s task force, in January 2010 we observed and obtained documentation from an academic training and mission rehearsal exercise for incoming task force staff in Suffolk, Virginia. Additionally, we interviewed DOD officials at the Office of the Secretary of Defense, Joint Staff, and the Defense Security Cooperation Agency. We also reviewed non-DOD documents to determine how AFRICOM’s strategies compared or aligned with the strategies of other government partners, including the fiscal years 2007–2012 Department of State/U.S. Agency for International Development Joint Strategic Plan; USAID Strategic Framework for Africa; and fiscal year 2008, fiscal year 2009, and fiscal year 2010 mission strategic plans of 12 U.S. embassies in Africa. We interviewed officials at the Department of State (State), the U.S. 
Agency for International Development (USAID), and the Coast Guard to obtain other federal agencies’ perspectives on AFRICOM’s process of planning and implementing activities, including the command’s considerations of interagency perspectives. We spoke with officials from State and USAID due to their relationship with DOD in supporting U.S. foreign policy objectives, and we met with officials from the Coast Guard due to their relationship with AFRICOM in its maritime activities. We met with U.S. embassy officials in Uganda, Ethiopia, and Djibouti, and we contacted 20 additional embassies throughout Africa: Algeria, Botswana, Burundi, Chad, Comoros/Madagascar, Democratic Republic of Congo, Eritrea, Ghana, Kenya, Liberia, Mauritius/Seychelles, Morocco, Mozambique, Nigeria, Rwanda, Senegal, South Africa, Sudan, Tanzania, and Yemen. We chose to contact these specific embassies based on several factors including that they were in countries that coordinate with AFRICOM’s task force; their involvement with the two activities we observed in detail, Africa Partnership Station and Natural Fire 10 (see below); and their geographical dispersion to ensure that various regions were represented across Africa. When multiple countries met our criteria, we gave preference to U.S. embassies located in countries that were identified by DOD officials or in documents as important countries for AFRICOM. In addition, we met with an organization that represents U.S.-based international nongovernmental organizations that conduct work in Africa, as well as some African government and African military officials, to obtain their viewpoints on AFRICOM’s activities. We observed two AFRICOM activities in depth to complement our broader review of the command’s activities at the interagency and command levels. These two activities were: Africa Partnership Station (a maritime safety and security activity) and Natural Fire 10 (part of AFRICOM’s pandemic preparedness and response initiative). 
In choosing which of AFRICOM’s over 100 activities to review as illustrative examples, we first narrowed the activities to 30 main activities that support AFRICOM in achieving its theater strategic objectives, as identified by AFRICOM officials. We then chose to review the Africa Partnership Station and Natural Fire 10 activities due to factors such as their addressing of different theater security objectives, timeliness to our review, leadership by different military service components, considerable involvement of interagency and international partners, size of the activities, and distinct geographic locations. To review the Africa Partnership Station, we observed the activity’s main planning conference in New York, New York, in July 2009; reviewed documentation including reports and assessments; and spoke to officials at DOD, AFRICOM, U.S. Navy Africa, Coast Guard, State, and USAID, as well as nongovernmental organizations and African military officials. To review Natural Fire 10, we observed the Natural Fire 10 exercise in Uganda in October 2009; reviewed documentation including guidance, plans, reports, and assessments; and spoke to officials at DOD, AFRICOM, U.S. Army Africa, State, and USAID, as well as African military officials, about the activity. These two activities serve as examples, and information about them is not meant to be generalized to all AFRICOM activities. We supplemented our examination of the Africa Partnership Station and Natural Fire 10 with information on additional activities highlighted by AFRICOM, AFRICOM’s military service components and task force, DOD, State, and USAID officials during our review, as well as by two GAO reports that addressed AFRICOM activities: one that examined the Trans-Sahara Counterterrorism Partnership and its military component, Operation Enduring Freedom–Trans Sahara, and one that partially reviewed the Global Peace Operations Initiative and Africa Contingency Operations Training and Assistance activities. 
Operation Enduring Freedom–Trans Sahara is designed to strengthen the ability of regional governments to police large expanses of remote terrain in the Trans-Sahara. To examine AFRICOM’s strategic planning, we also reviewed documents including the Management portal; Force Allocation Decision Framework; Chairman, Joint Chiefs of Staff Instruction 7401.01E on the Combatant Commander’s Initiative Fund; and AFRICOM training presentations. We spoke with officials at AFRICOM, its military service components, special operations command, and task force about their respective strategic planning efforts. To examine AFRICOM’s assessment of activities, we reviewed a presentation of AFRICOM’s strategic assessment tool as well as activity assessment requirements in the command’s theater campaign plan and the task force’s draft regional engagement plan. We spoke with officials at DOD, AFRICOM, AFRICOM’s components, U.S. embassies, and other federal agencies to assess whether the command’s activities support AFRICOM’s mission and reflect the most effective use of resources. In examining funding for activities, we reviewed AFRICOM’s funding sources as well as the available funding for the Africa Partnership Station and Natural Fire 10 activities. We also reviewed a GAO report that examined the use of funds under the programs authorized in Sections 1206 and 1207 of the National Defense Authorization Act for Fiscal Year 2006. AFRICOM provided data on the funding amounts for its activities in fiscal year 2009, which were drawn from the Standard Army Finance Information System. We assessed the reliability of the finance information system through interviews with personnel responsible for maintaining and overseeing these data systems. Additionally, we assessed the quality control measures in place to ensure that the data are reliable for reporting purposes. We found the funding amount data reported by AFRICOM to be sufficiently reliable for the purposes of this review. 
To review efforts at interagency collaboration and building expertise, we examined agreements between AFRICOM and interagency partners, training guidance, and training programs. We spoke with interagency partners embedded at AFRICOM, at U.S. embassies in Africa, and at other federal agency offices. We conducted this performance audit from April 2009 through July 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Marie Mak, Assistant Director; Kathryn Bolduc; Alissa Czyz; Robert Heilman; Lonnie McAllister; James Michels; Steven Putansu; Jodie Sandel; Erin Smith; and Cheryl Weissman made major contributions to this report.

When the U.S. Africa Command (AFRICOM) became fully operational in 2008, it inherited well over 100 activities, missions, programs, and exercises from other Department of Defense (DOD) organizations. AFRICOM initially conducted these inherited activities with little change. However, as AFRICOM has matured, it has begun planning and prioritizing activities with its four military service components, special operations command, and task force. Some activities represent a shift from traditional warfighting, requiring collaboration with the Department of State, U.S. Agency for International Development, and other interagency partners. GAO’s prior work has identified critical steps and practices that help agencies to achieve success. For this report, GAO was asked to assess AFRICOM in five areas with respect to activity planning and implementation.
To do so, GAO analyzed DOD and AFRICOM guidance; observed portions of AFRICOM activities; interviewed officials in Europe and Africa; and obtained perspectives from interagency officials, including those at 22 U.S. embassies in Africa. AFRICOM has made progress in developing strategies and engaging interagency partners, and could advance DOD’s effort to strengthen the capacity of partner nations in Africa. However, AFRICOM still faces challenges in five areas related to activity planning and implementation. Overcoming these challenges would help AFRICOM with future planning, foster stability and security through improved relationships with African nations, and maximize its effect on the continent.

(1) Strategic Planning. AFRICOM has created overarching strategies and led planning meetings, but many specific plans to guide activities have not yet been finalized. For example, AFRICOM has developed a theater strategy and campaign plan but has not completed detailed plans to support its objectives. Also, some priorities of its military service components, special operations command, and task force overlap or differ from each other and from AFRICOM’s priorities. Completing plans will help AFRICOM determine whether priorities are aligned across the command and ensure that efforts are appropriate, complementary, and comprehensive.

(2) Measuring Effects. AFRICOM is generally not measuring long-term effects of activities. While some capacity-building activities appear to support its mission, federal officials expressed concern that others, such as sponsoring a news Web site in an African region sensitive to the military’s presence, may have unintended effects. Without assessing activities, AFRICOM lacks information to evaluate their effectiveness, make informed future planning decisions, and allocate resources.

(3) Applying Funds. Some AFRICOM staff have difficulty applying funding sources to activities.
DOD has stated that security assistance efforts are constrained by a patchwork of authorities. Limited understanding of various funding sources for activities has resulted in some delayed activities, funds potentially not being used effectively, and African participants being excluded from some activities.

(4) Interagency Collaboration. AFRICOM has been coordinating with partners from other federal agencies. As of June 2010, AFRICOM had embedded 27 interagency officials in its headquarters and had 17 offices at U.S. embassies in Africa. However, the command has not fully integrated interagency perspectives early in activity planning or leveraged some embedded interagency staff for their expertise.

(5) Building Expertise. AFRICOM staff have made some cultural missteps because they do not fully understand local African customs and may unintentionally burden embassies that must respond to AFRICOM’s requests for assistance with activities. Without greater knowledge of these issues, AFRICOM may continue to face difficulties maximizing resources with embassy personnel and building relations with African nations.

GAO recommends that AFRICOM complete its strategic plans, conduct long-term activity assessments, fully integrate interagency personnel into activity planning, and develop training to build staff expertise. DOD agreed with the recommendations.
GSA is the central management agency for acquiring real estate for federal agencies. According to a GSA policy official, GSA is responsible for managing the acquisition of about 40 percent of the federal government’s office space and 10 percent of all government space. Other agencies, such as DOD, have their own authority to acquire space. To acquire real estate, an agency must either go through GSA using GSA’s statutory authority, use its own statutory authority, or obtain delegated authority from GSA. If an agency goes through GSA, it must provide GSA with a “delineated area,” the geographic area where the agency wants to be located. GSA’s policy requires its staff to review each delineated area to confirm its compliance with all applicable laws and regulations. Once an agency has selected a delineated area, GSA, under CICA, is to acquire the site within the selected area through the use of full and open competitive procedures. If an agency acquires property independently of GSA using its own statutory authority, it is responsible for compliance with all relevant laws and regulations but is not subject to GSA regulations. In 1990, we were asked by Senator Kent Conrad to look at policies that guide civilian agencies in selecting facility locations and determine whether any changes in federal location policies were warranted. We reported that GSA needed to develop a more consistent and cost-conscious governmentwide location policy that required agencies, in meeting their needs, to maximize competition and select sites that offer the best overall value, considering such factors as real estate and labor costs. Since 1990, at least two matters raised in that report have remained unchanged. First, GSA has not developed for congressional consideration the cost-conscious and consistent governmentwide location policy that we recommended.
The second item that remains unchanged is that rents in the CBAs of federal regional cities and Washington, D.C., are generally higher than the rents in non-CBA sections of those same cities—an average of $4.03 per square foot higher in calendar year 1999, as shown in table 1. According to an April 2001 GSA congressional testimony, high rents for class A commercial space in San Francisco, CA, caused three federal agencies to move from leased space in San Francisco to leased space in Oakland, CA, where rates were 25 percent to 30 percent lower. One change that occurred since our 1990 report that affected the workplace is the surge in telecommunications services, including widespread access to the Internet. One result of telecommunications services is the practice of “telecommuting,” whereby employees can work from home or remote offices for all or part of their work week. Telecommuting increased significantly, rising from 4 million U.S. workers in 1992, according to the Department of Transportation, to 16.5 million in 2000, according to the International Telework Association and Council. Despite the continuing relatively higher cost of urban commercial rents, federal employment generally remains focused in Metropolitan Statistical Areas (MSAs), as shown in table 2. During fiscal years 1998 through 2000, agencies chose urban areas for about 72 percent of the 115 acquired federal sites in our survey and selected rural areas (those with a population of 25,000 or less) for about 28 percent of the sites. Agencies reported that mission was the primary factor used to determine the location for over one-half of the sites and that the mission dictated the need to be in close proximity to clients, other agency facilities, and related organizations. GSA conducted the acquisitions for 79 of the sites using GSA authority, and agencies using their own statutory authority conducted the acquisitions for the other 36 sites.
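The cost implication of that $4.03-per-square-foot differential scales directly with lease size. The back-of-the-envelope sketch below illustrates the point; the $4.03 figure is the 1999 average cited above, while the 100,000-square-foot lease size is a hypothetical assumption for illustration only, not a figure from our survey.

```python
# Rough annual cost of the CBA rent premium for a single hypothetical lease.
# premium_per_sq_ft comes from the report (1999 average, table 1);
# lease_size_sq_ft is an assumed, illustrative lease size.
premium_per_sq_ft = 4.03     # dollars per square foot per year
lease_size_sq_ft = 100_000   # hypothetical lease size

annual_premium = premium_per_sq_ft * lease_size_sq_ft
print(f"Added annual rent: ${annual_premium:,.0f}")  # Added annual rent: $403,000
```

At that scale, the differential alone amounts to roughly $400,000 per year for one lease, which is consistent with agencies’ decisions, such as the San Francisco-to-Oakland moves, to leave high-rent CBAs when mission permits.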
Agencies selecting urban sites reported that close proximity to other agency facilities and organizations contributed to cost savings resulting from less travel, more prompt on-site support, and ease in technology sharing. Other benefits reported for urban sites included the availability of a skilled labor pool and accessibility to public transportation for both employees and agency clients. Agencies that chose rural sites reported some similar benefits, stating that close proximity to related or support agency facilities and proximity to industries with which the agency is connected resulted in more efficient use of agency resources and less travel. Other benefits reported for rural sites included better building and data security and improved access to major transportation arteries. Officials reporting for about 66 percent of the sites either said no problems existed at the sites (45 percent), or they did not respond to the survey question (21 percent). For the remaining sites, agencies selecting urban areas reported problems such as lack of secure buildings, lack of expandable space, and high rental rates. Agencies selecting rural areas reported problems such as lack of infrastructure for high-speed telecommunications and a lack of access to public transportation. Functions performed at the sites varied, and some functions were performed in both urban and rural areas. Eighty-three (or 72 percent) of the sites in our survey were located in urban communities (areas with a population above 25,000), and 32 (or 28 percent) were located in rural areas (areas with a population of 25,000 or below). Most of the 115 site selections involved relocations within existing communities (56) or expansions of existing sites (14). As table 3 shows, the number of newly established locations (locations for agency functions for which the agency neither relocated nor expanded an existing site) was almost evenly distributed between rural and urban areas. 
Functions at the six rural sites selected for newly established locations included storage/inventory (mainly Census Bureau material for the 2000 Census), air traffic control, and law enforcement. The seven urban areas selected for the newly established locations included functions such as document archiving, passport production, law enforcement, and inspection of diseased plants near plant quarantine areas. As table 3 shows, of the 32 sites that were relocated from one community to another community, 18 were in urban areas and 14 were in rural areas. Among these relocated sites, law enforcement and administrative program management were the most prevalent functions; and the two functions were about evenly divided between urban and rural sites. However, the finance and accounting and research and development functions were found only at rural sites. Functions at the urban relocated sites included inspecting/auditing, tax administration, and aviation operations. An agency can use GSA to acquire property on its behalf or acquire the property independently, using either statutory authority or authority delegated by GSA. No major difference existed in the percentage of urban sites selected, regardless of whether the site decisions were made by agencies working with GSA or made independently of GSA. About 71 percent of the sites that GSA procured on behalf of agencies were in urban areas, and about 75 percent of the sites agencies selected independently of GSA were in urban areas. From a list of 12 factors (and an overall “others” category) in our survey, agencies reported that they considered numerous factors to determine the delineated area for the sites in our survey. As shown in table 4, agencies considered mission in making location decisions for 82 of the sites in our survey. 
The next most-cited factors were transportation efficiencies, considered for 46 sites, and particular space needs, such as specialized floor layouts, considered for 45 sites. In explaining why agency mission was the primary factor in site selections, agencies most often cited the need for the site to be in close proximity either to the mission service area, other agency facilities, other government agencies, or related private sector organizations. For example: The U.S. Customs Service (Customs) reported that it chose the delineated area for its international mail inspection function in Carson, CA, because the U.S. Postal Service (USPS) had relocated its international mail operations to Carson, CA, and Customs needed its inspection function to be near the international mail site. Customs also reported that its cybersmuggling center needed to be located within the concentration of private computer-based industries in Fairfax, VA. U.S. Attorney offices reported that their policy is to be within four blocks of federal courthouses because, as the principal litigators for the U.S. government, U.S. Attorneys need to be available for courtroom activities on a regular basis. U.S. Marshals Service (USMS) offices also reported that USMS offices need to be colocated with the courts because the agency’s primary concern is the safety and security of the judiciary, the judicial process, and its participants. The Department of Agriculture’s Animal and Plant Health Inspection Service (APHIS) reported that the delineated areas of its sites in our survey were selected because they needed to be in close proximity to diseased plant quarantine areas. The Federal Emergency Management Agency also reported that its Pasadena, CA, site in our survey needed to be in close proximity to a disaster area.
The Immigration and Naturalization Service reported that its national records center had to be located as close as possible to a National Archives and Records Administration (NARA) center in Lees Summit, MO, to reduce the costs associated with a high-volume records transfer to NARA. Agencies providing services to the public, such as the Social Security Administration (SSA), IRS, and the Department of Veterans Affairs, reported that the delineated areas for their offices/clinics were selected because the agencies needed to be as close as possible to the client/patient population that the agencies service. Agencies did not select rural areas for 83 of the 115 sites in our survey. For about 75 percent of the 83 sites, agencies cited mission requirements as the reason for not selecting rural areas. They again cited proximity considerations and said that locating the sites in rural areas would have placed them too far from their clients, other supporting agency facilities, related research facilities, or the function they had to monitor. For example, GSA’s Federal Technology Service reported that it did not consider a rural area for its site because the function needed to remain in the Washington, D.C., area to have access to its major customers and telecommunications providers. Other reasons for not selecting rural areas included the need to be near public transportation and rural areas’ lack of the necessary labor pool and sufficient space. Some respondents also said that rural areas can have high costs, such as for transportation to airports. The Bureau of Reclamation stated that a water resources management operation was not located in a rural area because of the unavailability of a building to meet space needs. The IRS said it did not place a telephone−based customer service site in a rural area because of the need for a large number of recruitment candidates who were available only in a more heavily populated area. 
Similarly, the SSA said that when one of its teleservicing centers needed additional space, it did not move the center to a rural area because of the difficulty of locating sufficient space and personnel in rural areas. In addition to agency mission, lower real estate cost was one of three main factors that contributed to the selection of the 32 rural sites. Respondents representing almost one-half of the rural sites identified (1) lower real estate costs; (2) particular space needs (e.g., specialized space for security reasons); and (3) transportation efficiencies, such as access to major arteries, as factors considered in the site selection decisions. In response to our survey’s request to list three chief benefits and problems, if any, associated with the selected location for sites in our survey, agencies reported numerous benefits and few problems for both urban and rural locations. The benefits of urban areas included close proximity to other agency resources, such as support facilities and related government agencies, and related private sector organizations. Agencies said proximity was a benefit because it contributed to more prompt on-site support, to cost savings from less travel and transportation of material over distances, and to easier technology sharing and daily interaction among related organizations. For example, both the Forest Service and APHIS reported that their sites’ close accessibility to universities allowed for sharing of advanced technologies and improved collaboration between the agency and university researchers. Also, the Department of Veterans Affairs reported that the urban location of one of its clinics was a benefit because it was in close proximity to the city’s medical center complex. Other benefits cited for sites in urban areas included the availability of a skilled labor force, the ability to use existing infrastructure, and the accessibility of public transportation for both employees and clients.
Agencies that located sites in rural areas reported some similar benefits, such as close proximity to related or support facilities, other program employees, and the industry to which the agency was connected. They said proximity resulted in more efficient use of agency resources and less travel. Other benefits reported for rural sites included improved building and data security and accessibility to major transportation arteries. Agencies reported they had no problems with about 45 percent of the sites, either urban or rural. They provided no response to this survey question for another 21 percent of the sites. For the remaining sites, agencies selecting urban areas reported problems such as lack of secure buildings and expandable space, traffic problems, high rental rates, and specific problems with buildings needing repairs. Agencies selecting rural areas reported problems such as a lack of proximity to other agency facilities and public transportation, great distance from major airports, and a lack of necessary infrastructure for telecommunications and city waste management services. As table 5 shows, agencies reported that the three most common functions located in rural areas included law enforcement, research and development, and supply storage and inventory control. Three functions, automated data processing, finance and accounting, and social services, were located only in rural areas. The three most common functions located in urban areas were law enforcement, administration of loans/grants/benefits and processing of applications and claims, and administration/program management. Law enforcement was the most prevalent function in both urban and rural areas, although it was more prevalent in urban areas. Also, although research and development and supply storage and inventory functions were more prevalent in rural areas, sometimes they were also located in urban areas. Several laws and executive orders affect the location of federal facilities. 
The laws, which take priority over the executive orders, include RDA, the primary law on rural siting; and CICA, a law governing federal acquisition generally. When considering areas in which to locate, RDA “directs the heads of all executive departments and agencies of the Government to establish and maintain departmental policies and procedures giving first priority to the location of new offices and other facilities in rural areas.” Any move by an agency to new office space in another location would be considered a new office or facility covered by RDA. Once agencies have selected their respective areas for possible locations, CICA generally requires that agencies obtain full and open competition for facilities acquisitions within the areas selected. The two primary executive orders on federal facility location decisions are Executive Order 12072 of August 16, 1978; and Executive Order 13006 of May 21, 1996. Executive Order 12072 specifies that when the agency mission and program requirements call for facilities to be located in urban areas, federal agencies must give first consideration to locating in a CBA and adjacent areas of similar character. Executive Order 13006 requires the federal government to utilize and maintain, wherever operationally appropriate and economically prudent, historic properties and districts, especially those located in the CBA. Agencies acquiring real estate are responsible for complying with federal laws and executive orders. If GSA is acquiring the real estate for an agency, then GSA regulations state that GSA is responsible for ensuring compliance with “all applicable laws, regulations, and Executive orders.” However, if the agency is making the acquisition under its independent statutory authority, or through a delegation from GSA, the agency is responsible for compliance with relevant laws and regulations. Some agencies also have been provided statutory authority to acquire real estate for different purposes.
Some agencies such as the Tennessee Valley Authority (TVA) have been provided broad authority. TVA is authorized to purchase or lease real property that it deems necessary or convenient in transacting its business. Other agencies’ statutory authority is for more limited purposes. For example, the Secretary of the Interior is authorized to lease buildings and associated property for use as part of the National Park System and the Secretary of the Treasury is authorized to lease space for the storage of unclaimed or other imported merchandise that the government is required to store. RDA states that executive departments and agencies must establish policies and procedures to give first priority to the location of new offices and other facilities in rural areas. However, among the 13 cabinet departments, only the departments of Agriculture (USDA), Commerce, Labor, Transportation, and the Treasury had written policies specifically addressing RDA. The other departments (Justice, Health and Human Services, the Interior, Housing and Urban Development, State, and Education) said they did not have policies on RDA; and two (Energy and Veterans Affairs) said they expect all employees to abide by all policies on facility acquisitions, but they also had no written policies regarding RDA. In addition, many agency real estate specialists in field offices also said either their agencies did not have RDA policies or they did not know if their agencies had such policies. Among the 113 sites for which we received responses, 61 sites involved agencies that did not have RDA policies, and 24 involved agencies that had policies. Respondents for 28 sites also said that they did not know if their agencies had RDA policies. Our survey also requested respondents to report which of the four applicable laws and executive orders were considered in the acquisition of the surveyed sites. 
Agencies reported that CICA was considered for 73 percent of the 113 sites for which a response was received; Executive Order 12072 (on locating in CBAs) was considered for 50 percent of the 113 sites for which a response was received; Executive Order 13006 (on historic districts) was considered for 43 percent of the 112 sites for which a response was received; and RDA was considered for about 27 percent of the 113 sites for which a response was received. Agencies reported that they considered RDA for 8 of the 36 sites that were acquired independently of GSA. Agencies also reported that RDA was considered for 21 of the 79 sites acquired by GSA. Conversely, for about 73 percent of 113 sites for which a response was received, respondents said they either did not use RDA in site acquisitions or did not know whether it was used. To determine if GSA was requiring agencies to apply RDA, we looked at GSA regulations and examined 33 GSA lease files. GSA regulations state that federal agencies using GSA are responsible for identifying their delineated areas, consistent with their missions and program requirements in accordance with applicable regulations and statutes, including RDA. The agencies must also submit to GSA a written statement explaining the basis for their delineated areas, and GSA is responsible for reviewing these delineated areas to confirm their compliance with laws and regulations. We looked at 33 files involving GSA leases made from 1989 through 2000 in 3 GSA regions: the Rocky Mountain Region, based in Denver; the Greater Southwest Region, based in Ft. Worth, TX; and the Mid-Atlantic Region, based in Philadelphia. We found no mention of RDA in any of the 33 acquisition files. In the files we examined, we did find cases where GSA requested modification of the delineated area in response to other criteria, such as CICA.
Additionally, a GSA official in the National Capital Region (NCR) provided us with a checklist of documents that are expected to be in each NCR lease file. Neither the 1999 checklist nor the 2000 update of that list mentioned RDA, although both mentioned Executive Orders 12072 and 13006. In addition to agencies’ limited consideration of RDA, the act’s definition of "rural" is unclear. RDA provides that rural areas, for the purpose of federal facilities location decisions, are defined in the private business enterprise exception in section 1926(a)(7) of title 7 of the U.S. Code. Prior to 1996, this exception in 7 U.S.C. § 1926(a)(7) defined rural as “all territory of a State that is not within the outer boundary of any city having a population of fifty thousand or more and its immediately adjacent urbanized and urbanizing areas with a population density of more than one hundred persons per square mile, as determined by the Secretary of Agriculture according to the latest decennial census of the United States: Provided, that special consideration for such loans and grants shall be given to areas other than cities having a population of more than twenty five thousand.” Government agencies have different definitions of what constitutes a rural area. For example, GSA uses two different population thresholds to define rural area for purposes of RDA. According to GSA Interim Rule D-1, a rural area is any area "that (i) is within a city or town if the city or town has a population of less than 10,000 or (ii) is not within the outer boundaries of a city or town if the city or town has a population of 50,000 or more and if the adjacent urbanized and urbanizing areas have a population density of more than 100 per square mile." Meanwhile, as table 6 shows, other federal agencies use other definitions of rural to implement various federal programs; and private organizations use other definitions as well. 
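The two-pronged structure of GSA’s Interim Rule D-1 definition can be made concrete with a small decision function. This is only one illustrative reading of the quoted rule, not GSA’s implementation: the function name and both inputs are hypothetical, and the sketch deliberately follows the rule’s literal text, whose interplay of the 10,000 and 50,000 thresholds is part of the definitional inconsistency discussed above.

```python
def is_rural_gsa_rule_d1(town_population, within_big_city_or_dense_fringe):
    """Illustrative reading of GSA Interim Rule D-1 (hypothetical names).

    town_population: population of the incorporated city or town containing
        the site, or None if the site is unincorporated.
    within_big_city_or_dense_fringe: True if the site lies within the outer
        boundaries of a city of 50,000 or more whose adjacent urbanized and
        urbanizing areas have a density above 100 persons per square mile.
    """
    # Prong (i): any site within a city or town of fewer than 10,000 people
    # is rural, even if that town sits inside a metropolitan fringe.
    if town_population is not None and town_population < 10_000:
        return True
    # Prong (ii): otherwise, a site is rural only if it falls outside every
    # large city and its dense adjacent fringe.
    return not within_big_city_or_dense_fringe
```

Under this reading, a site in an 8,000-person suburb of a major city is rural, while an unincorporated site just inside that city’s dense fringe is not; a 30,000-person freestanding town is rural under prong (ii) even though it fails prong (i). That different agencies resolve these edge cases with different thresholds, as table 6 shows, is precisely the problem the report identifies.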
According to our study and our consultant’s review, location factors considered important by the private sector for minimizing costs might benefit the public sector. These factors are (1) incentives offered by localities to attract new employers, such as free land; and (2) the lower real estate, labor, and operational costs available in some areas. The private sector cited these two factors as having influenced their location decisions more frequently than did the federal agencies in our survey. We recognize that federal agencies’ missions may sometimes preclude them from taking advantage of the savings represented by these factors. However, in instances where an agency has flexibility in locating a function, the agency may be able to take advantage of one or both of these factors, so long as they are not offset by other higher operational costs. According to our consultant, there are two broad steps involved in office location decisions made by the private sector. The first step is to determine whether a given location is functionally suited to achieve the purposes of the office that is to be located. The second step is to test the location for its ability to meet a range of factors that have been shown to be important in meeting required goals. Our consultant found that corporations strongly preferred urban locations over rural ones. The determining location factor for most companies, he said, derives from a specific location’s characteristics. The private sector considers numerous factors in making location decisions, and the relative importance of the factors appears to be company specific. However, our consultant’s literature search and survey of 52 private sector companies identified several factors as the main areas of consideration in the private sector location decisionmaking process. They were (1) transportation and logistics, (2) labor availability and cost, (3) real estate costs, and (4) business climate and business incentives. 
Federal agencies in our survey considered some of these factors but not others. Access to highways and major thoroughfares is important for employees who commute and essential to maintaining connections to companies’ suppliers and customers. When asked to rate the importance of transportation and logistics, 17 of the 52 respondents in our consultant’s survey gave it the highest rating for headquarters offices, and over one-half gave it the highest rating for satellite (field) offices. With the increasing globalization of markets, easy access to airports is also very important in corporate location decisions. Professional services, such as those of accountants and lawyers, also increasingly require access to airports, our consultant said. Transportation factors were also important to the public sector. In our survey of federal agency sites, officials for 40 percent of the sites said access to transportation, such as airports, trains, and highways, was an important factor in their location decisions. Examples cited by the agencies included easy access to airports for trainees from around the nation and access to highways for service centers. According to our consultant, the availability and cost of labor are among the most important location factors for the private sector. Most of the corporations responding to our consultant’s survey rated these among the top location factors considered when locating either their headquarters or field offices. Asked to rate the importance of “availability and cost of labor supply,” 30 of the 52 survey respondents gave it the highest rating for headquarters offices and 32 gave it the highest rating for satellite (field) offices. Our consultant emphasized that the availability of sufficient and qualified labor is crucial to any business location decision because, even in a low-wage area, the need to train a qualified workforce can wipe out savings from lower labor costs.
Our consultant also stated that while many small towns on the fringes of metropolitan areas have experienced rapid growth, their small populations suggest that they will remain small, which can be a liability in attracting companies. Labor costs include not only wage rates but also benefits, unemployment insurance, and workmen’s compensation requirements. Labor costs also include costs associated with recruitment and training and the competition for labor within the same area. Some companies try to avoid areas where they have to compete with major competitors for the same labor pool because of the possibility that skilled labor would be unavailable or that competitors would drive up the wage rate. According to our consultant, these factors are particularly important when considering rural areas, where workers with the needed educational qualifications are, in some cases, not so readily available. Availability and cost of labor were not location factors considered by federal agencies for most of the sites in our survey. Respondents reported that they considered personnel costs, including lower labor costs and recruitment and retention costs, in location decisions for no more than 23 of the 115 sites in our survey. Whether the federal government can adopt the private sector’s practice in this area is open to question because, as previously mentioned, the primary location factor cited by federal agencies for the sites in our survey was agency mission. If agency mission dictates where most of the federal facilities have to be located, specifically in close proximity to clients, then the agency may have little flexibility to realize cost savings from low-wage areas. Since at least 1990, real estate costs have consistently ranked among the top 10 factors influencing private sector location decisions. Real estate costs include direct costs (i.e., land, building, and occupancy costs such as rent and utilities) and indirect costs (costs such as shipping, transportation, and storage). 
When direct costs are higher in one community than another, the addition of lower indirect costs may result in a lower overall real estate cost in the community with higher direct costs. When asked to rate the importance of real estate costs, 31 percent of the corporate survey respondents gave this factor the highest rating for corporate headquarters offices, and 63 percent did the same for corporate field offices. Real estate cost was cited less frequently by federal agencies in our survey than by the private sector in making site location decisions. Agencies reported that for about 22 percent of the sites in our survey, lower real estate cost was a factor in the decision. If the public sector adopted this private sector practice, specifically for functions where agencies have flexibility as to where they may be located, potential savings could be offered by lower real estate costs, so long as the savings are not offset by other higher operating costs. The business climate of an area, including its business incentives, is a cost factor highly important to the private sector. When asked to rate the importance of an area’s business climate, 54 percent of respondents in our consultant’s survey gave it the highest rating. Business climate factors include the general business potential and receptivity of a community to the corporate purpose. This includes the community’s economic health, its organization and preparedness for growth, and the capacity of the community to support future growth of the locating company. For example, zoning issues are critical. Similarly, business incentives, such as tax abatements, free land, or infrastructure improvements offered by a community, are indicative of the business climate and are highly important location decision factors. According to our consultant, while incentives do not replace the need for a company’s location to make good business sense, incentives become a means of distinguishing among otherwise acceptable alternatives. 
Business incentives were mentioned by only 2 agencies in our survey of 115 sites. USDA said it chose a site at a local university for wildlife research because Colorado State University made land available at no cost. In return, USDA agreed to work in cooperation with university students on wildlife research. The Environmental Protection Agency (EPA) said when the lease expired for its regional office in Kansas City, KS, it relocated to another site within the city because the city provided free land. EPA reported that this was a “win-win” situation for both the agency and the city because the agency saved on its real estate costs, and the office benefited the economically depressed area where it is located. The limited use of local incentives by agencies in our survey contrasts with the emphasis the private sector places on incentives to save costs. For example, functions that need not be in proximity to the public or other facilities—such as training, data processing, document distribution, or telephone-based servicing—have the potential to take advantage of local incentives. While federal agencies cannot take advantage of all local incentives, such as tax relief, they might make use of other local incentives. For example, our 1990 report, referred to earlier, noted that the Bureau of Engraving and Printing chose a site in Ft. Worth, TX, in the late 1980s in part because of incentives offered by the locality. The incentives included the donation of 100 acres of land and construction of a building, a total package valued at between $12.5 million and $15 million. Public policy may have an impact on the extent to which federal agencies may seek incentives provided by local communities. The public sector sometimes seeks to provide economic assistance to certain areas rather than considering how the government may benefit from an area. 
RDA and executive orders, such as Executive Order 12072, promote locating federal agencies in rural areas or in the central business districts of urban areas to foster their economic development. In contrast, according to our consultant, the private sector takes the opposite approach, considering how the corporation may benefit from the community. The government lacks a cohesive overall location policy that requires agencies to consider costs when initially deciding whether to locate a site in a rural or an urban area. In 1990, we reported that the government was not as cognizant of cost considerations as was the private sector and that government policy was aimed at improving the economic development of either rural areas through RDA or urban areas through the executive orders. We recommended that GSA develop for congressional consideration a more consistent and cost-conscious governmentwide location policy that considers such factors as maximizing competition and taking advantage of lower real estate and labor costs in order for the government to lower its acquisition and operating costs. In 1991, GSA issued a temporary regulation requiring agencies to consider the availability of local labor pools, pay differentials for employees, local incentives offered, and real estate costs for prospectus-level projects whose missions did not dictate a geographic area. However, the requirement was eliminated in 1997 when GSA revised its location regulations. GSA officials could not explain why the requirement was deleted when we asked them in May 2001. On the basis of our latest survey of 115 sites, we found that little consideration in the site acquisition process was given to the differing costs of alternative areas. 
Besides finding little interest in lower real estate costs or in the use of certain local incentives, our survey also found that only 15 sites were acquired using cost analyses of alternative geographic areas, which compared costs of different areas in which a site could ultimately be selected. However, no regulation presently calls for such an analysis. In 1990, we reported that GSA, as a central management agency, had not provided leadership to assist agencies in implementing and complying with RDA. We noted that GSA had not assisted agencies in developing procedures and guidelines to implement the various location policies. Therefore, we recommended in our 1990 report that GSA develop for congressional consideration a more consistent governmentwide location policy. In our recent survey of agency sites, we found that most agencies, whether they obtained the sites independently or through GSA, did not use RDA or did not know whether it was used in choosing their sites. On the basis of our survey of 115 federal facilities, the report of our consultant, and our interviews with high-ranking officials in human resources and information technology at 13 cabinet agencies, we were able to identify several federal functions that could be performed in rural areas. These included printing, archiving, accounting and finance, training, passport application processing, automatic data processing, research and development, storage, and law enforcement. Our consultant identified 21 functions that the private sector might locate in rural areas, as shown in table 7. According to our consultant, these functions lend themselves to being performed in rural areas because (1) some of them do not demand the large, technically sophisticated labor pool often found in urban areas; (2) some functions may be performed in a location remote from the principal office’s day-to-day operations; and (3) some support functions can be performed by telephone. 
He also emphasized that rural areas are sometimes suitable for functions where security is important, such as research and development and law enforcement activities. We reviewed the 21 functions to see if they were represented in the federal sector and whether any of the federal agencies we contacted identified them as being found in rural areas or as having potential for rural areas. Nine of the 21 functions met these criteria. They were (1) accounting, (2) distribution and warehousing, (3) education and training, (4) enforcement and quality control, (5) printing and publishing, (6) records archiving, (7) data processing, (8) scientific studies and research and development, and (9) telemarketing/teleservicing. Table 8 shows, for the nine selected functions, the potential benefits and challenges that would result from situating the function in a rural area. Some of the federal functions in our survey were more often located at rural sites than at urban ones. These were automated data processing, finance and accounting, social services, research and development, and storage and inventory. Special space needs and low real estate cost were key factors for research and development and storage sites. The survey also asked respondents to pick 1 or more of 12 named reasons why they had chosen their locations. Survey responses regarding research and development and storage/inventory facilities that were in rural areas pointed to two factors: low real estate costs and unique space needs. For instance, officials representing four of the eight research and development sites cited their unique space needs as a reason for their sites’ selection. Of these sites, three were in rural areas. Officials representing five of the eight storage/inventory facilities also gave this reason, and four were in rural areas. Similarly, all three of the research and development sites for which officials cited low real estate costs as a reason for their site choices were located in rural areas. 
All five of the storage and inventory sites for which respondents cited low real estate costs as a factor were in rural areas. Information from cabinet agency officials showed that functions that had been decentralized within the agency were more likely to be found in rural areas than were centralized functions. We asked these officials, who worked in information technology or human resources, about five functions—printing, training, personnel benefits administration, procurement, and finance/payroll—and whether these or other functions could be relocated to rural areas. Officials from 11 of the 13 cabinet agencies said they had decentralized 1 or more of the 5 functions by placing them in regional or even local operating units, including those in rural areas. For instance, USDA said its training and procurement functions were decentralized to local offices, which are “in a majority of rural counties.” The Interior Department said that, except for finance and payroll, the other functions were decentralized “to the installation level,” and it has hundreds of rural installations. Four agencies reported that they had placed training in rural areas, and one, the Department of Energy, said it also had decentralized the procurement and personnel functions to local offices, half of which are in nonurban areas. However, if an administrative function was centralized, it was more likely to be in an urban area. For instance, of the seven agencies that said they had centralized payroll, five said they located that function in cities, including New Orleans. The remaining two agencies placed the function in suburbs. The five agencies that centralized printing said they were doing it at an urban location—Washington, D.C. One agency that centralized its training and benefits administration said it had achieved economies of scale that it feared would be lost if any part of that centralized operation was relocated to a rural area. 
At least six agencies represented in these interviews identified one or more problems with rural areas. One official cited difficulty in recruiting minority employees because some rural areas tend to lack minorities. Such areas, this official said, may also require significant cultural adjustments for minorities, and minority employees may not wish to relocate to these areas. Other officials cited cost concerns. Officials for five agencies, for instance, said rural areas can involve personnel-related costs, such as the cost of relocating employees or of recruiting and training replacement workers. Officials from three agencies also expressed concern over the relatively higher cost of travel to rural areas, with one asserting that this made such areas poor choices for training sites. Three agencies also raised concerns about facility costs, stating that the lack of available office space in rural areas would force them to build new facilities and lose agency infrastructure investments at current locations. Three agency officials also told us that their urban operations were in those areas because of factors intrinsic to urban areas, such as the availability of public transportation and proximity to the operations of other agencies or private sector organizations. The full impact of telecommunications advancements on office location decisions is still uncertain. A widespread notion is that telecommunications advances have made the use of rural areas more viable. However, of the 11 cabinet agencies that discussed the benefits and drawbacks of rural telecommunications, only 2 agencies said telecommunications advances had made rural locations more viable. The other nine agencies expressed concern about telecommunications service in rural areas, with five saying that sophisticated telecommunications services are not always available or can be costly when they are available. 
Three agencies also said telecommunications is of less importance to siting decisions than other factors, and one of these expressed concern that rural telecommunications networks are inherently less secure than urban ones. On a positive note, five agencies saw telecommunications benefiting employees by, for instance, allowing benefits data and training to be offered on-line or by allowing employees to work from home or from the sites where they are conducting inspections. The private sector offered similar views. According to our consultant, although telecommunications is an increasingly important factor in location decisionmaking, its full impact has not become clear. Advanced telecommunications services are touted as leveling the playing field between small towns and metropolitan areas; however, broadband (high-speed) telecommunications facilities are not available in all areas, as noted by our consultant. He also emphasized that many small towns and rural areas lack the capital and infrastructure to facilitate these broadband services. Since our 1990 report on this issue, federal agencies have continued to locate, for the most part, in higher-cost urban areas. Eight of the 13 cabinet agencies surveyed had no formal RDA siting policy, and there was little evidence that agencies considered RDA’s requirements when siting new federal facilities. Further, GSA has not developed for congressional consideration a cost-conscious, governmentwide location policy, as we recommended in 1990. In our survey, the sites that involved relocated operations still largely remained in urban areas, while the sites that involved newly established operations were more evenly spread over rural and urban areas. Federal agencies’ mission requirements, such as the need to be near clients or other organizations, apparently have led them to select urban areas. Other factors that led them to select urban areas are the availability of public transportation and particular space needs. 
A major factor that influenced private sector site selection for urban areas was the availability and cost of skilled labor. Other private sector factors included real estate cost, access to transportation, and business incentives. In choosing the geographic area for a facility, the private sector more often cited cost considerations and incentives offered by states and local areas than did the federal agencies in our survey. Several government functions, such as research and development, data processing, accounting and finance, and teleservice centers, can be located in rural areas. Although it is not clear from the information we collected whether any of the federal agencies that located sites in urban areas could have located them in rural areas, it is clear that RDA has not had the influence on federal siting practices that the Congress appears to have intended when RDA was enacted. Many agencies had no RDA policy, as required by the act, and many agency personnel in our survey either did not consider RDA or did not know whether the act was used in making their site selection decisions. Even if agencies had RDA policies and agency personnel were aware of and considered them, certain constraints would still exist that impede efforts to locate in rural areas, such as inadequate infrastructure for high-speed telecommunications, limited public transportation, and a limited labor force. In the future, some of these constraints may be mitigated for a number of rural areas, but for the federal government to cost effectively consider rural as well as urban areas, we believe the following must occur: The government needs to have a cohesive, governmentwide site location policy that considers costs to the government as well as the goal of enhancing the socioeconomic status of urban areas and rural areas. 
We do not believe that the public policy objectives of assisting either urban or rural areas preclude agencies from considering other factors, such as the availability and cost of labor, real estate costs, operational costs, and certain local incentives, in a way that still allows them to fully and effectively achieve their missions. In fact, a more cost-conscious federal siting policy may even increase agencies’ consideration of rural areas, since rural areas may have lower overall costs. However, we also recognize that in making siting decisions, the agency’s ability to achieve its mission can be a more important consideration than costs. Federal agencies need to have clearly stated and documented policies on site location that conform to governmentwide policy, including RDA; and GSA and other agencies need to document their consideration of RDA to ensure consistent policy application. As a central management agency, GSA could require any agency subject to its authority to do this. Federal agencies need one clear definition of “rural area” for the purposes of implementing facility siting under RDA. We suggest that Congress consider (1) enacting legislation to require agencies to consider, along with their missions and program requirements, real estate, labor, and other operational costs and applicable local incentives when deciding whether to relocate or establish a new site in a rural area or urban area, and (2) amending RDA to clarify the definition of “rural area” for facility siting purposes to facilitate its implementation. 
We recommend that the Administrator of GSA, in GSA’s role as the federal government’s central property management agency, revise its guidance on federal facility siting to (1) advise customer agencies that they should consider, along with their missions and program requirements, real estate, labor, and other operational costs and applicable local incentives when deciding whether to relocate or establish a new site in a rural or urban area; (2) require that each federal agency subject to GSA’s authority provide a written statement to GSA demonstrating that, in selecting a new facility location, the agency, as required by RDA, had given first priority to locating in a rural area, and if a rural area was not selected, the agency’s justification for the decision; and (3) define the term “rural area” to provide its customer agencies with a single definition for purposes of federal siting under RDA, until the Congress amends RDA to define the term. We provided copies of a draft of this report for comment to the heads of 21 federal agencies. The agencies included both the agencies in our survey and departmental agencies from which we obtained additional site location information. We received written comments from 14 of the agencies and oral comments from 7 of the agencies. Seventeen of the agencies responded that they either had no comments on the draft report, agreed with the information in the report, or suggested technical changes, which we considered and incorporated within this report where appropriate. The remaining four agencies provided more extensive comments, which are discussed below. The GSA Administrator provided written comments dated July 16, 2001, which are reprinted in appendix VII. 
The Administrator stated that references in our report to GSA as the government’s central real property management agency were somewhat misleading, since GSA administers only about 10 percent of the total federal real property inventory and, therefore, GSA has no authority to establish governmentwide policy. However, we note that GSA’s mission statement identifies it as one of three central management agencies in the federal government. According to GSA, its inventory includes 40 percent of all federal office space, which is occupied by 1 million civilian federal employees, approximately half of the total federal civilian workforce. Thus, GSA’s policies would affect almost half of the federal government’s civilian office space, the type of space that was included in our survey. We agreed with the Administrator’s statement in his comments that agencies acquiring property independently of GSA are not subject to GSA regulations, and we have revised this report accordingly. The Administrator also said that our 1990 report, which we referred to in our draft report, called for GSA to develop a governmentwide location policy, and he added that GSA could not have done so since it lacked the authority. Our 1990 report did not call on GSA to develop this policy under its authority, but instead recommended that GSA propose a policy to Congress as a matter for consideration. The Administrator also said GSA had no mechanism for implementing a governmentwide leadership role in 1990, while that might be possible now through its Office of Governmentwide Policy. As previously mentioned, we recommended in 1990 that GSA develop such a policy for congressional consideration. The Administrator also said our draft report implied that GSA selected the geographic area for agencies’ site acquisitions. We did not intend that implication, and we have revised this report to clarify that issue. 
In addition, the Administrator pointed to GSA’s efforts to make its customer agencies aware of RDA requirements. In our report, we noted GSA’s regulations require RDA compliance by customer agencies. Nonetheless, RDA was not often used in the site acquisitions we surveyed, and some agencies said they were not aware of RDA requirements. The Administrator also responded to our recommendation that GSA require written statements from each customer agency demonstrating that the agency had given first priority to locating in a rural area and, if a rural area was not selected, provide a justification for the decision to GSA. He agreed to require a written statement from customer agencies regarding use of RDA in site acquisitions. However, he did not agree to require a justification because he said this would put GSA in the position of second-guessing the agencies, which he believes have the authority to decide where to locate their facilities. While we agree with GSA on the latter point, we remain convinced that a justification is needed to help document that agencies gave first priority to rural areas when they did not choose a rural area. We are not recommending that GSA be required to evaluate these justifications. The Administrator also responded to our recommendation that GSA define “rural area” to provide agencies with a single definition for the purpose of federal siting under RDA until the Congress amends RDA to define the term. He said GSA will develop a definition for use by its customer agencies, but it has no authority to establish a definition for all federal agencies. We clarified our report to reflect GSA’s authority to develop a definition only for its customers. GSA did agree, however, to issue a bulletin to make other agencies aware of this definition. We believe that GSA’s definition should be useful to other agencies until Congress amends RDA to set forth a statutory definition. 
We also received written comments from the Department of the Interior’s Acting Assistant Secretary of Policy, Management and Budget dated July 3, 2001, which are reprinted in appendix VIII. The Acting Assistant Secretary responded that the agency generally agreed with the findings and agreed in part with the matters for congressional consideration and the recommendations for executive action. Our report suggested that Congress consider enacting legislation to require agencies to consider certain costs along with agency mission when deciding whether to locate a site in a rural or urban area. He responded that our suggestion should be limited to the establishment of new offices because agencies have different considerations, for example, relocation costs, when expanding operations at an existing location, as compared to establishing a new office. We did not intend our recommendation to apply to situations in which an agency expands an operation at an existing site that does not involve a relocation or establishment of a new site. We clarified our recommendation in this regard. The Acting Assistant Secretary also commented that our recommendation that GSA require customer agencies to provide a written statement to GSA demonstrating that the agency had given first priority to a rural area should (1) be required only if the agency does not select a rural area, (2) be limited to a minimum dollar threshold that would exempt certain locations from the documentation requirement, and (3) exempt operations that are being expanded in the same local area. Our recommendation, as noted in the draft report, states that all site decisions should include a written statement to GSA and that a justification should be provided only if a rural area was not selected. Although we agree that the establishment of a minimum dollar threshold may be reasonable conceptually, we note that RDA does not include a dollar threshold for application of the act’s requirements. 
We also believe that expansions of existing operations should be subject to this requirement if they might involve a relocation. The two Department of the Treasury components in our survey also provided written comments. We received comments from IRS’ Director of the Office of Real Estate and Facilities Management dated June 26, 2001. IRS’s comments on this report covered four areas: (1) the use of RDA, (2) compliance with RDA requirements, (3) agencies’ ability to consider costs when selecting new sites, and (4) technical considerations. With respect to the first point, the use of RDA, IRS said that, first, RDA’s encouragement of locating in rural areas needs to be balanced against other legal requirements that sometimes contradict RDA requirements, such as those of CICA, OMB and congressional budget requirements and limitations, and short-term and long-term cost considerations; second, a “rural area” should be defined in a way that achieves the intent of RDA and be based on terms other than population alone; and third, if Congress supports a location policy that is economically rather than socially based, then Congress should repeal RDA and replace it with legislation that would require agencies to meet specific threshold terms specified in the legislation. We agree that agencies need to consider a variety of legal requirements, as well as costs, when selecting a new site for their facilities. However, the statutory requirement imposed by RDA must be given priority. We also believe that if Congress defines “rural area” for purposes of RDA, it may want to consider factors in addition to population. By stating in our report that cost factors should be considered in the location process, we are not suggesting that Congress enact a location policy based solely on economics. Rather, we are saying that cost should be one of the factors considered in the decisionmaking process. 
With respect to its second point, on compliance, IRS said that agencies selecting their own sites without GSA assistance are to be held directly accountable for compliance with RDA and, therefore, GSA should not be required to evaluate or enforce compliance with the RDA. Additionally, IRS said that if an agency is using GSA to acquire a site, a simple statement that the agency considered the RDA should be sufficient. We recommended that GSA require a written statement only for federal agencies subject to its authority. We are not recommending that GSA enforce compliance with RDA for agencies that have and use their own authority to acquire space. In those cases where GSA acquires space for other agencies, we believe that providing GSA with a justification for a site selection that includes the reasons for not choosing a rural area under the RDA will help document that the agency gave consideration to RDA. Regarding its third point, IRS said that, in considering costs, most agencies have no means to assess project costs, such as real estate or labor costs, across geographic areas. The agency added that market data on rural areas are not readily available or readily accessible to compare them with alternative geographic areas. We believe that GSA and OPM can provide much of the information needed to do cost analyses. Furthermore, private sector companies are able to perform such analyses and to obtain the data needed for them. Finally, several agencies in our survey said their site selection process included cost analyses of alternative geographic areas as well as cost analysis of sites within a geographic area. Regarding its fourth point, the technical comments, IRS thought we should make distinctions between leased occupancies and new federal construction because of the greater time commitment for continued occupancy in new construction. We do not agree with IRS on this point. 
RDA does not distinguish between leased and owned space, and in our view, it is as important to consider costs and other factors regardless of whether space is leased or owned, particularly considering that many leases are for long time periods. On July 3, 2001, we received written comments from the U.S. Customs’ Director, Office of Planning, which are reprinted in appendix IX. The Director responded that Customs concurred with the report’s recommendation to GSA to revise its guidance on federal facilities and stated that information in the report about Customs’ facility acquisition process and factors used by Customs to select the sites in our survey was correct. He also stated that when Customs acquires property under its existing statutory authority, it utilizes the same process as GSA; and, although not mentioned in our draft report, Customs applies GSA’s basic policy to house agencies in existing federally owned and leased space before acquiring additional space. The Director of Planning also stated that many of Customs’ facilities are unique because the operation requires proximity to the border, an airport, or a seaport, and difficulties sometimes arise in complying with RDA and the pertinent executive orders because many of the land border crossings, airports, and seaports are not located in the central business area of either a rural area or an urban area. We agree and acknowledged in this report that agency mission requirements primarily dictated the location of the sites in our survey. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 21 days after its issue date. 
At that time, we will send copies of this report to the Chairman and Ranking Minority Member, Senate Committee on Governmental Affairs; the Chairman and Ranking Minority Member, House Committee on Government Reform; the Chairman and Ranking Minority Member, Senate Committee on Environment and Public Works; the Chairman and Ranking Minority Member, House Committee on Transportation and Infrastructure; the Chairman and Ranking Minority Member, Senate Committee on Agriculture, Nutrition and Forestry; the Chairman and Ranking Minority Member, House Committee on Agriculture; the House and Senate Appropriations Committees; Representative Ernest J. Istook; the Director of the Office of Management and Budget; and the Administrator of GSA. We will also make copies available to others upon request. If you have any questions about this report, please call me on (202) 512-8387. Key contributors to this report are acknowledged in appendix X. Our objectives were to determine (1) what executive branch civilian non-Department of Defense (DOD) functions have recently selected urban locations other than Washington, D.C., and the federal cities, compared to rural locations, and what factors, benefits, and problems were associated with such site selections; (2) what federal laws and policies govern facility location and to what extent agencies have implemented this guidance; (3) what lessons can be learned from private sector site selections; and (4) what functions lend themselves to being located in rural areas. To address the first objective, we looked at (1) sites selected by the General Services Administration (GSA), the government agency that has authority to acquire space on behalf of executive branch agencies, and (2) sites selected by executive branch agencies using their independent statutory authority. 
We chose to look at sites acquired independently of GSA to determine whether agencies, when acting independently, engaged in practices that were different from those of agencies that used GSA for their acquisitions. We looked at those sites that were acquired from fiscal years 1998 to 2000. In establishing an appropriate site size to study, we wanted to choose sites that were large enough to have some economic impact on the community in which they were located, that were sufficient in number to provide useful information, and for which sufficient information was available. Accordingly, we decided to consider only those sites with space of 25,000 square feet or more. Regarding manageability, GSA advised us that spaces of this size were small enough that they would be found on GSA’s inventory in all of its 11 regions. Concerning economic impact, GSA advised us that spaces of 25,000 square feet or more would tend to be associated with a relatively larger number of employees than spaces of less than 25,000 square feet and would consequently have a greater economic impact. Finally, in considering the availability of information, we discovered that if a space has 25,000 or more square feet, the agency requesting that site can officially appeal any GSA revision of the delineated area in which that agency wishes to search for a site. As a result, we thought the appeals process would make information on such sites more readily available. We selected fiscal years 1998 through 2000 to obtain the most recent complete data available. As agreed with your office, we excluded Washington, D.C., and the 10 agency regional cities because of your request to see site acquisitions made outside of those cities. We focused exclusively on new sites, rather than locations where leases had been renewed. 
In addition, we excluded spaces acquired by the judicial and legislative branches of the federal government because these branches are not subject to the Rural Development Act (RDA), which is applicable to executive departments and agencies. We also excluded sites acquired by DOD because DOD informed us that it has so much vacant space available at its bases nationally that it has no choice but to consider its existing vacant space when locating new or existing operations. We excluded the sites acquired by the United States Postal Service (USPS) because USPS advised us that it had little or no discretion in deciding where to locate most of its facilities, in that they needed to be in specific locations to serve customers or near airports. In addition, the Postal Reorganization Act of 1970 exempts USPS from federal laws relating to contracts and property. Further, USPS has authority to acquire space independently of GSA. GSA provided us with a list of 166 sites it had recently acquired for agencies. After excluding sites on the basis of the previously discussed criteria, the total number of GSA-acquired qualifying sites was reduced to 81, representing 29 agencies. We did not independently verify the completeness or accuracy of the site data provided by GSA. GSA also provided us with a list of 52 agencies, including cabinet departments and their components, that have some level of statutory authority to acquire space independently of GSA. After excluding agencies from the list on the basis of the previously discussed criteria, we reduced the total number of agencies to 33. We subsequently contacted the 33 agencies, asking each whether it had, independently of GSA, used its statutory authority to acquire, during fiscal years 1998 through 2000, sites that met our criteria. All 33 agencies responded, and 12 agencies identified 37 sites meeting these criteria. Of the 12 agencies, 5 were not among the 29 agencies represented by the 81 sites GSA helped agencies to acquire. 
Therefore, our total universe was 118 sites (81+37) represented by 34 agencies (29+5). Using a 28-question, mail-out survey form, we surveyed agency officials at the 118 sites. As of May 3, 2001, we had received responses for 115 of the 118 sites, for a response rate of 97.5 percent. The practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted by the survey respondents could introduce unwanted variability in the survey’s results. We took steps in the development of the questionnaire, the data collection, and the data editing and analysis to minimize nonsampling errors. These steps included pretesting the questionnaire with officials of the Department of Veterans Affairs and the National Institutes of Health, prompting potential respondents in order to increase our survey’s response rate, and editing the questionnaires for completeness and accuracy. To determine whether any of the sites were in rural areas, we reviewed RDA to obtain a definition for rural. However, RDA’s definition for rural was unclear, and we found that applying it would be impractical. For the purpose of locating federal facilities, RDA states that rural areas shall be defined as those areas identified by the private business enterprise exception in 7 U.S.C. § 1926(a)(7). Prior to 1996, the private business enterprise exception in 7 U.S.C. § 1926(a)(7) defined rural areas as including all territory of a state that is not within the outer boundary of any city having a population of 50,000 or more and its immediately adjacent urbanized and urbanizing areas with a population density of more than 100 persons per square mile, as determined by the Secretary of Agriculture, according to the latest decennial census of the United States. In 1996, 7 U.S.C. § 1926(a)(7) was amended and no longer includes the private business enterprise exception. 
Therefore, the appropriate definition of rural area under RDA is unclear. Furthermore, we identified two problems with the pre-1996 definition. First, determining the population density for communities adjacent to these federal sites was not feasible within the scope of this job. Second, the term “outer boundary” in this definition lacks specificity. The current definition of rural in 7 U.S.C. § 1926(a)(7) is for purposes of water and waste disposal grants and loans and defines rural as a city, town, or unincorporated area that has a population of no more than 10,000 inhabitants. We are not certain that this is the appropriate definition since it refers to water and waste disposal grants and not the private business enterprise exception. The prior definition, which was eliminated in 1996, used a population threshold of 50,000 and included a population density requirement. Population density data were not readily available; therefore, it was not feasible for us to use this definition. For this survey, we chose a threshold of 25,000 or less because it was used to define rural areas by several other federal agencies and private sector organizations that we identified. When we applied this population threshold of 25,000 to the sites on the list of 81 GSA-acquired federal sites, we determined that 23 were located in rural communities; and of the 37 sites that agencies acquired independently of GSA, 9 were located in rural communities. Thus, our survey included a total of 32 rural sites. We note that 26 of the “rural” sites in our survey that fall within the 25,000 population threshold were actually located in metropolitan statistical areas in which large cities are located. To address the second objective, which concerned federal laws and policies that affect the selection of sites, we reviewed federal laws, executive orders, and policies that relate to the location of federal facilities. 
We also conducted interviews with officials of GSA’s Office of Governmentwide Policy, the chief realty officers of 13 of the 14 cabinet agencies, and an Office of Personnel Management official on federal employee compensation and relocation benefits. Furthermore, we asked survey respondents to identify whether they had applied the relevant laws and policies when making a site acquisition. We also examined GSA lease files created between 1989 and 2000 in three GSA regions—the Rocky Mountain Region in Denver, CO; the Greater Southwest Region in Ft. Worth, TX; and the Mid-Atlantic Region in Philadelphia, PA—where we were already conducting an examination of GSA files for another assignment. We examined the files for documentation regarding application of RDA. However, we did not attempt to verify whether GSA or other agencies were in compliance with RDA. To address our third objective, we contracted with a private sector consultant to (1) perform a literature search, interview experts in corporate real estate consulting, and survey corporations that had made recent site selection decisions; (2) determine the factors and criteria the private sector uses to select urban, suburban, or rural office locations; (3) identify types of office functions (such as claims processing) that lend themselves to being performed in more rural areas; (4) identify, to the extent possible, similar federal functions; and (5) identify and explain how technological advances in the last decade have reduced the disadvantages previously associated with rural areas and what impact U.S. economic changes have had on facility location decisions. Our consultant reviewed relevant professional literature, surveyed a judgmental sample of private sector firms, and analyzed selected economic data for indicators of private sector location practices. 
Our consultant’s results are not statistically representative of private sector location practices because of the following factors: (1) a judgmental sample rather than a random sample was used, (2) only 17 percent of those surveyed responded, and (3) there was no evidence that respondents were distributed across industry type and geographic region in proportions matching those of the population of the 1,000 largest U.S. companies. Our consultant also did not empirically determine whether the same factors that influence private sector location decisions are applicable to location decisions of federal facilities. Information obtained from our consultant was still very useful for our review because it included data from survey respondents as well as an extensive literature search on factors involved in corporate location decisions. Also, although our consultant’s study included various types of companies, the study’s focus was on the location of offices of those companies. Offices in the consultant’s survey performed such functions as professional services, management, computing, secretarial, clerical, and administrative work, functions similar to those performed by government offices. To accomplish the fourth objective, which concerned the potential of certain federal functions to relocate to rural areas, we used the agency survey described above and interviewed officials at 13 of the 14 cabinet agencies about the location of functions—such as printing, personnel benefits administration, and procurement—that are often conducted on an agencywide basis. Experts in government management and personnel management had identified such functions as those that could be conducted in nonurban areas. At these agencies, we contacted the chief technology and human resources officials to inquire whether each of these agencywide functions was being conducted in an urban or a nonurban area and why. 
These officials were also asked to report the impact of telecommunications technology on the location of these agency functions and whether technology had made rural areas more viable as site locations. We also reviewed several of our reports, which provided background information on all four of our objectives. We did our review between August 2000 and May 2001 in Washington, D.C., and in Philadelphia, PA; Denver, CO; and Fort Worth, TX, where we were already conducting an examination of GSA files for another assignment. Our review was conducted in accordance with generally accepted government auditing standards. The Smithsonian Institution is an independent trust instrumentality of the United States. [Table 9, which lists each agency and indicates whether its location is in Washington, D.C., or not in a metropolitan statistical area (MSA), is not reproduced here.] As table 9 shows, some agencies have been provided independent statutory authority to acquire real estate, and some of that authority is broad. For example, the Tennessee Valley Authority is authorized to purchase or lease real property that it deems necessary or convenient in transacting its business, and the Securities and Exchange Commission is authorized to enter into real property leases for office, meeting, storage, and other space as is necessary to carry out its functions. Other agencies’ statutory authority to acquire space is for more limited purposes. For example, the Secretary of the Interior is authorized to lease buildings and associated property for use as part of the National Park System, while the Secretary of the Treasury is authorized to lease space for the storage of unclaimed or other imported merchandise that the government is required to store. 
In addition to those named above, Alan Belkin, Lucy Hall, Brandon Haller, Stuart Kaufman, Thomas Keightley, Gary Lawson, Susan Michal-Smith, Edward Warner, and Greg Wilmoth all made key contributions to this report.
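As an aside, the site-universe and response-rate arithmetic described in the scope and methodology discussion above can be restated in a short illustrative script. All figures come directly from the report; the script itself is only a sketch, not part of the review methodology.

```python
# Site universe, survey response rate, and rural classification
# as reported in the scope and methodology discussion.
gsa_sites = 81          # qualifying sites acquired through GSA (29 agencies)
independent_sites = 37  # sites acquired under agencies' own authority (12 agencies)
total_sites = gsa_sites + independent_sites  # 118 sites in the survey universe
total_agencies = 29 + 5  # 5 of the 12 independent agencies were not among the 29

responses = 115  # completed surveys received as of May 3, 2001
response_rate = round(100 * responses / total_sites, 1)  # 97.5 percent

rural_threshold = 25_000  # population at or below which a community was "rural"
rural_sites = 23 + 9      # 23 of the GSA-acquired sites plus 9 independent sites

print(total_sites, total_agencies, response_rate, rural_sites)
```

Running the sketch reproduces the totals cited in the text: 118 sites, 34 agencies, a 97.5 percent response rate, and 32 rural sites.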
Increasingly, public attention has focused on the health insurance status of Americans between the ages of 55 and 64. Although federal legislation establishes the normal retirement age for full pension benefits at 65, many individuals leave the labor force 5 to 10 years earlier. Labor force participation rates among 55- to 64-year-old men have declined since at least the 1960s. For those who retire before becoming eligible for Medicare, the availability of health benefits is of particular concern. Coverage for most Americans is tied to employment—the very link that is severed by retirement or loosened by a person’s gradual detachment from the labor force. Since 55- to 64-year-olds are more likely to use medical services, insurance they purchase directly in the individual market may be expensive and, given the decline in income that typically accompanies retirement, hard to afford. Because fewer employers offer retiree health coverage as a benefit and individually purchased insurance, when available, may be prohibitively expensive, the proportion of this age group that is uninsured may rise. Although most of the near elderly receive coverage as a benefit through their employer, some purchase health insurance on their own. The former is commonly referred to as employer-based group coverage and the latter as individual coverage. Complementing these two types of private health insurance are public programs, including Medicaid for the poor and Medicare for the elderly and disabled. Fundamental differences distinguish employer-sponsored group coverage from the individual insurance market and public insurance programs. Employer-Based. Eligibility for group health coverage through an employer typically depends on holding or having held a full-time job or working a sufficient number of hours to meet a minimum eligibility requirement. Increasingly, however, firms are imposing age and length-of-service eligibility requirements for retiree health benefits. 
Premiums in the group market are often considerably lower than those in the individual market because they are based on the experience of the entire group, and the larger the group, the smaller the impact of high-cost individuals on the overall premium. Also, individuals with employer-based coverage do not face the task of accessing the insurance market or identifying and comparing a multitude of products on their own. Rather, the employer arranges access and greatly simplifies the task of identifying and comparing products. Employers who offer health coverage generally provide a comprehensive benefit package with an associated deductible and copayment. Normally, annual out-of-pocket costs are capped, and health services beyond that point are reimbursed at 100 percent. Finally, selecting cost-sharing options and paying for the products is often eased by employer contributions and payroll deductions. Individual Market. Instances when Americans may turn to the individual market for health insurance include employment in part-time or temporary jobs, periods of unemployment between jobs, and retirement prior to Medicare eligibility. Unlike employer-based health benefits, however, eligibility and premiums in the individual markets of many states are determined on the basis of the risk associated with each applicant’s demographic characteristics and health status. As a result, coverage in the individual market for those aged 55 to 64 and for individuals whose health is declining may be unavailable or considerably more expensive. Since consumers must absorb the entire cost of coverage themselves, carriers have recognized the importance of offering affordable options to people with different economic resources and health needs, and offer a wide range of health plans with a variety of covered benefits and cost-sharing options. 
The cost-sharing arrangement selected is a key determinant of the price of an individual insurance product—the higher the potential out-of-pocket expenses, the lower the premium, and the greater the financial risk to the consumer. Finally, because carriers in many states can exclude preexisting health conditions from coverage, the benefits purchased may not be comprehensive. Recent federal legislation, discussed below, prevents preexisting condition exclusions for eligible individuals leaving group coverage. Public Insurance Programs. Significant differences also exist in eligibility for and coverage available through public programs such as Medicaid and Medicare. Medicaid, financed jointly by the federal government and the states, is the dominant public program for financing health coverage for low-income Americans—families, primarily women and children, and the aged, blind, and disabled. Medicare is a national insurance program established in 1965 for elderly Americans aged 65 or older. For Americans under age 65, only those with end-stage renal disease or those who have been determined disabled under the Social Security Act qualify for Medicare. Disabled individuals must fulfill a 2-year waiting period before they are eligible for Medicare; however, in most states, the low-income disabled who receive Supplemental Security Income automatically qualify for Medicaid. Medicare benefits contain more gaps than those offered through Medicaid or a large employer. For example, standard (fee-for-service) Medicare has separate benefits for hospitalization (part A) and physician/outpatient (part B) services. Those eligible for Medicare are automatically enrolled in part A but must pay a premium to elect part B coverage. Part A has a relatively high deductible for each hospitalization and requires copayments for stays longer than 60 days. Part B has a separate deductible, requires 20 percent coinsurance for physicians’ bills, and does not cover prescription drugs. 
Unlike most employer-based insurance, neither part A nor part B has a limit on out-of-pocket costs. To cover some of the gaps in Medicare coverage, beneficiaries often purchase Medigap insurance; alternatively, if available, they may enroll in a Medicare managed care plan, which generally offers a richer benefit package than fee-for-service Medicare, often with no premium. Finally, some beneficiaries have access to employer-based retiree health benefits, which supplement their Medicare coverage. Medicaid, like most employer-sponsored coverage, offers a comprehensive benefit package, but the depth of coverage varies substantially among states. Federal guidelines require coverage of a broad range of services, including inpatient and outpatient hospital care, physician services, laboratory services, and nursing home and home health care. Most of those enrolled in the program incur no out-of-pocket expenses. Although the decision to offer health benefits to workers or retirees is essentially voluntary, several federal laws have influenced their provision by employers. For example, since 1954, the tax code has encouraged employment-based health coverage by making employer health benefit payments tax deductible and by excluding employer-provided benefits from employees’ taxable income. Also, ERISA, which was enacted in 1974, allows employers to offer uniform national health benefits by preempting states from directly regulating employer benefit plans. ERISA, however, does impose some federal requirements on employer-based plans, including requirements to provide employees with a plan description within 90 days of enrollment and implement a process for appealing claim denials. Because of the federal preemption of state regulation, the rights of active and retired employees under ERISA are largely determined in the courts. Appendix I contains a description of the role of ERISA in safeguarding access to coverage provided voluntarily by an employer. 
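The contrast drawn above between employer-based cost sharing, which caps annual out-of-pocket spending, and Medicare parts A and B, which do not, can be made concrete with a short sketch. The function names and all dollar amounts below (deductibles, coinsurance rate, and cap) are hypothetical, chosen only for illustration; the report does not specify these figures.

```python
def employer_plan_oop(expenses, deductible=500.0, coinsurance=0.20, oop_cap=2000.0):
    """Out-of-pocket cost under a typical employer plan: a deductible,
    then coinsurance, with annual out-of-pocket spending capped (after
    which services are reimbursed at 100 percent)."""
    oop = min(expenses, deductible) + coinsurance * max(expenses - deductible, 0)
    return min(oop, oop_cap)

def uncapped_coinsurance_oop(expenses, deductible=100.0, coinsurance=0.20):
    """Out-of-pocket cost with a deductible and 20 percent coinsurance
    but no annual limit, as described for standard Medicare part B."""
    return min(expenses, deductible) + coinsurance * max(expenses - deductible, 0)
```

For a beneficiary with $50,000 in covered bills, the capped plan in this sketch costs at most $2,000 out of pocket, while uncapped 20 percent coinsurance exceeds $10,000, which is why beneficiaries often purchase Medigap insurance or enroll in a managed care plan to fill the gap.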
In addition, federal law guarantees that individuals leaving employer-sponsored group health plans have access to continued coverage, and ultimately to a product in the individual market. First, the Consolidated Omnibus Budget Reconciliation Act of 1985 (COBRA), which amended ERISA, requires group health plans covering 20 or more workers to offer 18 to 36 months of continued health coverage to former employees and their dependents in certain circumstances, such as when an employee is terminated or laid off, or quits or retires. Although COBRA is not specifically targeted at the near elderly, it clearly provides this age cohort with the opportunity to continue health coverage as they transition from the active workforce to retirement. The mandate to offer continuation coverage, however, does not oblige employers to share in the premium. The Health Insurance Portability and Accountability Act of 1996 (HIPAA) further guarantees access to individual market coverage to individuals leaving group health plans. Group-to-individual portability is available to eligible individuals who, among other criteria, have exhausted their available COBRA or other conversion coverage, regardless of their health status and without the imposition of coverage exclusions for preexisting conditions. HIPAA, however, does not provide similar guarantees of coverage for others in the individual market. The Chairman, Senate Committee on Labor and Human Resources, asked us to assess the ability of Americans aged 55 to 64 to obtain health benefits through the employer-sponsored or individual insurance markets. He specifically asked for information on the near elderly’s (1) health, employment, income, and health insurance status; (2) ability to obtain employer-based coverage if they retire before becoming eligible for Medicare; and (3) use of and costs associated with purchasing coverage through the individual market or COBRA continuation insurance. 
To determine the demographic and health insurance status of the near elderly, we analyzed the March 1997 Current Population Survey (CPS). Appendix II discusses some of the strengths and limitations of the CPS and other surveys that we considered. As part of our analysis of the CPS, we separately examined two subgroups of the near elderly—those aged 55 to 61, who are more likely to be in the labor force, and those aged 62 to 64, who have a greater chance of being retired. Since the March CPS asks respondents about their employment, retirement, health, income, marital, and social security status, we were able to make observations about the relationship of these variables to the health insurance status of the near elderly. To supplement CPS data on the health status of this age cohort, we also obtained more objective data on their health conditions, health care use, and health care expenditures from the Agency for Health Care Policy and Research and the National Center for Health Statistics. To determine trends in employer-based health insurance coverage for those who retire before reaching Medicare eligibility, we conducted a literature review on employer-based health benefits for early retirees. The focus of that review included information on (1) factors contributing to the decline in employer-based benefits, (2) terminations of retiree coverage, (3) changes in the terms and conditions under which coverage is made available to both current and future retirees, and (4) retirement and the influence of health benefits. We culled data on more recent trends in retiree coverage from periodic surveys sponsored by private benefit consultants and by the federal government. In general, we only reported trend data from nationally representative surveys. Information on continuation coverage is not available from the March 1997 CPS. 
Consequently, in order to examine the extent of the near elderly’s utilization of COBRA coverage, we relied on analyses of two special CPS supplements sponsored by the Pension and Welfare Benefits Administration of the Department of Labor—one conducted in 1988 and a second in 1994. We supplemented these analyses with data drawn from (1) the administrative records maintained by a COBRA third-party administrator and (2) an annual survey that attempts to measure adverse risk selection as a result of COBRA. To determine the access of the near elderly to the individual insurance market, we updated information collected in our 1996 report on the cost and coverage trade-offs faced by Americans who rely on this market for coverage. In particular, we contacted officials from a number of state insurance departments and insurance carriers to obtain information about carrier underwriting practices, current premium prices for the most popular products, and recent state and federal legislation that affect individuals’ access to this market. Because certain aspects of individual insurance markets can vary significantly among states, our 1996 study relied on case studies of such markets in a number of states. Although the findings from these states, including the premium prices of individual products, cannot be generalized to the nation as a whole, we believe they are reasonably representative of the range of individual insurance market dynamics across the country. Also updated were 1995 data for each state concerning individual market insurance reforms, high-risk pools, and insurers of last resort. The following chapters of this report focus on how the near elderly obtain health insurance and the obstacles they face in doing so. Understanding a few key distinctions among the various types of surveys used will facilitate the understanding of the data presented in this report. First, surveys can have different units of analysis. 
Certain surveys are based on interviews with a sample of individuals, some of whom are near elderly; others are the product of information collected from a sample of employers or establishments. Because of these different units of analysis, it is often difficult to make comparisons across the two types of surveys. Second, although various surveys collect information relevant to understanding the insurance status of the near elderly, 55- to 64-year-olds are not usually their primary focus. As a result, a particular sample may not be sufficiently large to precisely answer questions about a certain subset of the near elderly. Alternatively, the survey (or an analysis by others) may have defined the near-elderly group differently, making it difficult to report on an issue with respect to 55- to 64-year-olds. Third, changes in survey methodology over time often preclude or complicate the identification of insurance trends among the near elderly. This is particularly true of employer survey data from the 1980s but also affects some surveys conducted in the 1990s. Though the changes may have improved the reliability and relevance of the data, the revised results are often not comparable with earlier results from the same survey. Finally, some of the data sets are proprietary, and not all of the information collected is publicly available. The sample sizes, and thus the precision of the estimates derived, vary. Throughout this report, we alert the reader to the source of the survey data being reported, any limitations in that data, and any caveats that must accompany the survey findings because of the size of the sample. A number of experts on retiree health benefits and insurance markets commented on a draft of this report. They generally agreed with our presentation of the evidence on the near elderly’s access to health insurance. We incorporated their comments as appropriate. 
Our review was conducted between August 1997 and January 1998 in accordance with generally accepted government auditing standards. Because near-elderly Americans between the ages of 55 and 64 are different from younger age groups in terms of health, work, and income status, their access to and sources of health insurance also differ. This chapter uses the March 1997 CPS to depict the demographic and insurance characteristics of the near elderly and two subgroups—those aged 55 to 61 and 62 to 64. Compared with younger age groups, the near elderly exhibit declining workforce attachment, health, and income. As the near elderly retire or cut back on their hours of work, they run the risk of severing their link to employer-based health insurance. Nonetheless, the percentage of uninsured in this age group is relatively low because of their increased reliance on health insurance through the individual market, Medicaid, and Medicare. Health, income, and employment status appear to influence how the near elderly obtain coverage. In general, those with individual insurance appear to have more in common with recipients of employer-based coverage than with the near elderly who had other sources of health benefits such as Medicaid or Medicare. Specifically, a smaller percentage of those with employer and individual coverage had low incomes, were minorities, were not working, or were in poor health. Key differences between those with individual and employer-based coverage, however, are that a larger percentage of the former were women, were unmarried, were not employed, and had low incomes. There is also a similarity between the 55- to 64-year-olds who had public insurance and those who were uninsured. As compared with those with other sources of coverage, a higher percentage of both groups had low incomes, were minorities, were not working, or were in poor health. Again, however, there were important differences between these two groups. 
Specifically, compared with those with public insurance, the uninsured were more likely to work, be married, have better health, and have higher incomes. Differences in health, labor force attachment, and family income distinguish the near elderly from younger Americans, underscoring the importance of access to affordable health insurance for this age group. The near elderly comprise about 21 million Americans. One of the fastest growing age cohorts, this group is projected to increase to 35 million over the next 12 years and to nearly double between today and the year 2020—jumping from 8 to 13 percent of the U.S. population. The near elderly might best be characterized as a group in transition. Neither young nor old, 55- to 64-year-olds have reached a turning point in their lives. Many are beginning to focus on withdrawal from the labor force and eventual retirement. For some, this disengagement is motivated by chronic conditions or slowly worsening health, conditions that may be work-related. Those near elderly with children see them growing up and leaving home. Finally, family incomes are beginning to decrease as more individuals adjust to living on a pension. Self-reported health status suggests a pattern of declining health as individuals grow older. Such subjective findings are corroborated by more objective data from the National Center for Health Statistics (NCHS) and Agency for Health Care Policy and Research (AHCPR). Compared with younger age groups, individuals aged 55 to 64 (1) have the highest prevalence of many serious health conditions, (2) are the most frequent users of health care services, and (3) incur higher health care expenditures. In response to a health question on the CPS, the near elderly gave the lowest personal assessments of any group (see fig. 2.1). For example, while almost three-quarters of 25- to 34-year-olds rated their health status as excellent, less than one-half of the near elderly reported their health this positively. 
Conversely, about one-quarter of 55- to 64-year-olds assessed their health as poor compared with only 6 percent of those under age 35. Even among the near elderly, self-reported health status worsens with age. As shown in figure 2.2, nearly one-half of 55- to 61-year-olds rated their health status as excellent compared with 41 percent of 62- to 64-year-olds. Conversely, more individuals over age 61 reported that their health was poor. These self-reported health assessments from the CPS are corroborated by more objective data on the health status of the near elderly. Tables 2.1, 2.2, and 2.3 present NCHS and AHCPR data comparing the health status and expenditures of 55- to 64-year-olds with the experience of younger Americans. As demonstrated by table 2.1, conditions such as diabetes, glaucoma, heart disease, and hypertension are more prevalent among the near elderly than among younger age cohorts. In addition, the near elderly are the most frequent users of many health care services. Their hospital discharge rates and days of hospital care were 51 percent and 66 percent higher, respectively, than those of 45- to 54-year-olds (see table 2.2). Similarly, the near elderly visited physicians at a rate that was nearly 20 percent higher than that of any younger age group. Finally, the near elderly have the highest annual health care expenditures of any group under age 65—estimated to be about $5,000 per person in 1998—45 percent higher than for individuals 45 to 54 years of age, and more than 120 percent higher than for those aged 35 to 44 (see table 2.3). Although a majority of the near elderly reported that they worked for some period of time in 1996, this age cohort is moving from full-time employment into retirement, a change that may result in the loss of employer-based health coverage. 
The transition is apparent in data on the work status of the near elderly and is even starker when comparing the experience of 55- to 61-year-olds with those 62 and older. About two-thirds of the near elderly were employed for some period of time in 1996 compared with about 85 percent of those between the ages of 25 and 54. Almost 43 percent were employed full time for the entire year. The remainder either worked full time for part of the year (9 percent) or part time (13 percent). And the majority of part-timers worked fewer than 20 hours per week. Of those who were employed in 1996, about 18 percent were self-employed, with the remainder working in either the private sector or government. The remaining one-third of the near elderly were out of the labor force entirely. As shown in figure 2.3, almost 80 percent of nonworkers reported retirement, illness, or disability as the main reasons for not working. Another one-fifth did not work in order to care for their homes and families. Few of the nonworking near elderly were displaced from a job or looking for work. Only about 117,000 (1.5 percent) reported “inability to find a job” as the main reason for not working. This estimate is corroborated by responses to a related question, in which about 155,000 (2 percent) of nonworkers said that they had been laid off or were looking for work during that period. The near elderly did not differ from other age groups in the extent to which they were displaced from work. While the fact that fewer than one-half of the near elderly worked full time for the whole year suggests a transition to retirement, the progression is even more evident when comparing the employment status of the 55- to 61-year-old members of this group with those 62 and older. Figure 2.4 demonstrates that by age 62 an even smaller percentage worked full time and over one-half were not employed at all. 
Another indicator of detachment from the workforce for 62- to 64-year-olds is the proportion who elect Social Security benefits before they reach the normal retirement age of 65. In 1996, about one-half of this age group who were eligible elected to receive Social Security benefits early with a reduced annuity, and only about one-third of those individuals worked at all that year. As shown in figure 2.5, the relationship between age and retirement is also reflected in the reasons individuals reported for not working. Almost two-thirds of those 62 and older were retired compared with about one-third of the younger near elderly. However, fewer of the former indicated they did not work because of illness or disability or because they were taking care of home and family. The transition into retirement as the near elderly grow progressively older could, in part, be influenced by their worsening health status. As noted earlier, health status declines with age and self-reported health status is slightly worse for the older members of this age group. When the overall group’s employment status is examined in the context of its health status, we find that a much smaller percentage of those in poor health worked during 1996 compared with those who reported having better health (see fig. 2.6). In 1996, the median family income for people between the ages of 55 and 64 was about $40,000. A comparison of their income with that of other age groups, however, suggests that income peaks before age 55 and then declines. As shown in table 2.4, the median family income rose from a low of about $36,000 for people aged 25 to 34 to a high of $52,000 for 45- to 54-year-olds. In contrast, the median family income dropped for the near elderly. Although the median family income of 55- to 64-year-olds was about $40,000, almost 20 percent of this age group lived close to or below the poverty level. 
About 18 percent of these individuals had incomes less than 150 percent of the poverty level in 1996, and about 10 percent had a total family income below the poverty level. Figure 2.7 shows the distribution of family income for the near elderly. About one-quarter had a family income of less than $20,000 and almost 40 percent earned less than $30,000. However, over 20 percent of the near elderly had a total family income of $75,000 or more. In addition to changes in health, work, and income status, the interval between ages 55 and 64 is also a transitional period in terms of health insurance. Eligibility for Medicare is up to 10 years away, and employer-based coverage may well end with retirement. Consequently, access to individually purchased coverage and to public programs for the poor and disabled becomes increasingly important with age. For some near elderly, however, the lack of an affordable alternative results in their being uninsured. Given that aging is associated with a higher utilization of health care services, it is not surprising that the near elderly are among the age groups most likely to have insurance and least likely to be uninsured. According to our analysis of the March 1997 CPS, about 18.5 million near-elderly Americans had health insurance at some time during 1996 and the remaining 3 million were uninsured. As shown in table 2.5, the near elderly and those aged 45 to 54 were the most likely groups to be insured. While as likely to have insurance as those aged 45 to 54, the near elderly access their coverage differently (see fig. 2.8). Through age 54, each successive age group was more likely to have employer-based coverage and less likely to be uninsured. This pattern was broken by the near elderly, however, as employer-based coverage was lower than for most other age groups. In part, this reflects their disengagement from the labor force and the lower probability of firms offering retiree coverage. 
On the other hand, the likelihood of the near elderly being uninsured was no different from that of 45- to 54-year-olds. Individual insurance and public programs such as Medicare compensated for the drop in employer-based coverage for the near elderly. The decreased reliance on employer-based health insurance for the near elderly is most pronounced among the oldest members of the group. As shown in table 2.6, the percentage of 62- to 64-year-olds with such coverage was almost 8 points lower than for the younger members of the near elderly. This further decline in employer-based coverage would be expected to be accompanied by increases in the number of uninsured and in the numbers obtaining coverage through the individual market and Medicare. All three categories did in fact show an increase among 62- to 64-year-olds; these differences, however, were statistically significant only for Medicare. As noted earlier, the health, employment, and income of individuals change as they grow older. Our analysis of the March 1997 CPS indicates that these changes affect the insurance status of the near elderly. Overwhelmingly, those who have better health, are employed, or have higher incomes are more likely to be insured and to have coverage through an employer. Conversely, those in poor health, who are not working, and who have low incomes have a greater probability of being uninsured or relying on Medicare or Medicaid. Although the data also suggest that certain characteristics are linked to the likelihood of having individual insurance—having better health, working part time, and having low income—the results were not statistically significant. Among the near elderly, a better self-reported health status translated into a greater likelihood of being insured and of obtaining this coverage through an employer. In contrast, those who rated their health as poor were more likely to be uninsured or to obtain coverage through a public program. 
As shown in table 2.7, only 43 percent of those with poor health had employer-based coverage, while about 76 percent of those with excellent health and 66 percent of those with good health were covered through an employer. And individuals in poorer health were at least 10 times more likely to be covered through Medicare or Medicaid, compared with those in the best of health. Poor health status, however, does not guarantee access to insurance, as reflected in the fact that about 18 percent of the near elderly who reported their health status as poor were uninsured. Among the near elderly, there is a link between insurance status and three work-related variables: (1) number of hours worked, (2) nature of the employment, and (3) type of industry. First, the near elderly typically had insurance, but those who worked full time were more likely to be insured. More than 90 percent of the near elderly who worked full time had some kind of health insurance, compared with 82 percent of those who did not work at all. Moreover, the number of hours worked affected the source of coverage—that is, whether the insurance was obtained through an employer, the individual market, or public sources (see fig. 2.9). For example, 81 percent of the near elderly who worked full time in 1996 had employer-based coverage, compared with only 65 percent who worked part time and only 46 percent of those who did not work. These differences are even more dramatic when we distinguish employer-based coverage through the individual’s employer from that obtained through a spouse. Specifically, about 73 percent of full-time workers had coverage through their employer, compared with 46 percent of part-time workers and 25 percent of those who did not work. In addition, those aged 55 to 64 who worked part time were more likely to purchase individual insurance than were those who worked full time. 
This pattern may be explained by the possibility that those who worked full time were more likely to have employer-based health insurance at retirement. As was the case with health status, there is a relationship between not working and reliance on public sources of coverage. Thus, those who were not employed in 1996 were at least 10 times more likely to have Medicare or Medicaid than the near elderly who were employed full time. Second, the insurance status of 55- to 64-year-olds varied by the nature of their employment, that is, whom they worked for. Thus, individuals who worked for an employer as opposed to being self-employed were more likely to have employer-based health insurance through that employer, while the latter were more likely to have individually purchased insurance. Eighty-three percent of those who worked for a public employer in 1996 had coverage through their employer as did 67 percent of those who worked for a private employer. In contrast, 42 percent of the incorporated self-employed and 27 percent of the unincorporated self-employed had this source of coverage. However, only 4 percent of individuals who worked for a public employer and 6 percent who worked for a private employer had individually purchased insurance compared with more than 20 percent of the self-employed. Finally, health insurance was more common in certain industries. As shown in figure 2.10, the near elderly employed in public administration, manufacturing, mining, transportation, and professional services were the most likely to have health insurance through their employer, while those who performed personal services or worked in agriculture, fishing, and forestry were the least likely to have coverage through this source. 
As noted in chapter 3, an increasing share of the labor force is working in the service sector, while a decreasing share is working in manufacturing and transportation; hence, the number of retirees without insurance through an employer could be higher in the future. As reported earlier, almost 97 percent of the near elderly who did not work in 1996 reported retirement, illness or disability, or caring for their home or family as their main reason for being out of the labor force (see fig. 2.3). Additionally, a small number (about 117,000 individuals) in this age group indicated that they were unemployed in 1996 because they were unable to find work. Just as the insurance status of the near elderly varied according to their relative attachment to the workforce or to the type of work performed, both whether individuals had insurance and the type of insurance they held varied by the reasons given for not working (see table 2.8). First, whether or not an individual had insurance differed depending on the reason given for not working. For example, about 83 percent of the retired and 88 percent of the ill or disabled had some kind of health insurance, compared with 72 percent of those who were caring for a home or family and only 47 percent of those who could not find work. Second, the source of coverage held by the near elderly differed depending on the reason they did not work. While both the retired and the ill or disabled were the most likely to have health insurance, the former were more than twice as likely to have employer-based insurance as the latter. Conversely, the ill or disabled were more than three times as likely to be covered by Medicare and 10 times more likely to be covered by Medicaid than those who were retired. As shown in table 2.8, those who were caring for a home or family essentially mirrored the retired group with respect to source of insurance. Most of those caring for a home or family, however, obtained coverage through a spouse. 
Among these four groups, the percentage of uninsured was highest for those reporting an inability to find work, but because of their small representation in the overall sample, we could not make further observations. As mentioned earlier, income is lower for individuals 55 to 64 years of age than for younger groups. Whether or not the near elderly had insurance, as well as their source of insurance, however, differed by income level. Compared with the near elderly with high incomes, those with low incomes were more likely to be uninsured or to rely on Medicaid or Medicare. As shown in table 2.9, the percentage of 55- to 64-year-olds without insurance fell from a high of about 33 percent for those with incomes less than $10,000 to about 6 percent for those with incomes of $75,000 or more. Similarly, the proportion covered by Medicaid and Medicare dropped significantly when incomes exceeded $20,000. The near elderly with low incomes were also the least likely to have employer-based coverage. As shown in table 2.9, those with incomes less than $10,000 had the lowest level of employer-based coverage, while such coverage increased significantly up to the $30,000 income level and then gradually rose as income exceeded this amount. Despite their limited resources, the near elderly with low incomes purchased individual insurance at about the same rate as did those with higher incomes. Although table 2.9 suggests that the low-income near elderly were more likely to purchase individual insurance than those with higher incomes, these differences were not statistically significant. Focusing discretely on the individual demographic characteristics of the near elderly as they relate to insurance status provides a fragmented portrait of those who have a particular type of insurance or who are uninsured. 
Table 2.10 profiles 55- to 64-year-olds by source of insurance—highlighting the extent to which the most vulnerable have coverage through employer-based, individual, or public insurance or go without insurance altogether. Appendix III has a more detailed profile of the near elderly by source of coverage as well as demographic and insurance profiles of those 55 to 61 and 62 to 64 years of age. In general, the near elderly with employer-based insurance are similar to those with individual coverage. Only a small percentage had low incomes, were minorities, were not working, or were in poor health. Key differences between these groups, however, relate to their gender, marital status, work status, and income. Specifically, as compared with those with employer-based insurance, a larger percentage of those with individual insurance were women, unmarried, and unemployed and had low incomes. Likewise, there is a similarity between 55- to 64-year-olds who had public insurance and those who were uninsured. A relatively higher percentage of both groups had low incomes, were minorities, were not working, or were in poor health. Again, however, there were important differences between these groups. Compared with those with public insurance, the uninsured were more likely to work, have better health, and have higher incomes, but were less likely to be married. Focusing on the most vulnerable, however, obscures the extent to which 55- to 64-year-olds with higher incomes are uninsured. Thus, over 20 percent of the uninsured had incomes of $50,000 or more. Employers have been the main source of health insurance for Americans since World War II. During the 1950s, large employers began to incorporate health coverage for retirees into their benefit packages. The trend toward more widely available and more generous retiree health benefits began to change in the 1980s. 
Today, many policymakers are concerned about the future viability of employer-based retiree health coverage and the implications for older Americans who are not yet eligible for Medicare. Evidence from several different sources paints a picture of eroding retiree health benefits. Because each of these sources alone gives an incomplete picture, this chapter uses both employer and retiree surveys to describe the current situation and future outlook for employer-based retiree health benefits. The number of medium and large employers offering health insurance to retirees appears to have dropped precipitously from levels reported in the 1980s. Moreover, during the 1990s, it has continued to drift slowly downward. At the same time, the decline in employers offering retiree coverage has been exacerbated by a shift in employment away from firms more likely to offer coverage toward those less likely to do so, that is, from manufacturing to service industries. Even when large employers offer retiree health benefits, retiree participation has declined—a development attributed to the trend toward greater cost sharing. However, this decline has been offset, in part, by an increase in labor force participation among women. Thus, retirees who decline coverage from a former employer may have access to less expensive insurance through a working or retired spouse. Although the decision by larger employers not to offer retiree health benefits has affected some current retirees, it will have a greater effect on those who will retire in the future. This finding appears to be supported by the fact that the decline in the availability of employer-based coverage has not yet resulted in a commensurate increase in the number of early retirees without private health insurance. 
Though employer surveys demonstrate that fewer firms are offering retiree health coverage, they provide limited evidence as to how changes in the terms under which such benefits are proffered affect their affordability for both current and future retirees. The sketchy evidence available does suggest that retirees are being asked to contribute a larger share of the premium than active employees. If past trends are a reliable indicator, increased cost sharing may suppress the demand for retiree health benefits even though some firms continue to make them available. The erosion in retiree health coverage has persisted, despite a turnaround in two trends that had contributed to the decline—the abatement in health care inflation and the reemergence of a strong, internationally competitive economy. This persistent erosion raises a fundamental question about the future protection available to retired individuals through employer-based health insurance. Employer-based health coverage for active employees had become a standard benefit by the early 1950s. According to Rappaport and Malone, however, retiree health coverage evolved more as an afterthought to pension benefits—a way to ease the transition from employment to retirement. Health insurance was generally considered a goodwill gesture and an inexpensive addition to the total retirement package. Eligibility was usually based on pension plan eligibility, regardless of the retiree’s age or years of service. And many employers paid the full premium for retiree health coverage because of its reasonable cost at the time and the difficulty of collecting premiums from retirees. Medicare, created in 1965, spurred the general expansion of retiree health coverage by making it much less expensive for employers to help meet retiree health care needs. Most employers that provided retiree health coverage did so on a lifetime basis. 
The trend, especially for firms with labor unions, was to continuously improve retiree health benefits. With relatively few retirees, comparatively small health benefit costs, and a philosophy that American manufacturing would continue to dominate world markets, employers rarely even measured or voiced concern about the cost of retiree medical benefits. This situation began to change during the 1980s. A coincidence of factors and trends gave rise to attempts by some employers to modify or even eliminate retiree health benefits, including (1) sharply rising medical costs, (2) heightened foreign competition, (3) corporate takeovers, (4) the declining bargaining power of labor, and (5) a change in accounting standards. This last factor is often cited as a major contributor to the decline in employer-based retiree health coverage. In 1993, after over a decade of discussion, large employers were required to report annually on the liability represented by the promise to provide retiree health benefits to current and future retirees. The new accounting standard, commonly referred to as FAS 106, does not require that employers set aside funds to pay for these future costs and thus does not affect their cash flow. There was concern, however, that these liabilities would affect companies’ stock prices. Since employers typically cover retiree health costs as they are incurred, this liability is largely unfunded. The estimated liability in 1988 of between $221 billion and $332 billion was staggering and is widely viewed as having served as a wake-up call to employers about the magnitude of their future obligations. In responding to benefit consultant surveys, many companies cited the fact that FAS 106 results in reductions in reported income and shareholder equity as a reason for reassessing the nature of their commitment to retiree health benefits. The picture of the extent to which large employers offered retiree health benefits during the 1980s is murky at best. 
Much of the available evidence is from surveys conducted by major benefit consultants using current or potential clients as their sample. Since these clients (larger employers) are more likely to offer retiree health coverage, the estimates derived from such a nonrandom sample are likely to reflect an upward bias. Table 3.1 compares estimates from five such surveys conducted between 1983 and 1988. The results from two surveys—the Washington Business Group on Health (WBGH) and Hewitt—appear to be outliers. The WBGH estimates are based on a very small sample size (131 firms). The Hewitt results are higher than other 1980s estimates and similar to results Hewitt reported in 1997. Thus, Hewitt’s finding that 92 percent of large firms offered early retiree coverage in 1996 suggests that little change has occurred among large employers since 1985. A 1984 Department of Labor survey also sheds some light on the prevalence of employer-based retiree health benefits. At firms with 100 or more employees, 60 percent of workers had their coverage continued when they retired early. These results are in line with the range of estimates shown in table 3.1. While the limited data available suggest that upward of 60 to 70 percent of large employers offered retiree health insurance in the 1980s, far fewer than half do so today, and that number is continuing to decline despite the recent period of strong economic growth. That evidence, from more rigorous employer surveys conducted in the past several years, is corroborated by surveys sponsored by the Labor Department. Results from periodic surveys conducted by two benefit consulting firms, Mercer/Foster Higgins and KPMG Peat Marwick, are consistent and indicate a further decline in the availability of retiree coverage from medium and large employers between 1991 and 1997. 
Both surveys are based on a random sample whose results can be generalized to a larger population of employers rather than on a database of clients such as that used by Hewitt and others. See appendix II for more information on the characteristics of the Foster Higgins and Peat Marwick surveys. As shown in figure 3.1, Foster Higgins indicated an overall decline of 8 percentage points in coverage offered to early retirees, while Peat Marwick reported a drop of 9 percentage points for all retirees during roughly the same period. Unlike Foster Higgins, Peat Marwick did not report separately on early and Medicare-eligible retirees. The trends outlined in figure 3.1 raise a question about assessments by some experts that retiree health offerings have stabilized or that the decline has been limited. Although the erosion is slow, its cumulative impact is significant. In addition to employer surveys, interviews with retirees provide another, albeit indirect, source of data on employer-based health coverage for the near elderly. A 1995 report by the Pension and Welfare Benefits Administration of the Department of Labor shows the extent to which retirees were covered by employer-based health insurance at various points in time—before retirement, just after retirement, and at some subsequent date. The report compares data collected on retiree health coverage from special supplements to the August 1988 and September 1994 CPSs. The resulting data provide only a limited picture of employer trends because they (1) are based on interviews with retired workers and (2) do not always clearly distinguish between the availability of coverage and a worker’s decision not to participate in employer-based retiree coverage. If a worker did not “continue” such coverage, the individual was asked the reasons for discontinuation. Since questions about reasons for discontinuing coverage were expanded in the 1994 survey, it is difficult to make a precise comparison across the periods. 
The Labor Department’s analysis of the CPS data revealed a significant erosion between 1988 and 1994 in the number of individuals who retained employer-based health coverage upon retirement. As shown in table 3.2, 42 percent of retirees aged 55 and older continued such coverage into retirement in 1994, a decline of 8 percentage points since 1988. Among the numerous reasons cited in the 1994 survey for discontinuing coverage were (1) “eligibility period expired,” (2) “retirees not covered,” and (3) “became ineligible after employer amended plan.” Combining these three factors, about 34 percent of early retirees in 1994 were not eligible to enroll in an employer’s plan after retirement. Although it is not possible to provide a precise estimate of how much of the decline is due to lower offer rates by employers, it seems reasonable to attribute at least some portion of the decline to this factor. The data also showed that the percentage of individuals with employer-based coverage continued to decrease throughout retirement. Only 34 percent still retained coverage several years after retirement. The decline in participation during retirement has several explanations. First, some individuals elect COBRA at retirement because no retiree coverage is offered. Such coverage, however, is only temporary—generally 18 months for a worker leaving a job. Second, as figure 3.1 shows, firms are less likely to offer coverage to individuals who are Medicare-eligible than to early retirees. Thus, some retirees may have lost employer-based coverage when they reached age 65. Third, some individuals qualify for Medicare before age 65 because of a disability. Fourth, some retirees have access to health insurance through a spouse’s employer. Fifth, some employers may have unexpectedly stopped offering coverage to retirees after an individual retired. 
Finally, evidence suggesting reduced participation by retirees as a result of employer-required cost sharing will be discussed later in this chapter. Based on our analysis of CPS data, the percentage of early retirees with private health insurance (both employer-based and individually purchased) fell 7 percentage points from 76 percent to 69 percent between 1989 and 1995. The decrease in the proportion of early retirees with private health insurance does not appear to correspond to the magnitude of the decline in the availability of retiree coverage documented in employer surveys and in the 1988 and 1994 CPS supplements. Among the possible reasons for the mismatch between availability and coverage trends are that (1) the decision to retire is often predicated on the availability of health benefits; (2) coverage may be available through other sources, such as a working or retired spouse; (3) employers’ decisions not to offer retiree health benefits are frequently directed at future rather than current retirees; and (4) individuals may have postponed their retirement plans to avoid becoming uninsured or because of the high costs of purchasing individual insurance or COBRA continuation coverage. Appendix IV discusses the available research on the relationship between the availability of health insurance and the decision to retire early. The cancellation of benefits for current retirees, often emotionally charged, has captured the attention of the executive branch, the Congress, and the press. The information available on these terminations, primarily in the form of newspaper articles and information on lawsuits brought by affected retirees, is often anecdotal rather than systematic. The perception that more than just a few employers are terminating coverage for current retirees may be fueled by frequent articles discussing cuts to and changes in retiree coverage. 
For example, a lengthy lawsuit, tracked by the press since 1989, involves a challenge to General Motors’ cut in health benefits for salaried retirees—that is, an attempt to introduce cost-sharing requirements for what had heretofore been a benefit provided at little or no cost. GM, however, was not attempting to terminate coverage for these retirees—a subtlety that is sometimes lost in the concern over the general erosion of retiree health coverage. In fact, employer surveys indicate that firms are more likely to terminate benefits for future as opposed to current retirees. Fear of litigation, as well as ethical and public relations concerns, is cited as an explanation for why employers have chosen to concentrate their cost-cutting efforts on future retirees. Despite the future focus of many employers’ actions, survey data suggest that current retirees are also being affected by the decline in offer rates. The Foster Higgins data in figure 3.1 reflect the decline in offer rates among employers that make coverage available to “most retirees,” excluding firms that have terminated health benefits only for future retirees, future hires, or both. Thus, the 8-percentage-point decline in the number of employers offering early retiree coverage suggests that some portion of the erosion has affected current retirees as well. According to the 1994 CPS supplement, 2 percent of retirees—about 40,000 individuals—became ineligible for continued retiree coverage after their employers amended their plans. Aggregate data on the erosion in retiree health coverage obscure significant differences among firms of varying sizes and types of industry. As noted earlier, the larger the firm, the more likely it is to offer health benefits to both active and retired workers. However, the decline in offer rates to retirees, as reflected in figure 3.1, is not restricted to firms at the lower end of the size spectrum reported on. 
Foster Higgins reports that employers with 5,000, 10,000, and even 20,000 or more employees have also shown a decline. Surprisingly, the decline for the largest firms has been uninterrupted; employers in the 500-to-999 and 1,000-to-4,999 size categories, on the other hand, have shown more variability and, according to Foster Higgins, an increase in the offer rate. According to Foster Higgins, jumbo firms employing at least 20,000 workers are more than twice as likely as smaller firms to offer early retiree health insurance. Thus, 69 percent of jumbo firms offered early retiree coverage in 1997 compared with 31 percent of firms with between 500 and 999 employees. However, just 4 years earlier, 84 percent of jumbo firms reported that they offered retiree health benefits. With one exception, Foster Higgins reported that early retiree coverage has declined between 9 and 20 percentage points among firms of all sizes since 1993. For firms with between 1,000 and 4,999 workers, however, the offer rate for early retiree health insurance increased by as much as 10 percentage points, but by 1997 was only 1 percentage point higher than in 1993. As with the overall trend data shown in figure 3.1, Peat Marwick reported more variability by firm size, especially in the 1992 to 1995 time frame, with most firm sizes showing an increased offer rate in 1995. One benefit consultant we met with was very skeptical about the Foster Higgins trend data for firms with 1,000 to 4,999 workers, suggesting that the increase represented health benefits related to early retirement incentive programs. Foster Higgins data indicate that the offer rate for early retiree coverage declined among most industry categories between 1993 and 1997. Government, the only category showing an increase, was among the most likely to offer such benefits in the first place. An increasing share of the labor force works for firms in the service sector and a decreasing share works for firms in the manufacturing and transportation sectors. 
The former are less likely to provide their workers with retiree health benefits. As noted in chapter 2, a person’s utilization of health care services tends to increase with age. Consequently, providing health benefits to retirees is much more expensive than covering younger workers. However, because Medicare is the primary payer for beneficiaries 65 and older, employer costs for retirees drop dramatically once they become Medicare-eligible. Thus, early retirees are about three times as expensive for an employer as retirees enrolled in Medicare. Because of the significant cost differences between early and Medicare-eligible retirees, the proportion of early retirees in the mix of retirees can dramatically affect an employer’s average per-retiree cost. Overall, about 75 percent of retirees in 1994 were over age 65, and thus any employer-based coverage supplemented Medicare benefits; the remaining 25 percent were early retirees not yet eligible for Medicare. Since 1993, both Foster Higgins and Peat Marwick have reported on the average employer cost for early retiree health coverage. For firms that could distinguish between the cost of retirees and active workers, Foster Higgins indicated that the average annual early retiree premium in 1996 was $5,210, having shown almost no change since 1993. Costs fell slightly to $4,985 in 1997, a drop attributed to increased HMO enrollment among early retirees. Foster Higgins does not report on cost variation for early retiree coverage by firm size, region, or industry. Peat Marwick reported that average annual costs for early retirees declined between 1993 and 1995, falling from $5,748 to $5,460. It attributed the decrease to the overall slowdown in inflation in the private sector and to the growth in managed care enrollment among early retirees. As shown in table 3.5, however, costs varied considerably by firm size, industry, and region. 
Thus, the average early retiree premium in 1995 ranged from a low of $4,500 in the health care industry to a high of $6,180 among finance firms. Peat Marwick’s 1997 report did not include comparable data. The cost escalation of the 1980s and early 1990s stimulated employers to become more aggressive in controlling the growth in their health care expenditures. At the same time, as discussed earlier in this chapter, new accounting rules also made employers more conscious of the costs associated with offering retiree health benefits. Though some employers reacted by discontinuing retiree coverage or not offering it at all, those that still provide such benefits have often changed the terms under which they are offered. The objective, as with a similar restructuring of active workers’ benefits, was to help control costs. Three commonly cited changes involve increasing cost sharing, changing eligibility requirements, and reshaping plan choice. While employers have been increasing cost sharing and reshaping plan choice for both active workers and retirees, changes in eligibility requirements generally have been confined to retirees. Those eligibility changes, however, may also have cost-sharing implications. Active management of health benefit costs for retirees focused initially on the costs associated with future retirees—an outgrowth of litigation in the 1980s that made firms more cautious about changing health benefits for individuals who had already retired. To avoid court challenges over benefit changes, employers began to explicitly reserve the right in plan documents to modify those benefits—for both future and current retirees. Today, virtually all employers have done so. Often, older groups of retirees were grandfathered into existing, more generous, health plans and changes were only applicable to new hires or individuals who retired after a certain date. 
In 1992, one researcher estimated that the benefits of about two-thirds of retirees with employer-based coverage seemed secure because they became effective before employers added escape clauses reserving the right to make subsequent changes. However, the 1998 decision in the case brought by General Motors salaried retirees may call into question any commitment by employers to provide previously promised retiree health benefits. According to benefit consultants and employers, many of the modifications made to retiree health plans date from the late 1980s and early 1990s. Employer surveys, as well as our interviews with a judgmental sample of large companies, suggest that firms are continuing to make changes to reduce their overall liability for retiree health care costs—changes that they attribute to their competitive or financial situations. Despite the poor quality of the data available to assess the impact of coverage changes, the bottom line is that future retirees will (1) pay more for coverage and (2) find it harder to become eligible for benefits. And retiree surveys suggest that higher costs for individuals could lead to lower participation rates in employer-based retiree health benefits when such coverage is available. Each year, Foster Higgins tracks the changes made in the past 2 years by large firms that offer retiree coverage. Table 3.6, which summarizes selected changes reported since 1993, suggests that popular cost-control methods are (1) increased retiree cost sharing—both the percentage of premium paid by retirees and the amount of copayments and deductibles, (2) tightened eligibility rules for participating in the employer-based health plan, and (3) provision of a fixed (defined) employer contribution toward the cost of retiree health insurance in lieu of covering whatever medical services are used during the year (often referred to as a defined benefit). 
More recently, employers have attempted to control costs by moving retirees into managed care plans. Additional cost-control measures noted in other employer surveys include lower limits on the total amount of health care costs that will be covered during the lifetime of the retiree and capping employer contributions—a step that may be the prelude to introducing a defined contribution. A 1992 survey conducted by William Mercer suggests that though cost-control changes are being implemented for both current and future retirees, they are often directed at the latter. Further evidence for the tendency of employers to target future retirees is found in data reported by Peat Marwick. Between 1992 and 1993, the percentage of firms that grandfathered current retirees into plans different from those available to future retirees increased from 20 percent to 47 percent. As noted earlier, employers may find it difficult, despite reservations in plan documents that alert retirees to the possibility of changes, to modify benefits for current retirees because of ethical or public relations concerns. Only limited data are available on the nature of the financial responsibility being shifted to future retirees. Reporting differences make it difficult to judge the consistency of the data across various surveys, and the data’s aggregate nature sometimes obscures the variability of changes among firms. More importantly, the limited results often lack a context for judging their impact on the affordability of increased cost sharing. Income and asset data for the affected retirees would be required for such a study. However, a comparison of reported cost sharing for retirees with trends for active workers does suggest that retirees are being asked to shoulder a higher portion of the health benefits premium when they leave the workforce. 
Finally, the Labor Department’s analysis of CPS supplements suggests that retiree participation rates have already been affected by increased cost-sharing requirements. Typically, surveys report on the extent to which retirees or firms are responsible for the cost of health benefits, that is, whether the cost is shared or whether the firm or employee is responsible for all of the cost. Given the reported shift in costs from employers to retirees, one would expect the data to show that fewer employers are paying the entire cost of coverage and more retirees are paying the whole premium themselves. A comparison of data on employer-retiree cost sharing from three different surveys, however, demonstrates that the proportion of retirees responsible for the entire premium has been relatively steady or may have actually decreased. On the other hand, two of these surveys show that fewer employers pay the entire premium, suggesting that costs are not being shifted entirely to the retiree but are being shared. Compared with active workers, retirees with employer-based coverage do appear to be shouldering responsibility for a higher portion of the overall premium. Peat Marwick reported that active employee contributions for family coverage increased from 26.6 percent in 1993 to 32.4 percent in 1995. In contrast, early retiree contributions for family coverage rose from 39 percent to 45 percent over the same time period. Thus, on average, early retirees in 1995 were contributing about $2,340 annually toward the cost of family coverage—about $655 more than active workers. Appendix V uses income data from the March 1997 CPS to estimate the percentage of total family income that a 55- to 64-year-old would have to commit to cost sharing under employer-based coverage using 1995 Peat Marwick estimates of the lowest, highest, and average retiree contribution. The average retiree contribution is 4.7 percent of the 1996 median family income of 55- to 64-year-old married couples. 
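The cost-sharing arithmetic above can be checked with a short calculation. The following Python sketch uses only the figures reported in the text; the roughly $5,200 family premium and the roughly $49,800 median family income are inferred from those figures rather than reported directly, and are assumptions for illustration:

```python
# Sketch of the 1995 Peat Marwick cost-sharing arithmetic cited in the text.
# The family premium is inferred from the reported $2,340 retiree
# contribution at a 45 percent contribution rate; it is an assumption.
implied_premium = 2340 / 0.45                 # about $5,200 per year

retiree_contribution = 0.45 * implied_premium  # early retiree share (45%)
active_contribution = 0.324 * implied_premium  # active worker share (32.4%)

extra_paid_by_retiree = retiree_contribution - active_contribution
print(round(implied_premium))        # 5200
print(round(extra_paid_by_retiree))  # 655, matching the "$655 more" figure

# The text reports that the average contribution equals 4.7 percent of the
# 1996 median family income of married 55- to 64-year-olds, which implies
# a median income of roughly $49,800.
implied_median_income = retiree_contribution / 0.047
print(round(implied_median_income))  # 49787
```

The calculation simply inverts the reported percentages, so any rounding in the published figures carries through to the inferred premium and income.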
On average, Americans under age 65 spent about 4 percent of household income in 1994 on health care—an amount that includes not only insurance premiums or employer-required cost sharing but also out-of-pocket expenses for copayments, deductibles, and services not covered by health insurance. As shown in table 3.7, costs varied considerably by firm size, type of industry, and region. Department of Labor analyses of CPS supplements indicate that factors other than the actual availability of coverage account for an undetermined portion of the decline in retirees with employer-based health benefits. According to the Labor Department, the propensity for retirees to enroll in employer-based plans when they are offered has dropped because of the increased costs retirees are being asked to shoulder by employers. In both the 1988 and 1994 surveys, individuals who declined employer-based coverage at retirement were asked to articulate the reasons for their decision. Of the approximately 5.3 million retirees who discontinued employer-based benefits in 1994, an estimated 27 percent cited the expense as a factor—an increase from 21 percent who cited this reason in the earlier survey. Moreover, there was a 6-percentage-point increase over the same time period in the number of such retirees who indicated that they still had health insurance through a plan other than that of their former employer. Thus, some retirees who find coverage from their own employer too expensive may be switching to plans with lower cost sharing available through a working or retired spouse. Traditionally, employer-based health benefits have been an open-ended commitment by employers to pay for covered medical services. The liability represented by such a commitment as well as the escalating costs of medical services over time has stimulated employers to look for ways to limit their financial obligation, or at least to make it more predictable. 
The shift toward capitated health plans represents one approach. Another technique is for an employer to translate the benefit offered into a cash value either by instituting an aggregate cap on expenditures or by offering retirees a fixed cash benefit. Such an approach is often referred to as a defined contribution. Though several surveys—notably Hewitt (1997) and Mercer (1992)—have addressed the issue of employer caps, others such as Foster Higgins and Peat Marwick have limited data on this phenomenon. The following Hewitt data must be considered with the recognition that they are largely based on information from clients and as a result may overstate the prevalence of employer dollar caps. According to Hewitt, employers began to introduce dollar caps on their future retiree health obligations in the early 1990s, largely in response to new accounting rules that require them to report the accrued obligation for retiree health benefits. Few large employers had such caps in 1991, but by 1996, 36 percent had some form of dollar cap on their subsidy for early retirees, and 39 percent had caps for post-age-65 retiree coverage. Hewitt reports that the caps can take many forms, including (1) caps on total costs, under which the company will not spend more in total for retiree health coverage than twice what was spent as of a certain date; (2) per capita caps, under which the subsidy per person will not exceed a fixed amount; and (3) caps with a service component, under which the employer share is fixed at a specified dollar amount that is then multiplied by years of service. Hewitt suggests that many employer caps on retiree health expenditures are fixed dollar caps without a built-in adjustment for inflation. Since a fixed-dollar cap dramatically reduces a firm’s liability for retiree coverage by shifting the responsibility for future cost increases to retirees, Hewitt believes that there will be significant pressure to revisit these expenditure limits in the future. 
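The cap structures Hewitt describes reduce to simple subsidy formulas. A minimal Python sketch follows; the cap forms come from the Hewitt description, but every dollar amount is hypothetical:

```python
# Hypothetical illustration of the three dollar-cap forms described by
# Hewitt. All dollar amounts are invented for illustration only.

def subsidy_per_capita_cap(actual_cost, cap):
    """Employer pays the actual cost up to a fixed per-person cap."""
    return min(actual_cost, cap)

def subsidy_service_cap(years_of_service, dollars_per_year):
    """Employer share is a fixed dollar amount times years of service."""
    return years_of_service * dollars_per_year

def total_cap(base_year_spending):
    """Company-wide ceiling: no more than twice base-year spending."""
    return 2 * base_year_spending

# A retiree whose coverage costs $5,500 a year under a $4,000 per capita cap:
print(subsidy_per_capita_cap(5500, 4000))  # 4000; the retiree pays the rest
# A retiree with 25 years of service at $150 per year of service:
print(subsidy_service_cap(25, 150))        # 3750
```

Because none of these formulas indexes the cap to medical inflation, the retiree's share of any cost growth above the cap rises automatically over time, which is the dynamic the surrounding text describes.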
However, if the caps are not adjusted, retirees will shoulder any future cost increases. Hewitt emphasized that the dollar caps introduced since 1991 are largely intended to control “accounting costs” for purposes of FAS 106. A variation on an expenditure cap is a maximum lifetime benefit. In 1994, Peat Marwick reported that some employers had more restrictive maximum lifetime benefits for their retiree population. Thus, compared with 57 percent of active workers, only 47 percent of retirees have no maximum lifetime benefit or one that is equivalent to $1 million or more. On the other hand, Peat Marwick also reported that retiree lifetime limits were increased for 38 percent of retirees in 1993, with only 2 percent of retirees receiving a decrease. Employers have used changes in participation rules to reduce their liability for retiree health coverage and to differentiate their treatment of workers with varying lengths of service. While the cost implications of these new eligibility rules are clear for employers, their impact on the affordability of coverage is less so. Moreover, changes in labor force mobility could result in fewer active workers ever qualifying for a benefit that is, at the same time, becoming less widely available. In the past, retiree health coverage was treated as a benefit that accrued at retirement. Under those eligibility rules, workers with only a few years of service and those with many years were often treated equally. Because retirement was the only test, the responsibility and cost of a retiree’s health care were borne fully by the last employer. More recently, employers have modified their eligibility requirements by tying them to years of service. The three most common methods employers use to determine eligibility for retiree health benefits are (1) length of service, (2) age, or (3) some combination of the two. 
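These eligibility methods reduce to a simple check. The Python sketch below uses, for illustration, the thresholds most commonly reported in the surveys discussed here (a minimum of 10 years of service, a minimum age of 55, and 75 "points"); whether a given plan applies one of these tests or all of them is plan-specific, so combining them in a single rule is an assumption:

```python
# Hypothetical eligibility check combining the three common methods:
# a minimum age, a minimum length of service, and an age-plus-service
# "points" test. Real plans may apply only one of these tests.

def eligible_for_retiree_coverage(age, years_of_service,
                                  min_age=55, min_service=10, min_points=75):
    points = age + years_of_service
    return (age >= min_age
            and years_of_service >= min_service
            and points >= min_points)

print(eligible_for_retiree_coverage(55, 20))  # True: 55 + 20 = 75 points
print(eligible_for_retiree_coverage(55, 10))  # False: only 65 points
print(eligible_for_retiree_coverage(62, 8))   # False: under 10 years of service
```

A plan that also scales its premium contribution with years of service would layer a separate contribution schedule on top of this yes/no eligibility check.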
Peat Marwick has reported that the proportion of retirees enrolled in plans with both a minimum service and age requirement increased from 56 to 79 percent between 1992 and 1997. In 1996, Foster Higgins reported that the most common service and age requirements were 10 years and 55 years old, respectively. When the requirement is the sum of age and service, Foster Higgins indicated that firms commonly require 75 “points.” For example, an individual at age 55 with 20 years of service would receive 75 points. More stringent eligibility requirements have potentially serious implications for future retirees. First, if workers change jobs frequently, especially as they become older, they may not qualify for retiree health benefits at all. In 1994, 2 percent of workers (over 100,000 individuals) who did not continue employer-based coverage into retirement reported that they failed to meet either the age or the service requirement or some other prerequisite. Second, full health benefits may not accrue at retirement. Thus, some employers tie cost sharing to years of service. For example, an official we interviewed at one company said the company requires 35 years of service to qualify for the maximum employer contribution—75 percent. Retirees with only 19 years of service qualify for a substantially lower employer contribution—30 percent. Many large employers adopted a managed care strategy in the late 1980s to help combat double-digit health care inflation. Thus, between 1987 and 1996 managed care enrollment in employer-sponsored health plans nearly tripled, from 27 percent to 75 percent, and has continued to grow. Until more recently, elderly Americans have lagged behind younger age groups in the extent to which they are enrolled in managed care, but this situation appears to be changing rapidly, especially in the case of early retirees. It is not clear what is accelerating the move of early retirees into managed care. 
Cost sharing and lack of choice may both be contributing, but we do not know how much. In 1996, Foster Higgins reported that the movement of retirees into managed care is helping to slow down the overall growth in employers’ health insurance costs. By 1996, over half of covered early retirees were enrolled in a managed care plan—either a preferred provider organization (PPO), a point-of-service (POS) plan, or an HMO. Only 1 year later, managed care enrollment had grown to 70 percent, largely because of the increase in the number of early retirees joining HMOs. Foster Higgins attributed the 4.3 percent decrease in early retiree costs in 1997 to the jump in HMO enrollment. Table 3.8 compares early retiree health plan enrollment for 1996 and 1997 with that of active workers. According to Foster Higgins, the transition of early retirees into managed care plans has been even more rapid than the earlier shift by active workers. It is not obvious what is motivating early retirees to move so quickly into managed care plans such as HMOs. Clearly, the fact that employers have reserved the right to make changes to early retiree health benefits has increased their flexibility, allowing them to manage the cost of those benefits much as they do for active workers. Moreover, some large employers no longer view early retirees as an extension of their active employee population but recognize that the per capita costs of early retirees make them the most expensive component of their overall health benefit costs. In the case of active workers, employers recognized that financial incentives could be an important tool in encouraging managed care enrollment. Thus, in a 1997 report, we noted that some large employers now vary their subsidy according to the cost of the coverage option, making it cheaper for a worker to enroll in a managed care plan. Interviews with a sample of large employers suggest that some firms are applying this same technique to early retirees. 
Thus, in one industry, early retirees are now in a separate risk pool, with premiums 30 to 40 percent higher than for active workers. These higher costs are passed on through the cost-sharing formula to early retirees who choose a non-HMO product. However, for an early retiree who selects a community-rated HMO, the cost is the same as that for an active employee. As a growing number of employers reduce or eliminate their support for retiree health benefits by scaling back premium contributions or increasing cost sharing, many affected retirees look to the individual market for coverage until they become eligible for Medicare. Also, access to affordable coverage in the individual insurance market is a concern for those 55- to 64-year-olds who have primarily relied on this market for coverage, including some of those who are self-employed and those who were guaranteed access to an individual product under HIPAA. As demonstrated by our March 1997 CPS analysis, the near elderly already rely on the individual market to a greater extent than younger Americans. However, many of the near elderly may encounter difficulty in obtaining a comprehensive plan at a reasonable price or in obtaining any plan at all. Significant differences exist between the individual and employer-sponsored health insurance markets, and these differences may have significant implications for some consumers. In the individual market, the near elderly must choose from among a number of complex products and pay for the entire cost of coverage. For employer-based coverage, the burden of selecting and paying for the products is significantly eased by employer contributions and payroll deductions. 
Although states and the federal government have undertaken a wide range of initiatives to increase access to the individual market, the ability of carriers in many states to continue to charge higher premiums to applicants who are older or who have certain health conditions may have particularly adverse effects on those aged 55 to 64. These individuals may be denied coverage, may have certain conditions or body parts excluded from coverage, or may pay premiums that are higher than the standard rate, depending on demographic characteristics or health status. Purchasing insurance through the individual market can be a complex process for even the most informed consumer. However, it may pose a considerable challenge for 55- to 64-year-olds who have previously depended on their employer for coverage. In addition to the multiple ways the near elderly may access the market, such as through agents or associations, they are confronted with products offered by dozens or even a hundred or more different carriers. Once they choose a carrier and a product, consumers must then select among a wide range of deductibles and other cost-sharing options. In our November 1996 report, we found that in the seven states we visited, consumers, including the near elderly, could choose from plans offered by as few as 7 to well over 100 carriers. While the number of carriers operating in states may vary significantly, it is important to recognize that fewer carriers do not necessarily equate to fewer choices for consumers. For example, over 140 carriers in Illinois may offer individual products, but these products are not available to all consumers because of medical underwriting. In contrast, New Jersey has 27 carriers offering one or more comprehensive products to which every individual market consumer in the state is guaranteed access. 
In contrast to employer-based group insurance, individuals may choose from multiple cost-sharing arrangements and are generally subject to relatively high out-of-pocket costs. Under employer coverage, the range of available deductibles is narrower, and total out-of-pocket costs are capped at a lower level than under most individual market products. For example, for non-HMO plans offered by medium and large employers, annual deductibles are most commonly between $100 and $300, and a significant percentage have no deductible. In contrast, annual deductibles in the individual market are commonly between $250 and $2,500. The cost-sharing arrangement selected by the consumer is a key determinant of the price of an individual insurance product, and the higher the potential for out-of-pocket expenses, the lower the premium. In November 1996, we reported that carrier and insurance department representatives thought that the level of consumer cost sharing had been increasing in recent years, reflecting consumers’ efforts to keep premiums affordable. A representative of one national carrier said that among its new enrollees in 1995, 40 percent chose $500 deductibles, 50 percent chose $1,000 deductibles, and the remaining 10 percent chose deductibles ranging from $2,500 to $10,000. Also, individual market reforms enacted in New Jersey originally limited carriers to offering only standard plans with deductibles of $150, $250, $500, or $1,000 and with prescribed ranges of cost-sharing options. An insurance department official said that because consumers showed little interest in the lower-deductible plans, New Jersey no longer offers the $150 and $250 deductible options for new individual insurance applicants. Instead, beginning on September 1, 1997, the state offers deductibles of $1,500, $2,250, $2,500, $3,000 and $4,500 in addition to the original $500 and $1,000 deductible options. In fact, the official said that consumers requested a deductible option of $5,000. 
If the $2,500 option proves to be popular, the official said the state would consider introducing plans with larger deductibles in the future. Certain aspects of the individual insurance market, such as restrictions on who may qualify for coverage and the premiums charged, can have direct implications for consumers seeking to purchase coverage, especially those who are retired but not yet eligible for Medicare. The effects of these market features are often compounded by the fact that individuals must absorb the entire cost of their health coverage, whereas employers usually pay for the majority of their employees’ coverage. A consumer may not find affordable coverage, or may find coverage only if it is conditioned upon the permanent exclusion of an existing health condition. Unlike the employer-sponsored market, where the price for group coverage is based on the risk characteristics of the entire group, premium prices in the individual markets of most states are based on the characteristics of each applicant. To determine rates in both markets, carriers commonly consider age, gender, geographic area, tobacco use, and family size. For example, on the basis of past experience, carriers anticipate that the likelihood of requiring medical care increases with age. Consequently, a 57-year-old in the individual markets of most states pays more than a 30-year-old for the same coverage. In the group market, however, this older individual would usually pay the same amount as the other members of the group, regardless of the individual’s age. Table 4.1 demonstrates for selected carriers the range in premiums charged in the individual markets of four states to applicants based solely on differences in their ages. The low end of the range represents the carrier’s premium for a 24-year-old nonsmoking male applicant, while the upper end of the range indicates the premium price charged for the same coverage to a nonsmoking male applicant aged 60.
Depending on the carrier and the plan chosen, a 60-year-old could pay more than four times as much as the younger applicant for the same coverage. Where no state or federal restrictions apply, a carrier may also evaluate the health status of each applicant to determine whether it will increase the standard premium rate, exclude a body part or an existing health condition from coverage, or deny coverage to the applicant altogether. This process is called medical underwriting. A carrier may deny coverage to applicants determined to be in poorer health and more likely to incur high medical costs. Individuals with serious health conditions such as heart disease are virtually always denied coverage. Similarly, those with such non-life-threatening conditions as chronic back pain and varicose veins may be denied coverage. The most recent declination rates for carriers with whom we spoke ranged from zero in states where guaranteed issue is required to about 23 percent. Carriers in those states that do not prohibit medical underwriting typically deny coverage to about 15 percent of all applicants. These declination rates could be understated for two reasons. First, the rates do not take into account carriers that attach riders to policies to exclude certain health conditions or carriers that charge unhealthy applicants a higher, nonstandard rate for the same coverage. Thus, although a carrier may have a low declination rate, it may attach such riders and charge higher, nonstandard premiums to a substantial number of applicants. For example, while one carrier with whom we spoke declines only about 15 percent of all individual applicants, it attaches exclusionary waivers to the policies of 38 percent of the non-HMO applicants it accepts. Thus, persons with chronic back pain, glaucoma, or diabetes may have all costs associated with the treatment of those conditions excluded from coverage.
Insurance agents are also generally aware of which carriers medically underwrite and have a sense as to whether applicants will be accepted or denied coverage. Consequently, they will often deter individuals with certain health conditions from applying for coverage from certain carriers. When this occurs, the declination rate is not an accurate indicator of the proportion of potential applicants who are ineligible for coverage. The ability of carriers in some states to underwrite applicants may have the most adverse effects on those aged 55 to 64. Because of the existence of certain health conditions, many of these individuals have retired or work only part time, and consequently, may have fewer resources with which to purchase insurance. For these individuals, carriers’ underwriting practices may often result in premiums priced prohibitively high, or even worse, denial of coverage altogether. As discussed, without state restrictions that prohibit the practice, carriers generally base premium rates on the demographic characteristics and health status of each applicant. Table 4.4 demonstrates premium price variation stemming from age differences and includes examples of what the near elderly with varying health conditions might experience in terms of availability and affordability of coverage in the individual insurance markets of these states. The baseline is the monthly premium charged to a healthy 25-year-old male. Because carriers anticipate that the likelihood of needing medical care increases with age, all carriers in the states listed except those that were prohibited by law from doing so charged higher premiums to older applicants. For example, an Arizona PPO plan costs a 25-year-old male $66 a month and a 64-year-old male $253 for the same coverage, a difference of $187. Similarly, a 64-year-old male would have paid $286 more than the 25-year-old male for a PPO product from one Illinois carrier. 
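The age-based premium differences quoted above reduce to simple difference and ratio arithmetic. The following sketch is illustrative only: the dollar figures are the Arizona PPO example cited in the text, and the helper-function names are ours, not from any carrier's rating system.

```python
# Illustrative arithmetic for age-based premium differences in the
# individual market. Dollar figures are the Arizona PPO example from
# the text; function names are hypothetical, for illustration only.

def monthly_difference(young_premium, old_premium):
    """Dollar difference per month between the older and younger applicant."""
    return old_premium - young_premium

def premium_ratio(young_premium, old_premium):
    """How many times as much the older applicant pays for identical coverage."""
    return old_premium / young_premium

# Arizona PPO: $66/month at age 25 vs. $253/month at age 64.
print(monthly_difference(66, 253))       # 187
print(round(premium_ratio(66, 253), 1))  # 3.8
```

The same computation applied to the Illinois example ($286 more per month) would show a comparable multiple, consistent with the report's observation that a 60-year-old could pay more than four times as much as a younger applicant.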
As the table indicates, all applicants in New Jersey, New York, and Vermont, regardless of age, would pay exactly the same amount for the same insurance coverage from the same carrier. The individual insurance reform legislation enacted in these states requires community rating, a system in which the cost of insuring an entire community is spread equally among all members of the community, regardless of their demographic characteristics or health status. Given the median income of the near elderly, rates in the individual market may pose an affordability problem to some. For example, the premiums for popular health insurance products in the individual markets of Colorado and Vermont are at least 10 percent and 8.4 percent, respectively, of the 1996 median family income of married near-elderly couples. By comparison, Americans under age 65 typically spent about 4 percent of household income in 1994 on health care—an amount that includes not only insurance premiums or employer-required cost sharing but also out-of-pocket expenses for copayments, deductibles, and services not covered by health insurance. (See app. V for a comparison of the affordability of premiums in the individual market with cost sharing under employer-based coverage.) Without state restrictions, carriers will also evaluate the health status of each applicant to determine whether to charge an increase over the standard premium rate, to exclude a body part or existing health condition from coverage, or to deny the applicant coverage altogether. For example, while four of the carriers automatically deny coverage to an applicant with preexisting diabetes or exclude from coverage all costs associated with treating this condition, one carrier will accept the applicant but will charge him or her a significantly higher premium to cover the higher expected costs.
Also, an applicant who had cancer within the past 3 years would almost always be denied coverage from all carriers we interviewed except those in the guaranteed-issue states of New Jersey, New York, and Vermont. In non-guaranteed-issue states, applicants who have a history of cancer or other chronic health conditions are likely to have a difficult time obtaining coverage. Since the near elderly are more likely to use medical services and develop such conditions as they grow older, they may have an even more difficult time accessing coverage in the individual markets of certain states. However, high-risk insurance pools have been created in a number of states and act as a safety net to ensure that otherwise uninsurable individuals can obtain coverage, although at a cost that is generally 125 to 200 percent of the average or standard rate charged in the individual insurance market for a comparable plan. Although the near elderly in Colorado, Illinois, and North Dakota who are denied coverage from one or more carriers may obtain coverage through the high-risk pool, they may be required to pay $316 to $638 more each month for this coverage. Arizona is the only state that we examined that did not have either guaranteed issue or a high-risk pool. The near elderly in this state, especially if they are unhealthy, are not guaranteed access to any insurance product and consequently may become uninsured. Most states and the federal government have undertaken a wide range of initiatives to increase access to the various segments of the health insurance market. While almost all states have enacted reforms designed to improve access to small employer health insurance, they have been slower to introduce similar reforms to the individual market. 
In our 1996 report, we noted that some states (1) had passed reforms designed to, among other things, improve portability, limit waiting periods for coverage of preexisting conditions, and restrict rating practices in the individual market; and (2) operated high-risk insurance pools to provide a safety net for otherwise uninsurable individuals. In addition, certain states had provided all individuals a product on an open enrollment basis through their Blue Cross and Blue Shield plan. Nevertheless, as many as six states may have none of these safeguards: no insurance rating restrictions, no operational high-risk pool open to all state residents, no insurer of last resort, and no other mechanism through which all individuals are guaranteed access to an individual insurance product. Also, a number of state and federal laws guarantee individuals leaving employer-sponsored group health plans access to continued coverage and, ultimately, to a product in the individual market. First, similar to COBRA, some states extend continuation requirements to groups of fewer than 20, and several states require carriers to offer individuals a product comparable to their group coverage on a guaranteed-issue basis. HIPAA further guarantees access to individual market coverage for eligible individuals leaving group health plans. This group-to-individual portability is available only to eligible individuals who have exhausted their available COBRA or other conversion coverage and who meet several other eligibility criteria. HIPAA, however, does not explicitly restrict the premiums carriers may charge, nor does its guarantee of coverage extend to those who have always relied on the individual market for coverage. In our 1996 report, we identified 25 states that at the end of 1995 had passed one or more reforms in an effort to improve individuals’ access to this market. Since that time, additional states have enacted reforms.
These reforms sought to restrict carriers’ efforts to limit eligibility and charge higher premiums because of an individual’s health history or demographic characteristics. We found substantial variation in the ways states approached reform in this market, although reforms commonly passed included guaranteed issue, guaranteed renewal, limitations on preexisting condition exclusions, portability, and premium rate restrictions. Among all reforms, guaranteed issue and restrictions on premium rates are provisions that most directly affect individuals’ access to this market and the affordability of the products offered to them. Guaranteed issue requires all carriers that participate in the individual market to offer at least one plan to all individuals and accept all applicants, regardless of their demographic characteristics or health status. See appendix VII for an updated summary of state initiatives to increase access to the individual market. In our 1996 report, we found that 11 states required all carriers participating in this market to guarantee-issue one or more health plans to all applicants. Since that time, we have identified an additional two states that require carriers to guarantee-issue selected products. Such a provision, however, does not necessarily guarantee coverage to all individuals on demand. To limit adverse selection, carriers in most states do not have to accept individuals who are eligible for employer or government-sponsored insurance, and in some states carriers are only required to accept applicants during a specified, and usually limited, open enrollment period. Twenty of the states that have passed some reform in the individual market included a provision in their legislation that attempts in some way to limit the amount carriers can vary premium rates or the characteristics they may use to vary these rates. This number represents an increase of 2 states (Massachusetts and South Dakota) from the 18 we previously had identified. 
Most of these states allow carriers to vary, or modify, premium rates charged to individuals within a specified range according to differences in certain demographic characteristics such as age, gender, industry (type of employment), geographic area, and use of tobacco. For example, while New Hampshire only allowed carriers to modify rates on the basis of age, South Carolina allowed carriers to use differences in age, gender, geographic area, industry, use of tobacco, occupational or avocational factors, and any additional characteristics not explicitly specified, to set premium rates. Most of the 20 states, however, limit the range over which carriers may vary rates among individual consumers. In fact, at least three of these states require carriers to community-rate their individual products, with limited or no exceptions. Under community rating, carriers establish premiums at the same level for all plan participants, regardless of their age, gender, health status, or any other demographic characteristic. See appendix VIII for a description of the rating restrictions in the states that have passed such reforms. In addition, at least 27 states have created high-risk insurance programs that act as a safety net to ensure that individuals who need coverage, including the near elderly, can obtain it. However, the cost is generally 125 to 200 percent of the average or standard rate charged in the individual insurance market for a comparable plan. To qualify for the high-risk pool, applicants usually have to demonstrate they have been rejected by at least one carrier for health reasons or have one of a number of specified health conditions. These high-risk pools, however, have historically enrolled a small number of individuals. In all but one of the states with such pools, less than 5 percent of those under age 65 with individual insurance obtain coverage through the pool. 
Only in Minnesota does enrollment in the pool approach 10 percent of the individually insured population. The relatively low enrollment in these pools may be due in part to limited funding, their expense, and a lack of public awareness. For example, California has an annual, capped appropriation to subsidize the cost of enrollees’ medical care and curtails enrollment in the program to ensure that it remains within its budget. Also, although these programs provide insurance to individuals who are otherwise uninsurable, they remain relatively expensive, and many people are simply unable to afford this coverage. In addition to the states that require all carriers to guarantee-issue at least one health plan to all individuals, the Blue Cross and Blue Shield plans in eight states and the District of Columbia offer at least one product to individuals during an annual open enrollment period, which usually lasts 30 days. Although these plans accept all applicants during the open enrollment period, they are not limited in the premium they can charge an individual applicant. For individuals not eligible for guaranteed access to individual market coverage under HIPAA, these plans may provide their only source of coverage. Our analysis also showed that at the end of 1997, six states had passed no reforms that attempted to increase the access of all persons to the individual insurance market (for example, guaranteed issue and premium rate restrictions), had no operational high-risk pool for which all individuals in the state were eligible for coverage, and had no Blues plan that acted as insurer of last resort. In these states, individuals who are unhealthy and not eligible for coverage under HIPAA, and thus most likely to need insurance coverage, may be unable to obtain it. These states are Alabama, Arizona, Delaware, Georgia, Hawaii, and Nevada. 
Through HIPAA, signed into law on August 21, 1996, the Congress sought to provide a set of minimum protections that would apply to all states and to coverage sold in all insurance markets. ERISA exempts self-insured employer group plans, which cover about 40 percent of all insured workers, from the insurance reforms passed by most states; because HIPAA established federal standards, however, its protections apply to such self-insured plans as well. HIPAA guarantees those leaving group coverage access to coverage in the individual market—“group-to-individual portability”—under certain specified circumstances. This guarantee applies to those who had at least 18 months of aggregate creditable coverage, most recently under a group plan, and without a break of more than 63 days, and who have exhausted any COBRA or conversion coverage available. Individuals who meet these criteria are eligible for guaranteed access to coverage, regardless of their health status and without the imposition of coverage exclusions for preexisting conditions. However, only about 11 percent of those who elect COBRA coverage remain enrolled for the maximum period. Furthermore, HIPAA offers no guaranteed access to the individual market for retirees whose benefits were terminated before its July 1, 1997, implementation or to those who have traditionally relied on the individual market for coverage. To meet HIPAA’s group-to-individual portability requirement, states could choose between two approaches, the “federal fallback” and “alternative mechanism” approaches. Under the federal fallback approach, which HIPAA specifies and which 13 states are using, carriers must offer eligible individuals (1) all their individual market plans, (2) their two most popular plans, or (3) two representative plans—a lower-level and a higher-level coverage option. The remaining 36 states and the District of Columbia chose an alternative mechanism under which the law allows a wide range of approaches as long as certain requirements are met.
Twenty-two states decided to use their high-risk pool as their alternative mechanism. Under the federal fallback approach, HIPAA does not explicitly limit the premium price carriers may charge eligible individuals for coverage. In fact, we recently reported that in several of the 13 states using the federal fallback approach, the premium prices charged to HIPAA-eligibles ranged from 140 to 400 percent or more of the standard premium. Similar to the experience of non-HIPAA-eligibles who rely on the individual market for coverage, carriers in the federal fallback states typically evaluate the health status of applicants and offer healthier HIPAA-eligibles access to standard products. Although these products may include a preexisting condition exclusion period, they may cost considerably less than the HIPAA product and will likely attract the healthier individuals. Unhealthy HIPAA-eligibles in these states may have access to only the guaranteed access product, and some may be charged an even higher premium on the basis of their health status. However, a similarly situated individual who was not eligible for a HIPAA product may still be denied coverage or have certain conditions excluded from coverage. So, while an early retiree whose employer eliminated coverage would typically be eligible for one of these guaranteed access products, no similar guarantees of access to coverage exist for those who historically have relied on the individual market as their sole source of coverage. These individuals may still encounter significant obstacles in their efforts to obtain an individual insurance product. In comparison, individuals in the 22 states that will use a high-risk pool as their alternative mechanism to comply with HIPAA may face less steep premium prices than those in the federal fallback states, regardless of their particular health status. 
Coverage through a high-risk pool typically costs more than standard coverage, but state laws limit the premiums carriers may charge, generally at a cost that is 125 to 200 percent of the average or standard rate charged. Although a company’s decision to offer health coverage to workers is essentially voluntary, legislation enacted in 1986 mandates the temporary continuation of employer-based benefits under certain circumstances. Such continuation coverage is known by the acronym COBRA. The mandate applies only to firms with 20 or more workers that choose to offer coverage, and the mandate ceases to apply if an employer terminates health benefits. Though available to the near elderly, COBRA was targeted at a broader group. Thus, continuation coverage extends participation in employer-based group coverage for individuals of all ages who experience a transition resulting in the loss of health benefits, such as unemployment, retirement, death of a spouse, or divorce. The legislation was enacted in response to increasing concern about the large number of Americans who lack health insurance. Those who elect COBRA are responsible for the entire premium plus a 2-percent surcharge to cover associated administrative expenses. Although the mandate does not oblige firms to share in the cost of continuation coverage—a major difference from most employer-based health benefits, which are commonly heavily subsidized—employers contend that there is an implicit subsidy because sicker, more costly individuals are likely to elect COBRA. Categories of the near elderly who might potentially benefit from continuation coverage include those who (1) are laid off, (2) experience a cutback in hours that makes them ineligible for health benefits, (3) retire, or (4) are younger spouses of individuals who become eligible for Medicare and thus relinquish employer-based health insurance for their entire family. 
An attractive feature of COBRA for the near elderly is its ability to temporarily fill the gap in coverage that exists when an employer provides health benefits to active workers but not to retirees. Moreover, COBRA may be used as a bridge to Medicare by individuals who coordinate their retirement age with the eligibility period. Because the employer is not required to pay any portion of the premium, COBRA may be an expensive alternative for the near elderly—especially since the loss in employer-based coverage is probably accompanied by a decrease in earnings. The limited information available on eligibility for and use of COBRA by Americans in general and the near elderly in particular is based on past experience and may not reflect incentives to elect and exhaust continuation coverage created by the implementation of HIPAA. Moreover, the information leaves many important questions unanswered. In general, the near elderly appear to be more likely to elect COBRA than younger age groups. Analysis of two studies that examined data from special CPS supplements suggests that COBRA use by the near elderly in 1988 and 1994 was relatively small compared with the size of this age group. On the one hand, these estimates represent the lower boundary of COBRA use by the near elderly since neither includes both retired and nonretired 55- to 64-year-olds. On the other hand, both may overestimate the use of continuation insurance, since employers have told us that some individuals only elect COBRA to receive dental or vision coverage—benefits that are not always offered to those with access to employer-based retiree health insurance. A proprietary database whose results cannot be generalized to the whole population suggests that, on average, 61- to 64-year-olds only keep continuation coverage for a year. 
Finally, although there is a strong rationale for those near elderly who lack an alternative source of coverage and who can afford the premium to elect COBRA, there is no systematically collected evidence on the extent to which such elections affect employer costs. The terms and conditions of COBRA eligibility are complex, in part because of (1) its broad scope, (2) the fact that it addresses coverage for individuals and families whose connection to an employer has been broken, and (3) the protections for enrollees built into the election process. There are two broad categories of qualifying events under COBRA, with the coverage period linked to the type of event:

Work-related. Voluntary separation, including retirement; involuntary separation other than for gross misconduct; or a decrease in the number of hours worked that results in loss of health insurance.

Family. Divorce or legal separation from, or the death of, an insured worker; Medicare entitlement for a covered employee resulting in the loss of employer-provided coverage to a dependent; or loss of dependent child status.

Generally, a work-related event provides benefits for 18 months. However, in the case of separation or reduction in hours as a result of a disability, coverage can be extended for an additional 11 months if the disability is determined under the Social Security Act and existed during the first 60 days of COBRA coverage. The cost for those additional 11 months rises from 102 percent to 150 percent of the applicable premium. Dependents are also eligible for the full 29 months of coverage. For those who qualify on the basis of family events, coverage is available for up to 36 months. Finally, in the case of multiple qualifying events, coverage is limited to 36 months. Three factors make COBRA administration complex for firms: the lack of personnel departments at smaller firms, the detachment of enrollees from the active workforce, and the election time frames and notification requirements.
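The premium rules above (102 percent of the applicable premium during the basic coverage period, rising to 150 percent during the 11-month disability extension) can be sketched as simple arithmetic. This is a minimal illustration of the rules as described in the text; the function name and the $300 example premium are ours, not from COBRA itself.

```python
# A minimal sketch of COBRA premium arithmetic as described above:
# 102 percent of the applicable premium for months 1-18, and 150 percent
# during the 11-month disability extension (months 19-29). The function
# name and example premium are hypothetical, for illustration only.

def cobra_monthly_cost(applicable_premium, month, disability_extension=False):
    """Monthly COBRA cost for a given month of enrollment (1-based)."""
    if month <= 18:
        return applicable_premium * 1.02
    if disability_extension and month <= 29:
        return applicable_premium * 1.50
    raise ValueError("coverage period exhausted")

# Example with a hypothetical $300/month applicable premium:
print(cobra_monthly_cost(300, 12))                             # 306.0
print(cobra_monthly_cost(300, 24, disability_extension=True))  # 450.0
```

Because the enrollee bears the full premium plus the surcharge, even a modest group rate translates into a substantial monthly outlay for a retiree whose earnings have dropped, which is the affordability concern the report raises.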
COBRA eligibility rules must be implemented not only by large firms with established personnel and benefit staffs but also by small businesses where benefit management may be an ancillary duty. Further complicating administration of COBRA is the fact that firms must create systems and procedures for individuals who are no longer on their payroll and who may be more difficult to contact than an employee who reports for work. For example, payroll deductions, the typical means of collecting an employee’s share of the health insurance premium, are not an option for a former worker. Finally, the terms under which an employer must proffer continuation coverage add to the administrative burden. The employer has 14 days to notify individuals that they qualify for COBRA. After notification of eligibility, an individual has 60 days to elect coverage and, after electing, 45 days to make the retroactive premium payment—paying for coverage of health services that the enrollee may already have accessed up to 4 months earlier. As discussed in the following section, some employers are concerned that these election time frames contribute to adverse selection. More than 10 years after the establishment of continuation coverage on a nationwide basis, there is a dearth of systematically collected data on (1) how many individuals are eligible, (2) how many enroll, (3) the demographic characteristics of those who elect coverage, or (4) the average health care costs of COBRA enrollees. Since eligibility is not conditioned on age, the handful of studies on COBRA often examine its use in general rather than focusing on the near elderly. Information from CobraServe, a third-party COBRA administrator, provides insights on the election rates of retirees who become eligible for COBRA compared with younger age groups, but the data are not nationally representative.
The only nationally representative data on the use of COBRA by the near elderly are special supplements to the CPS conducted in 1988 and again in 1994. However, because they used different methodologies, the two studies based on these data provide only a rough estimate of COBRA use by 55- to 64-year-olds. According to the CobraServe data, the near elderly appear to be more likely than other age groups to elect COBRA, but the number doing so is relatively small. About 10 percent of the over one-half million workers in the database became eligible for COBRA between October 1, 1990, and September 30, 1991, and approximately 21 percent enrolled. We presumed enrollees to be near elderly if they elected coverage at retirement or when a spouse became eligible for Medicare. Using these assumptions, approximately 1,600 of the 12,536 enrollees were near elderly. The election rates of the near elderly were high—33 percent for retirees and 60 percent for spouses of those who became eligible for Medicare. However, the actual number of near-elderly enrollees was small. For example, only 196 individuals elected COBRA because a spouse became eligible for Medicare. Overall, the election rate of those aged 61 and older was 38 percent, while the election rate for those under age 40 was 17 percent. In addition, from 1987 to 1991, older individuals remained enrolled for longer periods. The 61- to 64-year-olds used COBRA for an average of 12 months—4 months longer than those aged 41 to 60. Only 11 percent of all beneficiaries remained enrolled for the full 18 to 36 months allowed. Several hypotheses can be offered for the higher election rates by older, compared with younger, individuals. First, the near elderly may be more willing to sacrifice current income to pay the insurance premium, given their greater medical needs. Second, younger workers may have access to health insurance through another family member.
Finally, the longer enrollment periods of older individuals suggest that they are less likely than younger Americans to obtain other employment. Analysis of two studies that examined data from special CPS supplements suggests that COBRA use by the near elderly in 1988 and 1994 was relatively small compared with the size of this age group. On the one hand, these estimates represent a lower boundary of estimated COBRA use by the near elderly, since neither study includes both retired and nonretired 55- to 64-year-olds. On the other hand, both may overestimate the use of continuation insurance since employers have told us that some individuals only elect COBRA to receive dental or vision coverage—benefits that are not always offered to those with access to employer-based retiree health insurance. For those who were not retired in 1988 and whose continuation coverage lasted for no more than 36 months, an estimated 443,000 were enrolled in COBRA—about 2 percent of the near elderly. Among those who were retired in 1994 and whose continuation coverage was for no more than 18 months, an estimated 65,000 used COBRA—about 1.5 percent of the 4.4 million retirees in 1994. Employers believe that per capita costs for COBRA enrollees are higher than those for active workers because of adverse risk selection—the propensity of sicker individuals with greater health care costs to elect coverage. Even though the enrollee typically pays the full premium plus an administrative surcharge, employers contend that there is an implicit subsidy in continuation coverage because enrollee costs typically exceed that premium, raising average costs per enrollee. Notwithstanding the concern about higher costs as a result of the COBRA mandate, few employers appear to collect data to substantiate their concerns. Some employers told us that they believe such efforts would be fruitless because COBRA is unlikely to change—in fact, legislative interest appears to be focused on COBRA expansions.
And employers point out that demonstrating adverse selection is made all the more difficult by the enrollment growth in capitated health plans, which often lack the claims data necessary to compute average costs for those who elect COBRA. Logic suggests that adverse risk selection, a well-recognized factor in the individual insurance market, may be encouraged by the terms and conditions established for continuation coverage. At the same time, the fact that risk-averse individuals may elect coverage is also relevant to predicting employer costs. The election of COBRA coverage by the near elderly in the absence of other insurance alternatives may, in some instances, reflect an antipathy to living without health insurance, given their greater risk of illness. Since COBRA election is associated with turnover, the demographics of a firm or industry will also have a significant impact on COBRA costs. Taking all these factors into consideration, some analysts have suggested that it is not possible to predict whether COBRA will lead to higher or lower net costs for an employer. The limited quantitative data available tend to highlight the random nature of the high costs often attributed to COBRA. COBRA is an adjunct to employer-based group coverage, but its incentive structure may have more in common with the operation of the individual insurance market. Table 5.1 compares the characteristics of group and individual coverage. While the purchase of an individual health insurance policy is purely voluntary, coverage in the group market is tied to employment. Group insurance rates are often considerably lower than rates in the individual market where, absent state reforms prohibiting the practice, premiums usually reflect the demographic and health characteristics of the purchaser. In contrast to individual rates, employer-based costs typically reflect the experience of the entire group. 
Thus, there is an inverse relationship between group size and the impact of employees with high health care costs: the larger the group, the smaller the impact (see table 5.2). From the perspective of individuals contemplating the purchase of continuation coverage, the absence of an employer subsidy places COBRA on a par with individual insurance: it is similarly expensive, though typically cheaper than an individual policy, because COBRA permits enrollees to maintain the group rate. In summary, the high cost and voluntary nature of COBRA suggest that individuals will go through a personal calculus in deciding whether to elect coverage: Individuals whose expected medical expenses exceed the premium are more likely to elect continuation coverage. Some evidence suggests, however, that factors other than expected medical expenses play a role in who elects COBRA. Thus, some individuals may be risk-averse and willing to pay the high cost of continuation coverage. The near elderly might well be expected to fall into this category. Anecdotal evidence from employers suggests that parents whose children lose dependent child status may also be risk-averse. The health benefit manager at one large company told us that the firm’s well-educated employees understand the value of health benefits, the randomness of catastrophic illness, and the financial consequences of being uninsured. Many of the firm’s COBRA elections are young adults who lose health benefits under their parents’ company policy when they graduate from college. The benefit manager at another firm told us that the COBRA premiums for her son who had just graduated from college were very high but that the financial risk of going without coverage was more worrisome to her than the cost. The CobraServe database referenced earlier indicates that election rates for loss of dependent child status are as high as those for retirees. COBRA election is also influenced by affordability considerations.
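The inverse relationship between group size and the impact of a high-cost enrollee can be shown with simple arithmetic. In the sketch below, the $3,820 baseline and the single $150,000 claim are hypothetical figures chosen to mirror amounts cited elsewhere in this chapter, not values from table 5.2.

```python
# Illustration of the group-size effect: one catastrophic claim spread
# over groups of different sizes. Both dollar figures are hypothetical,
# echoing amounts mentioned elsewhere in the chapter.

baseline = 3_820.0       # hypothetical per-person annual cost
high_claim = 150_000.0   # one catastrophic claim

for group_size in (10, 100, 1_000, 10_000):
    per_capita_bump = high_claim / group_size
    pct = per_capita_bump / baseline
    print(f"group of {group_size:6d}: +${per_capita_bump:9,.0f} per person "
          f"({pct:.0%} of baseline)")
```

A single $150,000 claim raises per-person costs by $15,000 in a 10-person group but by only $15 in a 10,000-person group, which is why large groups absorb high-cost cases that would be ruinous to a small firm's premium.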
Since COBRA does not require employers to subsidize the premium, the enrollee is generally responsible for paying the full cost of coverage. For 1997, Mercer/Foster Higgins reported that, on average, the total annual premium for employer-based coverage for an active employee was $3,820. This average cost would represent an enormous increase in out-of-pocket costs for a COBRA enrollee, considering that large employers typically contribute 70 to 80 percent of the premium for active workers. However, aggregate premium data hide the considerable variation in health benefit costs across firms and thus the potential expense to COBRA enrollees. Firm size, benefit structure, locale, and aggressiveness in negotiating rates all affect a company’s health care premiums. At one large, New England-based firm that does not negotiate with health plans but rather accepts a community rate for HMO coverage, we were told that the full premium for family coverage was approximately $5,000 per year; in contrast, the company’s indemnity plan would cost a COBRA enrollee about $12,000 annually. According to the firm’s benefit manager, an individual enrolled in the indemnity plan who became eligible to elect COBRA would not be allowed to select the less expensive HMO option until the next annual open enrollment period. The full premium for family coverage for retiree health plans offered by the Milwaukee-based Pabst Brewing Company ranged from about $5,646 to $7,933 per year. In 1996, Pabst terminated health benefits for 750 early retirees. Since Pabst had paid the total cost of practically all of the health plans it offered to retired workers, the COBRA cost would have come as a rude awakening to affected retirees. Assuming an obligation for such high premiums occurs at a time when individuals eligible for COBRA are undergoing a transition—a transition that may be associated with a reduction in family income. 
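The jump in out-of-pocket cost described above can be made concrete. The sketch below uses the 1997 Mercer/Foster Higgins average premium and the 70 to 80 percent employer contribution range cited in the text; the 102 percent figure reflects the administrative surcharge COBRA permits employers to add to the full premium.

```python
# Illustrative out-of-pocket comparison for a COBRA enrollee versus an
# active worker, using the 1997 average premium cited in the text.

premium = 3_820.0  # average annual employer-based premium for an active employee, 1997

# Active workers at large employers typically pay 20-30% of the premium.
active_share_low = premium * (1 - 0.80)
active_share_high = premium * (1 - 0.70)

# A COBRA enrollee pays the full premium plus up to a 2% administrative surcharge.
cobra_cost = premium * 1.02

print(f"active worker's annual share: ${active_share_low:,.0f} to ${active_share_high:,.0f}")
print(f"COBRA enrollee's annual cost: ${cobra_cost:,.0f}")
print(f"increase over the high end of the active share: {cobra_cost / active_share_high:.1f}x")
```

Even at the high end of typical employee cost sharing, electing COBRA more than triples an individual's annual premium outlay.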
As a result, Marquis and Long hypothesized that COBRA participation will rise with age because of higher liquid assets and because of the need to protect those assets from potentially high health expenditures. Since the cost of COBRA coverage is associated with a particular firm, the demographic profile of a company will affect both its average health care expenditures and the costs associated with COBRA. Thus, a firm with an older workforce that does not offer retiree health benefits or a company with a large number of women in their childbearing years might expect to incur higher expenditures than a firm consisting of young, healthy males. And the number of COBRA enrollees who actually do become pregnant or suffer from an expensive illness associated with old age will raise an employer’s average health insurance costs. There are only limited quantitative data on adverse selection attributable to COBRA. Though this evidence suggests that COBRA enrollees are on average more expensive than active employees, it is insufficient support for a generalizable conclusion. Instead, the evidence tends to underscore the randomness of high-cost cases at a particular firm and the relationship between the demographics of a firm and the number of high-cost cases they experience. Marquis and Long analyzed the cost of individuals who elected continuation coverage at three different firms. Their study found that costs for COBRA enrollees were higher than for active employees in all three plans by amounts ranging from 32 to 224 percent. Adjusting these costs for the demographic characteristics of participants, however, shows that health risk is not always higher among COBRA enrollees. Thus, in one of the firms, the higher cost of COBRA continuation coverage was entirely attributable to demographic differences, especially the much higher proportion of women among enrollees. 
Adjusting for those differences, COBRA enrollees actually had somewhat lower levels of health care spending than active workers. At a second firm, demographic differences, including the older age of COBRA enrollees, did not explain the higher costs, indicating that those on continuation coverage were indeed poorer health risks than the company’s active employees. In addition, Spencer, a Chicago-based benefit consulting firm, has conducted a survey of COBRA costs and experience among a small sample of firms since 1989. Unlike the Marquis and Long analysis, Spencer does not attempt to distinguish between the impact of health risk and demographics on firms’ costs. Among its limitations, the survey sample is not random and only about 5 percent of firms contacted responded to the questionnaire. The respondents include a mix of small, medium, and large companies with no apparent oversampling of smaller firms, whose size would magnify the impact of adverse selection on their future premiums. Of the limited number of questionnaires returned in 1997 (191), fewer than one-half were able to supply cost data, and six very large employers represented 71 percent of the total COBRA elections. The survey has consistently shown that (1) costs vary radically and unpredictably among employers; and (2) overall, the costs of COBRA enrollees are higher than those of active workers. Since 1991, average COBRA costs have hovered at about 150 percent of active employee costs. The official responsible for the survey told us that he is constantly struck by the randomness of an individual firm’s experience from year to year. Thus, a firm could have 10 COBRA elections during a year and no claims, or one election and $150,000 in associated medical expenditures. In 1997, about 25 percent of respondents reported that COBRA costs were lower than for active workers, while 75 percent reported that COBRA costs were higher. 
Forecasting the insurance status of future generations of near elderly is inherently risky. Since it is not entirely clear why employers are continuing to reassess their commitment to retiree health insurance, it is possible that unforeseen developments will halt or even reverse the erosion that has occurred over the past decade. Among potential scenarios that could affect the incentives for both employers and near-elderly individuals are (1) a tightening of labor markets as a result of having a smaller active labor force or a low unemployment rate, (2) changes in the tax treatment of retirement income, and (3) a postponement of retirement because of insufficient postretirement income. In addition to events that could affect the erosion in employer-based retiree coverage, use of the HIPAA guaranteed-access provision by eligible individuals may improve entry into the individual market for those with preexisting health conditions who lack an alternative way to obtain a comprehensive benefits package. Depending on the manner in which each state has chosen to implement HIPAA, however, cost may remain an impediment to such entry. Since group-to-individual portability is only available to qualified individuals who exhaust available COBRA or other conversion coverage, HIPAA may lead to an increased use of employer-based continuation insurance. Moreover, additional state reforms of the individual market may improve access and affordability for those who have never had group coverage or who fail to qualify for portability under HIPAA rules. Despite the possibility of countervailing trends, however, the evidence available today suggests that future generations of retirees are less likely to be offered health benefits when they leave the active workforce. 
With the number of 55- to 64-year-olds estimated to grow from 8 percent of the population today to 13 percent by 2020, the result, in the absence of affordable and accessible alternatives, could be an increase in the number of uninsured near-elderly Americans. At the same time, the evidence also suggests that those with continued access to employer-based retiree health coverage will shoulder more—in some instances significantly more—of the financial burden. Compared with premiums in the individual market, the typical cost-sharing requirements faced by retirees with employer-based coverage today do not appear to be greatly out of line with those faced by active employees. However, cost-sharing policies being implemented by some firms could eventually create affordability problems for those who retain access to employer-based coverage. If more firms base their financial contribution to retiree coverage on years of service, workers who change jobs frequently throughout their careers may find the employer subsidy small in relation to the overall premium. Some experts suggest that the traditional employer-employee contract has already been fundamentally altered, with both parties less likely to view the work contract as a lifelong arrangement. A major unknown that could also affect the continued commitment of employers to retiree coverage is the federal government’s response to the Medicare financing problem—a dilemma created by the imminent retirement of the baby-boom generation. Experts are divided about the impact on employer-based coverage of actions that shift costs to the private sector, such as increasing the eligibility age for Medicare. In responding to Medicare’s financial crisis, policymakers need to be aware of the potential for the unintended consequences of their actions. The Employee Retirement Income Security Act of 1974 (ERISA) covers both the pension and health benefits of most private sector workers.
The voluntary nature of these employer-based benefits as well as the manner in which coverage is funded has important regulatory implications. Consistent with the lack of any mandate to provide health benefits, nothing in federal law requires an employer to offer coverage or prevents cutting or eliminating those benefits. In fact, an employer’s freedom to modify the conditions of coverage or to terminate benefits is a defining characteristic of America’s voluntary, employer-based system of health insurance. Moreover, employer-based health benefits are typically funded on a pay-as-you-go basis. In contrast, the sheer magnitude of accumulated employer-employee contributions to retirement funds demonstrates the importance of greater regulation of pension benefits. Thus, ERISA not only requires employers to fund their pension plans but gives employees vested rights upon meeting certain service requirements. Health benefits, on the other hand, are excluded from such funding and vesting requirements. Although ERISA was passed in response to concerns about the solvency and security of pension plans, some of its provisions, including federal preemption of state regulations, also apply to employer-sponsored health coverage. The preemption effectively blocks states from directly regulating most employer-based health plans, while allowing states to oversee the operation of health insurers. ERISA does, however, impose some federal requirements on employer-based health plans. For example, employers must (1) provide participants and beneficiaries access to information about the plan, (2) have a process for appealing claim denials, (3) make available temporary continuation coverage for former employees, and (4) meet specific fiduciary obligations. While ERISA protects the pension benefits of retired workers at U.S. companies, it offers only limited federal safeguards to retirees participating in a firm’s health benefit plan.
ERISA requires companies to make a Summary Plan Description (SPD) available to health plan participants within 90 days of enrolling. For retirees, the SPD that was in effect at the time of retirement is typically the controlling document. The SPD must clearly set out employee rights, including “information concerning the provisions of the plan which govern the circumstances under which the plan may be terminated.” According to Labor, employers are free to cut or terminate health care coverage unless they (1) have made a clear promise of specific health benefits for a definite period of time or for life and (2) have not reserved the right to change those benefits. However, the recent decision in the 1989 case brought by General Motors salaried retirees may call into question any commitment by employers to provide previously promised retiree health benefits. We examined a number of public and proprietary surveys that include information on the near elderly, such as their (1) demographic characteristics and access to insurance; (2) ability to obtain retiree health insurance through a former employer; and (3) likelihood of experiencing certain medical conditions, use of services, and levels of health care expenditures. The surveys we relied on were broad and current, and allowed the most precise estimates. Information on the demographic characteristics of the near elderly and their access to insurance is available through the following national surveys either conducted or financed by the federal government: (1) the March supplement of the Current Population Survey (CPS), (2) the Survey of Income and Program Participation (SIPP), (3) the August 1988 and September 1994 supplements to the CPS, (4) the National Medical Expenditure Survey (NMES), (5) the Medical Expenditure Panel Survey (MEPS), and (6) the Health and Retirement Survey (HRS). Table II.1 compares selected aspects of these six surveys.

Table II.1: Selected Characteristics of Surveys Used in Our Analysis

CPS, March supplement. Design: nationally representative, cross-sectional. Sample: about 54,000 eligible households/100,000 people for the 1997 supplement; Hispanics are oversampled. Response rate: about 90% of the individuals.

SIPP. Design: continuing survey with respondents interviewed every 4 months; a continuous series of nationally representative panels, each from 2.5 to 4 years (as of 1996, each panel is 4 years); new panels established annually. Sample: typically about 14,000 to 20,000 households; for the 1996 panel, about 36,700 households/77,000 people. Response rate: about 74% for the 1993 panel.

CPS, August 1988 and September 1994 supplements. Design: nationally representative, cross-sectional. Sample: about 56,000 households in 1988, and about 57,000 households in 1994. Response rate: about 95% in 1988 and about 94% in 1994.

NMES. Design: nationally representative panel lasting about 16 months; succeeded by MEPS. Sample: about 14,000 households/35,000 people. Response rate: about 72%.

MEPS. Design: nationally representative overlapping panels, each lasting about 2.5 years. Sample: about 10,500 households, or 25,000 people; blacks and Hispanics are oversampled; overall sample increased every 5 years. Response rate: about 78% for round 1.

HRS. Design: nationally representative panel of 51- to 61-year-olds and their spouses as of 1992; respondents interviewed every 2 years; panel ongoing. Sample: over about 7,600 households/12,600 people; Hispanics, blacks, and Florida residents are oversampled. Response rate: about 82% for wave I and 93% of wave I members for wave II.

As a result of its breadth, currency, and precision, we relied on the March 1997 CPS supplement for our analysis of the demographic and insurance status of the near elderly. The March supplement is based on a sample of about 54,000 households with approximately 100,000 individuals. As shown in table II.1, the CPS is one of the largest surveys and allows comparisons of the insurance and demographic characteristics of 55- to 64-year-olds and younger age groups. It also allowed us to make observations about two subgroups—those aged 55 to 61 and 62 to 64. It is among the surveys with the most current data and addresses health status and income, categories not covered by some of the other surveys.
The CPS is based on a sample designed to be nationally representative of the civilian noninstitutional population of the United States. As a result, any estimates about that population are subject to sampling errors. To minimize the chances of citing differences that could be attributable to sampling errors, we highlight only those differences that are statistically significant at the 0.05 level. In addition to sampling errors, another source of variability that affects the interpretation and quality of survey data is the coverage and response rates. The coverage ratio is a measure of the extent to which persons are represented in the sample according to demographic characteristics such as age or race. For the age groups reported in our study, these ratios ranged from 0.855 to 0.998. The response rate for the CPS is an overall measure of the extent to which houses and persons selected for the sample are actually represented in the sample of respondents. For the March 1997 CPS, the response rate was about 90 percent. This response rate is reasonable and somewhat higher than for most of the other surveys. A major difference between the CPS March supplement and surveys such as SIPP and HRS is that the latter are designed to follow a group of respondents (often referred to as a “panel” of individuals) over a period of time—2-1/2 to 4 years for the SIPP and 10 to 12 years for HRS—while the CPS is primarily designed to be cross-sectional, largely focusing on the 12 months preceding the interview. As a result, we did not use the CPS to directly measure how the health, income, and insurance status of individuals or groups change over time. To better understand the estimates we reported in chapter 2, it is important to be aware of how some of the CPS questions are worded and the responses categorized. The following explains four categories of questions. Insurance Status. 
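The 0.05 significance screen described above can be illustrated with a standard two-proportion z-test. This is a sketch under a simple-random-sampling assumption; the CPS's complex design inflates true standard errors, so this formula understates them. The 14 and 12 percent rates and the sample sizes below are hypothetical.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """z statistic for the difference of two independent proportions
    (simple-random-sample formula; CPS design effects would enlarge
    the actual standard errors)."""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical example: uninsured rates of 14% vs. 12% in two subsamples
z = two_proportion_z(0.14, 5_000, 0.12, 5_000)
significant = abs(z) > 1.96  # two-sided test at the 0.05 level
print(f"z = {z:.2f}; significant at the 0.05 level: {significant}")
# → z = 2.97; significant at the 0.05 level: True
```

With 5,000 observations per group, even a 2-percentage-point difference clears the 1.96 threshold; with much smaller subgroups the same difference would not, which is why only statistically significant differences are highlighted in the text.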
The CPS questions that we used to determine insurance status ask whether respondents were covered through various sources of insurance (for example, employer-based, individual, and Medicare). However, they do not ask for the length of coverage or whether the individual was covered through these sources at the time of the interview. Thus, the results of these questions overestimate the size of the insured population because respondents are considered insured for the entire year if they were insured at all during the preceding 12-month period—regardless of their insurance status at the time of the interview or the length of time they were insured. Conversely, the wording of these questions produces an underestimate of the uninsured population because, regardless of their insurance status at the time of the interview, a respondent must have been uninsured for the entire year to be categorized as uninsured. Some people may receive coverage from several sources. To avoid double counting, we prioritized the source of coverage reported by the CPS. For our analysis, employment-based coverage was considered primary to other sources of coverage, and respondents were classified as having employment-based coverage even if they also had other types of coverage. The other types of health insurance coverage were prioritized in the following order: Medicare, Medicaid, military/veterans, and individual insurance. Also, with respect to coverage through the individual insurance market, the CPS questionnaire does not distinguish between comprehensive and more limited policies that are available. Employment Status. The CPS questions that we used for employment status are similar to those on insurance status. Specifically, respondents are considered employed if they worked at all in the year, and not employed only if they did not work at all during the past 12 months. As a result, these questions overestimate the employed population and underestimate the number who did not work. 
Health Status. The CPS asks respondents to categorize their health as excellent, very good, good, fair, or poor. The question is worded in the present tense and implies an answer relating to the respondent’s health at the time of the interview. In our analysis, however, we correlated health status with other characteristics such as employment and insurance status, which, as noted, had a different temporal context. In general, poor health equated to a weakened workforce attachment and to an increased likelihood of having public coverage or being uninsured. To the extent a respondent’s health status at the time of the interview differed from that during the preceding 12 months, the relationship between the two variables is weakened. Consequently, when we report differences in employment or insurance status relative to health status, we are probably underestimating the extent to which the latter has affected these other characteristics. Income Status. The gross income data we report overstates the amount of disposable income available to nonelderly Americans because it does not take into account the taxes they must pay. On the other hand, income alone is an incomplete measure of wealth and the ability of individuals to afford individual market premiums or employer-imposed cost sharing. Although the inclusion of assets such as homes, investments, and savings would provide a more comprehensive measure of affordability, such data are not available through the CPS. Moreover, income comparisons between different age groups are complicated by differences in family size and financial obligations. For example, a married couple in their thirties with several children and a mortgage may earn more than a near-elderly couple whose children are grown and who own their home, but their financial obligations are clearly not comparable. And the younger couple may have fewer assets, other than current income, on which to draw. 
Information on the extent to which employers offer health coverage to retirees as well as the conditions under which coverage is made available is captured in private surveys conducted by benefit consultants. The Foster Higgins and KPMG Peat Marwick employer surveys are based on random samples with results that can be generalized to a larger population of employers. Neither survey reports information on the precision of its estimates. Other employer surveys we examined are based on a sample of clients, which statistically limits the results to that client base. In general, we report data from the Foster Higgins and KPMG Peat Marwick surveys. However, these two surveys did not always capture important changes in the conditions under which retiree health benefits are made available. Thus, we occasionally include information from client-based surveys but note that the latter must be used cautiously since they are not generalizable. In addition to proprietary surveys, some information on employer-based retiree health benefits is also available from a biennial survey conducted by the Bureau of Labor Statistics (BLS) and from special supplements to the CPS. Although the BLS survey is based on a sample that can be generalized to a larger population, the sample focuses on establishments rather than unique firms. Thus, different branches or offices of the same firm could be included in the sample. Moreover, rather than reporting the number of establishments that offer retiree coverage, the results are presented in terms of how many workers have access to retiree health benefits. In contrast to the firms and establishments surveyed by benefit consultants or BLS, the unit of analysis for the CPS supplements is individuals. These individuals were asked whether they continued employer-based coverage at retirement or later during retirement and to identify the reason they discontinued coverage. 
In 1994, “retiree coverage not offered by employer” was added to the list of reasons, but it was not used in the 1988 questionnaire. Table II.2 compares selected characteristics across three employer surveys. Characteristics of the August and September CPS supplements are included in table II.1.

Table II.2: Characteristics of Employer Surveys Used in Our Analysis

Foster Higgins. First conducted: 1986, although results before 1993 are not comparable to later surveys, which were based on random samples; most recent survey in 1997. Design: stratified random sample of public and private employers with 10 or more workers. Sample size: 3,676 in 1993; 3,156 in 1997. Response rate: 78 percent in 1993; 50 percent in 1997.

KPMG Peat Marwick. First conducted: 1991; data on trends in retiree health coverage were reported for 1991-93, 1995, and 1997. Design: stratified random sample of public and private employers with 200 or more workers. Sample size: about 1,800 in 1993; 2,500 in 1997. Response rate: 55 percent in 1993; 60 percent in 1997.

BLS. First conducted: 1980; conducted annually, with small establishments surveyed in 1 year and medium and large establishments surveyed in the next year; most recent data from 1995. Design: two-stage probability sample of establishments and occupations; establishments with 100 employees or more are selected for the survey of medium and large private establishments, while establishments with fewer than 100 employees and state and local governments are selected for the survey of small establishments. Sample size: 3,447 medium/large establishments in 1993 and 3,092 small establishments in 1994. Response rate: about 67 percent in 1993; about 70 percent in 1994.

We obtained information on the prevalence of health conditions, and health care expenditures and use, from surveys conducted by the National Center for Health Statistics (NCHS) and the Agency for Health Care Policy and Research (AHCPR).
Specifically, we used the (1) 1994 National Health Interview Survey (NHIS) for the prevalence of health conditions, (2) 1994 National Hospital Discharge Survey (NHDS) for the number of hospital discharges and days of care, (3) 1996 National Hospital Ambulatory Medical Care Survey (NHAMCS) for the number of visits to emergency rooms and outpatient departments, (4) 1996 National Ambulatory Medical Care Survey (NAMCS) for the number of physician office visits, and (5) 1987 NMES for health care expenditures. The NMES data we reported were “aged” by AHCPR to represent 1998 dollars. Table II.3 compares selected characteristics for the NHIS, NHDS, NHAMCS, and NAMCS. Information on the NMES was reported in table II.1.

Table II.3: Selected Characteristics of Surveys on Health Conditions, Expenditures, and Utilization

NHIS. Design: a national multistage probability design with continuous weekly samples so that each is representative of the target population and additive over time. Sample: 49,000 households with 127,000 people; blacks are oversampled.

NHDS. Design: a national multistage probability design based on primary sampling units (PSU) used in the NHIS, hospitals within the PSUs, and a systematic random sample of inpatient records; also, all hospitals with 1,000 beds or more or 40,000 discharges or more annually are included in the sample. Sample: 512 hospitals. Response rate: 93%, representing 277,000 discharge records from 478 respondents.

NHAMCS. Design: a national multistage probability design based on PSUs, hospitals within these PSUs, emergency rooms and clinics within outpatient departments, and patient visits. Sample: 486 hospitals, of which 438 had an emergency room or outpatient department. Response rate: 95%, representing 21,092 emergency room records and 29,806 outpatient department records.

NAMCS. Design: a national multistage probability sample based on PSUs, physician practices in those PSUs, and patient visits. Sample: 3,000 physicians, of which 2,142 were eligible. Response rate: 70%, representing 29,805 patient record forms.
Insurance status—numbers in millions (percent) 0.34 (2.4%) 0.23 (12.3%) 0.85 (38.5%) 0.72 (24.0%) $10,000 - $19,999 1.20 (8.5%) 0.32 (17.5%) 0.67 (30.3%) 0.66 (22.3%) $20,000 - $49,999 5.31 (37.8%) 0.76 (41.3%) 0.46 (21.0%) 0.97 (32.5%) $50,000 - $74,999 3.32 (23.6%) 0.27 (14.6%) 0.17 (7.5%) 0.36 (12.2%) 3.87 (27.6%) 0.27 (14.4%) 0.06 (2.8%) 0.27 (9.0%) 7.06 (50.3%) 0.76 (41.3%) 0.95 (43.2%) 1.26 (42.5%) 6.97 (49.7%) 1.08 (58.7%) 1.25 (56.8%) 1.71 (57.5%) 0.69 (4.9%) 0.18 (9.6%) 0.31 (14.3%) 0.26 (8.8%) 0.91 (6.5%) 0.17 (9.0%) 0.30 (13.6%) 0.30 (10.0%) 11.75 (83.7%) 1.61 (87.0%) 1.35 (61.2%) 1.84 (62.0%) 1.17 (8.3%) 0.89 (4.8%) 0.43 (19.4%) 0.40 (13.6%) 0.69 (4.9%) 0.82 (4.4%) 0.31 (14.0%) 0.51 (17.3%) 0.42 (3.0%) 0.68 (3.7%) 0.12 (5.3%) 0.21 (7.1%) 7.44 (53.0%) 0.68 (36.8%) 0.08 (3.6%) 0.83 (27.8%) 3.08 (22.0%) 0.55 (29.8%) 0.21 (9.3%) 0.76 (25.7%) 3.51 (25.0%) 0.62 (33.4%) 1.92 (87.1%) 1.38 (46.5%) 7.59 (54.1%) 0.91 (49.4%) 0.26 (11.6%) 1.08 (36.4%) 4.36 (31.1%) 0.65 (35.2%) 0.43 (19.5%) 1.02 (34.4%) 2.08 (14.8%) 0.28 (15.4%) 1.52 (68.9%) 0.87 (29.2%) Table III.2 displays the characteristics of three subgroups of the near elderly: (1) 55- to 61-year-olds, (2) 62- to 64-year-olds, and (3) 62- to 64-year-olds who elected Social Security benefits at a reduced annuity. The estimated numbers of individuals in these three subgroups are 15.7 million, 5.8 million, and 3.0 million, respectively. As mentioned in chapter 2, just over one-half of those eligible elected Social Security before age 65. 
Number with characteristic (percent) 62- to 64-year-olds with reduced Social Security annuity 1.51 (9.6%) 0.68 (11.8%) 0.33 (11.3%) $10,000 - $19,999 1.95 (12.5%) 0.96 (16.5%) 0.64 (21.7%) $20,000 - $49,999 5.41 (34.6%) 2.28 (39.2%) 1.32 (44.5%) $50,000 - $74,999 3.18 (20.3%) 0.99 (17.1%) 0.43 (14.3%) 3.62 (23.1%) 0.89 (15.4%) 0.24 (8.2%) 7.53 (48.1%) 2.73 (47.1%) 1.31 (44.4%) 8.14 (51.9%) 3.07 (52.9%) 1.65 (55.6%) 0.89 (5.7%) 0.59 (10.2%) 0.36 (12.1%) 1.32 (8.4%) 0.37 (6.4%) 0.18 (6.0%) 12.34 (78.7%) 4.57 (78.8%) 2.43 (82.0%) 1.56 (9.9%) 0.56 (9.6%) 0.28 (9.5%) 1.2 (7.6%) 0.42 (7.2%) 0.18 (6.2%) 0.57 (3.7%) 0.25 (4.3%) 0.07 (2.3%) 7.61 (48.6%) 1.53 (26.4%) 0.16 (5.3%) 3.43 (21.9%) 1.29 (22.2%) 0.74 (24.9%) 4.63 (29.6%) 2.98 (51.4%) 2.07 (69.8%) 10.57 (67.4%) 3.46 (59.6%) 1.54 (52.1%) 1.26 (8.0%) 0.59 (10.1%) 0.36 (12.0%) 0.76 (4.9%) 0.50 (8.5%) 0.43 (14.7%) 0.72 (4.6%) 0.23 (4.0%) 0.11 (3.8%) 0.29 (1.9%) 0.12 (2.1%) 0.07 (2.4%) 2.07 (13.2%) 0.90 (15.5%) 0.44 (15.0%) 7.65 (48.8%) 2.37 (40.8%) 1.05 (35.3%) 4.65 (29.6%) 1.95 (33.5%) 0.98 (33.1%) 3.37 (21.5%) 1.49 (25.6%) 0.94 (31.6%) Analysts have attempted to show that access to health benefits is an important factor influencing the retirement decision. It is not difficult to imagine an individual in poor health continuing to work to maintain access to employer-based benefits that are not available to retirees. Similarly, it appears that the near elderly would be averse to leaving the workforce without health benefits. But does the availability of coverage actually encourage retirement earlier than it might otherwise occur? Despite the limitations of most studies, they all agree that there is a positive correlation between access to health benefits and the retirement decision. However, they disagree, often substantially, on the extent of the impact, suggesting a need for additional empirical research. 
First, a 1993 study by Hurd and McGarry found that the availability of retiree health insurance at least partly funded by the employer reduced the probability that an individual would be working full time after age 62 by between 18 and 24 percent. Second, a 1994 study by Karoly and Rogowski found that the availability of postretirement health benefits would increase the probability of men retiring early by 50 percent. However, their study may overestimate the effect because the availability of retiree health insurance was imputed, and the imputed measure may be correlated with determinants of retirement other than health insurance, such as pension plan provisions. Third, using a life-cycle model of retirement that incorporates the value of retiree health benefits and also includes information on pension accruals, Gustman and Steinmeier found that employer-based coverage lowers male retirement age by about 1.3 months. The authors acknowledged that their methodology may tend to underestimate the effect of health benefits on retirement. Fourth, a 1994 study by Madrian reported that individuals with access to health insurance retired between 5 and 16 months earlier than those lacking coverage and that the probability of retiring before age 65 was between 7 and 15 percentage points higher for individuals with retiree health insurance. Shortcomings of the study included (1) an inconclusive attempt to control for participation in a pension plan and (2) the fact that the results were based on the recollections of individuals who had been retired as long as 15 years and had to recall their pension and health insurance status at the time of retirement. Finally, a 1993 study by Gruber and Madrian focused on the early retirement impact of state and federal COBRA coverage. They found that continuation mandates have an effect on retirement among men aged 55 to 64.
Specifically, 1 year of coverage raised the probability of being retired by 1.1 percentage points. However, they also reported that this additional year of coverage raised the probability of being insured by 6 percentage points, suggesting that many of these individuals would have retired in the absence of such coverage. Finally, contrary to basic intuition, the effects are not necessarily the strongest at older ages but decline with age. Using data from the March 1997 CPS and 1995 and 1996 information on insurance premiums, we estimated the percentage of median income that a 55- to 64-year-old would have to commit to health insurance under a number of possible scenarios: purchasing coverage through the individual market, using 1996 rates for a commonly purchased health insurance product, in a community-rated state (Vermont) and in a state with no restrictions on the premiums that could be charged (Colorado); and paying the retiree's share of employer-based coverage, using 1995 Peat Marwick estimates of the lowest, highest, and average retiree contribution. While no official affordability standard exists, research suggests that older Americans commit a much higher percentage of their income to health insurance than do younger age groups. Congressional Budget Office calculations based on data from the BLS Consumer Expenditure Survey indicate that between 1984 and 1994, spending by elderly Americans aged 65 and older on health care ranged from 10.2 percent to 12.9 percent of household income. In 1994, elderly Americans spent 11.2 percent of household income, about three times as much as younger age groups. These estimates include costs other than premiums or employer-imposed cost sharing—for example, copayments, deductibles, and expenditures for medical services not covered by insurance.
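The estimate just described reduces to a simple ratio of annual premium to median income. A minimal sketch in Python, using illustrative inputs: the median incomes are back-derived from the percentages reported in the comparison (roughly $20,800 for a single 55- to 64-year-old and $49,800 for a family) and are assumptions for demonstration, not published figures.

```python
def premium_share(annual_premium: float, annual_income: float) -> float:
    """Return the annual premium as a percentage of annual income."""
    return 100.0 * annual_premium / annual_income

# Illustrative figures: premiums are from the text; the median incomes
# are back-derived from the reported percentages, not published numbers.
colorado_single = premium_share(2_500, 20_800)   # Colorado individual product, ~12 percent
employer_family = premium_share(2_340, 49_800)   # average employer-based family coverage, ~4.7 percent
print(f"Colorado individual: {colorado_single:.1f}%; employer family: {employer_family:.1f}%")
```

The same ratio underlies each cell of the premium-versus-income comparison; only the premium, the subsidy, and the income base change across scenarios.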
Table V.1 compares the cost of health insurance purchased in the individual market and employer-imposed cost sharing for early retirees with the median income of the near elderly in 1996. As demonstrated by table V.1, the near elderly's cost for employer-subsidized coverage is generally lower than the cost of coverage purchased through the individual market. For example, on average, employer-based family coverage for retirees at $2,340 annually represents 4.7 percent of median family income. In contrast, costs in the individual market can be significantly higher—in part because they lack an employer subsidy. In Colorado, the annual premium for a commonly purchased individual insurance product in 1996 was about $2,500 for single coverage and $5,000 for a couple—representing about 12 percent and 10 percent, respectively, of median income for 55- to 64-year-olds. While less expensive than the Colorado example, premiums for health insurance through the individual market in Vermont—a community-rated state—would represent 9.9 percent of median income for single coverage and 8.4 percent of median income for a couple. For more than one-half of the near elderly, these individual market costs typically exceed average health care spending for Americans under age 65—in some cases significantly. In April 1998, the Center for Studying Health System Change reported that older adults who purchased individual coverage typically spent a considerably higher proportion of their income on premiums than other adult age groups—about 9 percent for the 60- to 64-year-old group. Since 1986, COBRA eligibility has been expanded on a number of occasions: COBRA was made available to retirees whose former employer had declared bankruptcy (P.L. 99-509). Coverage was extended from 18 to 29 months for certain disabled COBRA enrollees (P.L. 101-239).
A 1996 change clarified that the dependents of a disabled qualified beneficiary are also eligible for the additional 11 months of COBRA coverage and provided that the qualifying event of disability applies in the case of a qualified beneficiary whose disability is determined under the Social Security Act to exist during the first 60 days of COBRA coverage (P.L. 104-191). A 1990 change permitted states to use Medicaid funds to pay for COBRA premiums of certain low-wage beneficiaries (who had worked for an employer with 75 or more employees) whose income does not exceed 100 percent of the federal poverty level and whose resources are at or below the Supplemental Security Income level. The state must determine that the anticipated Medicaid savings from COBRA would exceed the COBRA premium costs (P.L. 101-508). COBRA continuation requirements were extended to the Federal Deposit Insurance Corporation (P.L. 102-242). Reservists and their dependents who would otherwise lose employer-based health benefits as a result of taking a leave of absence to serve in the armed forces were made eligible for 18 months of COBRA coverage (P.L. 103-353). The Health Insurance Portability and Accountability Act of 1996 (HIPAA) (P.L. 104-191) requires that any individual who exhausts COBRA continuation coverage is guaranteed the right to purchase insurance in the individual market without any preexisting condition exclusions or waiting periods. Idaho: Premium rates may not vary by more than 25 percent of the applicable index rate for age and gender only. The Director of Insurance may approve additional case characteristics. Iowa: Premium rates may not vary by more than 100 percent from the applicable index rate for demographic characteristics approved by the Commissioner of Insurance. The legislation does not specify these characteristics, but an insurance department official said they may include age, gender, and geographic location.
Kentucky: Premium rates may not vary by more than a 5 to 1 ratio for all case characteristics. Allowable case characteristics (and maximum allowable variation, if specified) are age (300 percent), gender (50 percent), occupation or industry (15 percent), geography, family composition, benefit plan design, cost-containment provisions, whether or not the product is offered through an alliance, and discounts (up to 10 percent) for healthy lifestyles. Louisiana: Adjusted community rating is required, with variation of +/-10 percent currently allowed for health status and unlimited variation allowed for specified demographic characteristics and other factors approved by the Department of Insurance. Maine: Adjusted community rating is required, with variation allowed of no more than +/-20 percent of the community rate for age, tobacco use, occupation, industry, or geographic area. Massachusetts: Adjusted community rating is required for carriers’ guaranteed-issue health plans with maximum allowable variation ratio of 1.5 to 1 for geographic area and 2 to 1 for age. Effective December 1, 1999, the maximum allowable variation ratio for age will be 1.5 to 1. Minnesota: Premium rates may vary from the index rate +/-25 percent for health status, claims experience, and occupation, and +/-50 percent for age. Premium rates may also vary by up to 20 percent for three geographic areas. New Hampshire: Adjusted community rating is required with a maximum variation ratio of 3 to 1 allowed for age only. New Jersey: Community rating is required. New Mexico: Until July 1, 1998, premium rates may vary for age, gender (no more than 20 percent), geographic area of the place of employment, tobacco use, and family composition (by no more than 250 percent). Thereafter, every carrier must charge the same premium for the same coverage to each New Mexico resident, regardless of demographic characteristics or health status. 
The only allowable rating factor will be age—whether the person is over or under the age of 19. New York: Pure community rating within specified geographic regions. North Dakota: Premium rates charged to individuals within a class for the same or similar coverage may not vary by a ratio of more than 5 to 1 for differences in age, industry, geography, family composition, healthy lifestyles, and benefit variations. Ohio: Premiums charged to individuals may not exceed 2.5 times the highest rate charged to any other individuals with similar case characteristics. Oregon: Each carrier must file a geographic average rate for its individual health benefit plans. Premium rates may not vary from the individual geographic average rate, except for benefit design, family composition, and age. Legislation does not limit this variation, but indicates that age adjustments must be applied uniformly. South Carolina: Premium rates charged to individuals with similar demographic characteristics may not vary by more than 30 percent. The legislation specifically states that age, gender, area, industry, tobacco use, and occupational or avocational factors may be used to set premium rates, but does not prohibit the use of additional characteristics. The only exception is durational rating, which is explicitly prohibited. South Dakota: Carriers may establish up to three classes of individual business. Within a given rating period, the index rate for any class of business may not exceed the index rate for any other class of individual business by more than 20 percent. Within a class of business, the premium rates charged to individuals with similar case characteristics for the same or similar coverage may not vary from the index rate by more than 30 percent. A carrier may not use characteristics other than age, gender, lifestyle, family composition, geographic area, health status, height, and weight without the prior approval of the Director of Insurance. 
The maximum rating differential based solely on age may not exceed a ratio of 5 to 1. Adjustments based on these characteristics may result in premium rates that vary more than the set parameters noted. Utah: A variation of +/-25 percent is allowed for health status or duration of coverage. Carriers may also vary premiums because of differences in age, gender, family composition, and geographic area by actuarially reasonable rates, as defined in National Association of Insurance Commissioners guidelines. Premiums may also be rated up 15 percent for industry. The index rates carriers use for their individual business may be lower than or equal to, but not any higher than, the index rates they use for their small-employer business. Vermont: Adjusted community rating of indemnity plans is required, with maximum allowable variation of +/-20 percent for limited demographic characteristics. HMOs operating in the state must use pure community rating and thus are not allowed to vary rates. Washington: Adjusted community rating is required, with variation allowed for geographic area, family size, age, and wellness activities. Permitted rates for any age group cannot exceed 400 percent of the lowest rate for all age groups on January 1, 1997, and 375 percent on January 1, 2000, and thereafter. The discount for wellness activities cannot exceed 20 percent. West Virginia: Premium rates charged to individuals with similar demographic characteristics may not vary by more than 30 percent. The legislation specifically states that age, gender, geographic area, industry, tobacco use, and occupational or avocational factors may be used to set premium rates, but does not prohibit the use of additional characteristics. The only exception is durational rating, which is explicitly prohibited. Jonathan Ratner, Project Director, (202) 512-7107; Walter Ochinko, Senior Health Policy Analyst, (202) 512-7157; Susan T. Anthony, Senior Evaluator; Mark Vinkenes, Senior Social Science Analyst; Paula Bonin, Senior Evaluator (Computer Specialist). The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office, P.O. Box 37050, Washington, DC 20013. Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.

Pursuant to a congressional request, GAO reviewed the ability of Americans aged 55 to 64 to obtain health benefits through the private market--either employer-based or individually purchased--focusing on the near elderly's: (1) health, employment, income, and health insurance status; (2) ability to obtain employer-based health insurance if they retire before they are eligible for Medicare; and (3) costs associated with purchasing coverage through the individual market or employer-based continuation insurance.
GAO noted that: (1) though the near elderly access health insurance differently than other segments of the under-65 population, their overall insurance picture is no worse and is better than that of some younger age groups; (2) since fewer employers are offering health coverage as a benefit to future retirees, the proportion of near elderly with access to affordable health insurance could decline; (3) the resulting increase in uninsured near elderly would be exacerbated by demographic trends, since 55- to 64-year-olds represent one of the fastest growing segments of the U.S. population; (4) the current insurance status of the near elderly is largely due to: (a) the fact that many current retirees still have access to employer-based health benefits; (b) the willingness of near-elderly Americans to devote a significant portion of their income to health insurance purchased through the individual market; and (c) the availability of public programs to disabled 55- to 64-year-olds; (5) the individual market and Medicare and Medicaid for the disabled often mitigate declining access to employer-based coverage for near-elderly Americans and may prevent a larger portion of this age group from becoming uninsured; (6) the steady decline in the proportion of large employers who offer health benefits to early retirees, however, clouds the outlook for future retirees; (7) in the absence of countervailing trends, it is even less likely that future 55- to 64-year-olds will be offered health insurance as a retirement benefit, and those who are will bear an increased share of the cost; (8) although trends in employers' required retiree cost sharing are more difficult to decipher than the decisions of firms not to offer retiree health benefits, the effects may be just as troublesome for future retirees; (9) moreover, access and affordability problems may prevent future early retirees who lose employer-based health benefits from obtaining comprehensive private insurance; (10)
furthermore, significant variation exists among the states that limit premiums: a few require insurers to community-rate the coverage they sell--that is, all those covered pay the same premium--while other states allow insurers to vary premiums up to 300 percent; and (11) continuation coverage under the Consolidated Omnibus Budget Reconciliation Act is available only to retirees whose employers offer health benefits to active workers, and coverage is only temporary, ranging from 18 to 36 months.
SSA provides assistance to people who qualify as disabled under two programs: (1) Disability Insurance (DI), which provides benefits to people who have worked and paid Social Security payroll taxes, and (2) Supplemental Security Income (SSI), which is an assistance program for people with limited income and resources who are blind, aged, or disabled. Currently, the disability determination process starts when a person first applies for DI or SSI disability benefits. To apply for benefits, the applicant calls SSA's national toll-free telephone number and is referred to a local field office, or visits or calls one of the 1,300 local field offices directly. Claims representatives in field offices assist with the completion of claims, obtain detailed medical and vocational history, and screen nonmedical eligibility factors. Field office staff forward the claim to a state disability determination service (DDS). At the DDS, medical evidence is developed by a disability examiner and a medical consultant, and a final determination is made as to the existence of a medically determinable disability. The DDSs then send allowed claims to SSA field offices or SSA processing centers for payment and storage. Files for denied cases are retained in field offices, pending possible appeal. According to SSA, in part because of the numerous handoffs among staff involved in processing a disability claim, a claimant can wait, on average, between 78 and 94 days from the time of filing with SSA until receiving an initial claim decision notice—when in fact only 13 hours is actually spent working on the claim. In 1994, SSA released its redesign plan for receiving and deciding disability claims. The plan aims to improve the current process, which is labor intensive and slow, so as to increase claimant and staff satisfaction. To develop the plan, SSA created a Disability Process Reengineering Team, charged with producing a new process that is customer-focused, operationally feasible, and an improvement over the current process.
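The gap SSA describes between elapsed time and actual work time can be made concrete with rough arithmetic. A minimal sketch, assuming an 8-hour workday and a 5-day week to convert elapsed calendar days into available working hours; these conversion factors are our assumptions, not SSA figures.

```python
TOUCH_HOURS = 13  # time actually spent working an initial claim, per SSA

def worked_fraction(elapsed_days: int) -> float:
    """Percent of available working hours actually spent on the claim,
    assuming a 5-day week and 8-hour workday (illustrative assumptions)."""
    working_hours = elapsed_days * (5 / 7) * 8
    return 100 * TOUCH_HOURS / working_hours

for days in (78, 94):
    print(f"{days} elapsed days -> claim being worked about {worked_fraction(days):.1f}% of the time")
```

Under these assumptions, the claim is actively being worked roughly 2 to 3 percent of the available time, which illustrates why the redesign targets handoffs rather than the work itself.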
A Disability Process Redesign Team (DPRT) was later formed to implement the Reengineering Team’s plan. In developing its redesign plan, Reengineering Team members solicited views from customer focus groups, frontline staff, managers and executives, and parties outside of SSA. The Reengineering Team found that claimants were frustrated with the fragmented nature of the current process and wanted more personalized service. In addition, some SSA staff were frustrated because they were not trained to answer claimants’ questions about medical disability decisions or about the status of cases while in DDS offices. To address these concerns, SSA created the DCM position as the cornerstone of its redesign plan. Under SSA’s redesign plan, the DCM—a single decisionmaker located at either an SSA or a DDS office—would be solely responsible for processing the initial disability claim and making the decision, thereby assuming functions currently performed by at least three federal and state workers. The DCM would conduct personal interviews, which could be face-to-face, by telephone, or by video conference; develop evidentiary records; and determine medical and nonmedical eligibility. Specifically, the DCM would gather and store claim information; develop both medical and nonmedical evidence; share necessary facts in a claim with medical consultants and specialists in nonmedical or technical issues; analyze evidence; prepare well-reasoned decisions on both medical and nonmedical issues; and produce clear, understandable notices to convey information to claimants. In addition, the DCM would authorize payment of the claim. Although DCMs would still have access to medical and technical support personnel, they alone would make the final decision on both medical and nonmedical aspects of a disability claim. A medical consultant’s signature would no longer be required on decisions. The DCM would also serve as a single, personal point of contact for claimants. 
When filing claims, claimants could first speak in person with a DCM to obtain information about the process. In addition, a claimant would be entitled to contact the DCM throughout the process and meet personally with the DCM to provide additional evidence if the DCM expected to deny a claim. See appendix II for a comparison of the tasks currently assigned to claims representatives and disability examiners with those expected of the DCM. Recognizing the complexity of the DCM position responsibilities, the redesign plan calls for implementing several new support features that SSA considers critical to the DCM position: (1) SSA plans to develop a simplified decision methodology that would provide a less complex, more structured approach for DCMs to use when deciding claims. (2) New hardware and software would automate most aspects of the process and allow SSA to move from a process that depends on paper folders to one that depends on electronic records. These records would be easy to transmit between headquarters, field offices, and state DDSs. (3) In order to address the perception that different policy standards are applied at different levels of disability decision-making, SSA intends to develop a process that generates similar decisions for similar cases at all stages of the disability process through consistent application of laws, regulations, and rulings. SSA refers to this feature as process unification. Without these new features, SSA managers do not expect that DCMs would be able to handle the broad range of activities that the position requires. However, as of July 1996, none of these support features were available. During the next few years, SSA expects to test the DCM position and several DCM-related initiatives. Some of the related initiatives, which SSA believes will immediately improve customer service, are being tested because SSA initially thought that the DCM position could not be immediately implemented. 
Other tests, which had been planned prior to redesign, are designed to provide information on various functions now incorporated into the DCM position. These tests are described below. Appendix III provides information on their status. SSA’s initial 1994 redesign plan called for testing and implementing alternative ways of serving claimants, based on teams of claims representatives and disability examiners. Currently, a disability claim is handled primarily by two staff members (the claims representative and the disability examiner), each working independently of the other, with minimal coordination. As part of the redesign plan, SSA expects to team its claims representatives and DDS disability examiners so they can process claims in a coordinated manner. SSA also expects that this team environment would allow claims representatives and disability examiners to share skills and enhance communication, thus better preparing them for the transition to the DCM position. Following this initial teaming of claims representatives and disability examiners, SSA plans to build on teaming by implementing the Early Decision List and sequential interviewing initiatives. SSA envisions that the Early Decision List and sequential interviewing would provide claims representatives and disability examiners with opportunities to (1) expedite the processing of disability claims by streamlining the interview process and (2) expand the claims representatives’ skills and experience in the medical area and that of the disability examiners in the nonmedical area. The Early Decision List identifies severe disabilities that can be adjudicated by claims representatives with minimal training and documentation. The Early Decision List will allow a claims representative to approve certain types of claims. After approving a claim, the claims representative would forward the case to a medical consultant for final approval. 
Currently, only the disability examiner and the medical consultant approve these claims. SSA expects that initially, about 100,000 claims per year might be approved under the Early Decision List. Eventually, the number of Early Decision List cases will expand as claims representatives’ skills and knowledge base increase. This expansion will result from (1) phasing in additional categories of disabilities and (2) the option for claims representatives to issue denials. The sequential interviewing initiative is designed to provide disability examiners with preliminary interviewing experience for certain categories of disability claims. Additional categories will be phased in over time as the examiners’ experience increases. Under sequential interviewing, after the claims representative completes the nonmedical portion of the claim, he or she will turn the claimant over to the disability examiner, who will complete the medical portion of the application. The disability examiner will either talk with the claimant by telephone before he or she leaves the field office or talk by telephone at a later date. According to SSA’s plan, the Early Decision List and sequential interviewing are modeled on existing teaming initiatives in field offices and state DDSs. For example, some offices have already experimented with sequential interviewing; in other offices, SSA claims representatives already assist DDSs by making medical determinations for some categories of severe disabilities. Preliminary results from these local initiatives indicate that they can improve customer service, work flow, and job satisfaction. For example, one field office that used sequential interviewing processed initial claims in 46 days, well below the current average of between 78 and 94 days. Customer surveys indicate that claimants served in these efforts were pleased with the sequential interviewing. 
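The Early Decision List routing described above can be sketched as a simple dispatch rule. This is an illustration of the handoff sequence only, not SSA's procedure: the disability categories named below are hypothetical placeholders, since the text does not enumerate the actual list.

```python
# Hypothetical placeholder categories; SSA's actual Early Decision List
# conditions are not enumerated in the text.
EARLY_DECISION_LIST = {"terminal_illness", "total_blindness"}

def route_claim(category: str, rep_approves: bool) -> list:
    """Return the sequence of staff who handle an initial claim."""
    if category in EARLY_DECISION_LIST and rep_approves:
        # Claims representative approves; a medical consultant
        # provides final approval.
        return ["claims representative", "medical consultant"]
    # Otherwise the claim follows the standard path through a DDS
    # disability examiner working with a medical consultant.
    return ["claims representative", "disability examiner", "medical consultant"]
```

The sketch also shows why the initiative starts with approvals only: a claim the representative does not approve simply falls back to the standard examiner path, with denials to be phased in later.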
In addition, claims representatives and disability examiners participating in these initiatives said they were satisfied with the team tests. Currently, SSA expects to conduct formal testing and evaluation of the Early Decision List, but it will rely on states to test sequential interviewing. SSA also expects to make available its Office of Workforce Analysis and Office of Program and Integrity Reviews to provide test assistance to states. According to the DPRT director, SSA made this decision because of (1) resource constraints and (2) the view that sequential interviewing is only a temporary measure leading to the DCM position. However, the director acknowledged that formal testing of sequential interviewing would be necessary to allow for a comparison of this initiative with the proposed DCM position. In addition to sequential interviewing and Early Decision List initiatives, SSA expects to test modifications to the disability determination process at model sites in federal offices and state DDSs. One model site test—the single medical decisionmaker—exemplifies the concept of the disability examiner making eligibility decisions alone, except in cases for which medical consultant involvement is required by statute. SSA considers this test useful because it analyzes the aspects of the redesign plan that have DCMs making eligibility decisions without necessarily soliciting medical consultants’ input for all cases. In this test, a disability examiner will be authorized to make medical eligibility decisions without obtaining a medical consultant’s signature on the SSA form certifying the determination. In other model site tests, scheduled for completion in late 1998, SSA will expand the single medical decisionmaker test to evaluate other aspects of the disability process.
In the expanded test, SSA will consider the effect of allowing claimants a personal predecision interview with the decisionmaker, at which they could provide additional evidence if a denial is imminent. This is an opportunity not available under the existing system. As of June 1996, SSA was testing the single medical decisionmaker at DDSs in eight states and was developing the expanded test for implementation in seven states and two SSA offices. In its original redesign plan, SSA intended to test the DCM position only after testing was under way on the Early Decision List, sequential interviewing, and initiatives being explored at the model sites. SSA also intended that critical support features—including a structured approach for deciding claims, new hardware and software, and a process that ensures similar decisions for similar cases at all stages of the disability process—would be in place before the DCM could be implemented. However, in October 1995, SSA decided to initiate DCM testing in 1996, even though SSA had not yet (1) implemented these other initiatives or (2) developed any of the support features that had been included in the redesign plan as critical to the position. According to the DPRT director, SSA management accelerated DCM testing to address several factors that might impede the overall redesign plan. For example, the DPRT director became concerned that delaying DCM testing until critical support features were in place would slow the momentum for the redesign plan, particularly because delays were already occurring in SSA’s original schedule to implement these features. SSA also wanted to gain endorsement from its federal employee union, which originally was concerned about the DCM position. The DPRT director further cited state DDS directors’ concerns—about providing disability examiners with little opportunity to gain nonmedical case development experience—as a factor influencing his decision to begin testing the DCM position. 
According to the DPRT director, the tests will provide states with additional time to become accustomed to the DCM concept and with the opportunity to address concerns about the position. However, state DDS directors’ representatives said that DPRT misunderstood their concerns. DDS directors oppose SSA’s plan to accelerate implementation of the DCM position without the necessary critical support features and are concerned that SSA is beginning to shift to federal employees a workload that is currently states’ responsibility. According to the president of the American Federation of Government Employees, Local 1923, the union would have opposed the DCM position if SSA had attempted to implement it as a grade 11 position. Under a memorandum of understanding between the union and SSA, people who are assigned to DCM positions will receive temporary promotions to grade 12, one grade higher than the journeyman level for the claims representative position. According to the Deputy Commissioner for Human Resources, if SSA decides to make the DCM position permanent, an evaluation will be required to determine the appropriate salary level for the job. To develop parameters for conducting and evaluating the DCM test, SSA assembled a work group consisting of representatives from SSA and DDS management, claims representatives and disability examiners, and federal and state union members. Throughout redesign, SSA has relied on such work groups to formulate plans for the individual redesign components. In July 1996, the work group released its final proposal for testing the DCM position. Agreement to the work group’s proposal must be obtained from the states, the unions, and SSA management. The work group’s report recommends that SSA (1) conduct the DCM test in three phases over a 3-year period and (2) decide at the end of each phase how to proceed with the balance of the test. 
During the first phase, scheduled to last for 18 months, SSA would test 150 federal and 150 state DCM positions. At the end of this phase, SSA would evaluate the results to determine whether it should continue, modify, or terminate the DCM test. For the second phase, if SSA decides to continue the test, it would then introduce an additional 200 federal and 200 state DCMs. After this phase, SSA would again evaluate the results to determine whether the agency should continue, modify, or terminate the test. If SSA decides to proceed with the third phase, it would then establish an additional 400 federal and 400 state DCMs. At the end of this third and final phase, SSA would conduct a comprehensive review of the entire DCM test in order to decide whether it should implement the DCM position permanently. However, the testing proposed by the DCM work group may leave untested an important feature of the position. During the initial test of the position, the claimant may not be given an opportunity to meet face-to-face with the DCM in a predecision interview. In such an interview, the claimant could provide additional evidence if the DCM expects to deny the claim. The predecision interview is a key feature of the DCM position, one that (1) could easily be tested without waiting for the critical support features and (2) involves a task that many claims representatives and disability examiners would prefer not to perform. Further, even though DDS representatives were work group participants, they did not support SSA’s proposal to test 1,500 DCM positions. At the conclusion of the DCM work group’s activities, the National Council of Disability Determination Directors presented a position paper to the DPRT director, stating that it would agree only to a test involving 60 state and 60 federal DCMs. Concerns have been raised about the DCM position since the DPRT first proposed it in 1994. 
These concerns include the complexity of the responsibilities, compromises to safety and internal controls, salary differential between federal and state employees, and structure of field operations. SSA and state DDS managers and staff, as well as employee groups and union representatives, are concerned about one person’s ability to master the complex responsibilities expected of a DCM. The DCM will combine major segments of two positions—claims representative and disability examiner—and will also include responsibilities now assigned to medical consultants. As SSA’s key staff providing public service, claims representatives carry out a wide range of complex tasks in the disability program. When processing an initial disability claim, a claims representative, through interviews, obtains and clarifies information from a disability claimant. The claims representative assists claimants with securing necessary additional evidence. Ultimately, the representative (1) determines whether claimants meet nonmedical requirements for benefits, using a series of administrative publications, including SSA’s Program Operations Manual System that interprets federal laws and regulations, (2) calculates benefit amounts, and (3) authorizes payments for allowed claims. Because of voluminous, detailed, and complicated program guidelines, some claims representatives specialize in processing claims for a specific SSA program, such as SSI. State DDS disability examiners also perform a wide range of complex tasks to determine whether a claimant’s disability meets SSA’s medical criteria for benefits eligibility. The disability examiner reviews claims forwarded by SSA field offices, obtaining additional medical records and vocational documentation on claimants as necessary. 
In making a medical determination, a disability examiner must establish the date of onset, duration, and level of severity of the disability; the prognosis for improvement; and the effect of the disability on a claimant’s ability to engage in gainful employment. As with guidelines for claims representatives, the complicated disability program guidelines lead some disability examiners to specialize in processing either child or adult claims. The complexity of disability examiners’ and claims representatives’ responsibilities is evidenced by the training required for the positions. Newly hired SSA claims representatives typically take 13 weeks of classroom training, followed by on-the-job training and mentoring. They reach journeyman level after a minimum of 2 years on the job. Similarly, the state DDS examiners go through a formal 2-year training program that includes classroom training and close individual supervision and guidance from unit supervisors; only then are examiners able to make medical eligibility determinations independently. According to some SSA and DDS managers and employees, the DCM position may stretch staff to the point that they cannot competently manage all the required tasks. For example, in one state that we visited, a local demonstration project has claims representatives approving disability decisions for some categories of claims—those for which the disability is easily determined. According to quality assurance staff reviewing these decisions, claims representatives are beginning to make errors on nonmedical portions of claims, possibly because these representatives are branching out into areas beyond their knowledge and experience. Although the DPRT director agreed that the responsibilities of the DCM position are complex, he stated that SSA designed it in response to claimants’ concerns that the existing process did not meet their needs. 
The new position is intended to (1) simplify the application process for claimants by allowing them personal contact with decisionmakers and (2) provide for more rapid decisions on claims. In addition, he stated that the DCM test will permit SSA to assess the feasibility of the DCM position. According to some federal and state staff and managers, the DCM position has the potential to compromise internal controls and the safety of staff, issues that are currently not a problem because responsibilities are split between state and federal staff. These staff and managers are concerned about the safety of DCMs when they conduct face-to-face interviews with claimants. They are also concerned that the DCM position could compromise existing internal controls on the disability program. SSA’s redesign plan provides an opportunity for claimants to speak face-to-face with the DCMs who make decisions on their cases. Currently, claimants rarely meet face-to-face with disability examiners, who are primarily responsible for making the disability decision. As a matter of practice, claimants have personal interviews—by telephone or face-to-face—with field office claims representatives, who are frequently not trained to answer claimants’ questions about medical disability decisions. Claims representatives and disability examiners said they are worried that, because of past incidents of claimant violence and because some claimants have a history of mental illness, claimants could become violent with DCMs who notify them, face-to-face, that their claims will be denied unless they can provide additional supporting information. In addition, state staff said that some disability examiners chose their profession partly because it did not involve face-to-face interviews with claimants. Consequently, claims representatives and disability examiners may be reluctant to become DCMs because of such safety and job preference concerns. 
SSA’s plan to provide claimants an opportunity to meet face-to-face with decisionmakers differs from the approach used by many private companies that provide disability and workers’ compensation insurance. In these organizations, face-to-face interviews are generally used only under specific conditions, such as to investigate potential fraud or to help facilitate rehabilitation. According to officials from various private companies, direct personal contact with claimants generally is not economically viable because such meetings take a considerable amount of time. Further, these officials said that face-to-face meetings provide little additional information beyond what can be obtained by phone and mail and that such meetings often create stress for staff who deny claimants’ benefits. Further, under the existing system, different groups of federal and state staff—including claims representatives, disability examiners, and claims authorizers—are responsible for making eligibility decisions, medical determinations, and claim payment authorizations. This division of responsibilities helps meet standards for internal controls in the federal government. These standards require that key duties and responsibilities in authorizing, processing, recording, and reviewing transactions be separated among staff. Such standards help to reduce the risk of error, waste, or wrongful acts because each staff member carries out his or her tasks for specific transactions independently of the other staff members involved in processing the same transaction. Under the SSA redesign plan, however, the DCM—a single decisionmaker—would be responsible for making medical and nonmedical eligibility decisions and for authorizing benefit payments for each disability claim. By assigning all these responsibilities to one decisionmaker, SSA is increasing the potential for staff fraud, as other staff will not be processing the different parts of the claim. 
According to SSA, the DPRT has not yet developed a way to address this concern. However, according to the deputy associate commissioner for the Office of Financial Policy and Operations, SSA will address these issues as the redesign plan is implemented. State DDS representatives are concerned about SSA’s agreement with labor union officials to compensate federal DCMs, during the test, at a higher salary level than claims representatives. Their concern is that the agreement will exacerbate the salary differential between state and federal staff. According to Wisconsin DDS calculations, federal claims representatives now earn about $7,863 more on average in annual salary and benefits ($49,607) than state disability examiners ($41,744). However, disability examiners and claims representatives currently have different job responsibilities, which partially explains the salary differential. If SSA promotes grade 11 claims representatives to grade 12 DCMs, the differential between federal and state DCMs will ultimately widen to about $17,714. Federal DCMs will earn about $59,458 in salary and benefits, but state DCMs are not expected to receive a similar position upgrade. This differential would be more problematic than the current one because federal and state DCMs would be doing identical jobs. According to DDS directors, the salary differential between federal and state DCMs could cause serious morale problems among staff. According to the DPRT director, the salary differential between federal and state DCMs will continue to exist. However, the director said that states should use the DCM test as an opportunity to take position descriptions to their civil service boards to see if the positions can be upgraded. The director plans to work with state DDSs to facilitate this upgrade. 
However, according to the president of the National Council of Disability Determination Directors, many states will be unable to upgrade DDS employees because disability examiner positions are frequently classified with other unrelated positions and cannot be upgraded without affecting states’ overall pay structures. The DCM position may require SSA and the state DDSs to restructure their field operations. Currently, SSA has about 1,300 field offices at which claimants can file their initial claims. The 54 DDSs have different types of field structures: 38 are centralized, with staff located in one office; the remaining 16 are decentralized, with staff in more than one office. However, in a given state, even decentralized DDSs have fewer field offices than SSA has. Since both state and federal offices will be handling claimants’ initial claims after redesign, SSA and DDSs may need to consider changing their current field operations to avoid overlapping areas of service within the same metropolitan area. States with DDS staff in one area, however, would need to relocate some of those staff or open new offices that are convenient to claimants throughout their states. Finally, because medical consultants are generally located only in DDSs, SSA will need to consider how to provide federal and state DCMs with access to medical consultants. Although the DCM work group recognized these concerns, it did not propose ways to deal with them in the upcoming accelerated DCM tests. According to the DPRT director, SSA has not yet addressed and resolved these concerns. SSA expects to recruit the approximately 11,000 DCMs it estimates will be needed from its current staff of federal claims representatives and state disability examiners. However, some of these staff may be unwilling to assume DCM responsibilities or may lack the necessary skills. In addition, SSA has not yet developed plans for providing technical and clerical support staff for the DCM position. 
SSA management estimates that it will need about 11,000 DCMs to process disability claims. SSA expects to recruit DCMs from its current staff of about 16,000 claims representatives and about 6,000 disability examiners. Although some claims representatives may process either retirement and survivor or disability claims, disability examiners work only on disability claims. According to DPRT team members, federal claims representatives who lack the interest or skills necessary to become DCMs will be able to continue processing retirement and survivor claims. In contrast, it is unclear what employment options will be available for state disability examiners who do not want to become DCMs, since DCMs will make all disability decisions. Although SSA plans to recruit DCMs from the current ranks of claims representatives and disability examiners, SSA management will face various challenges in doing so. Many of the SSA and DDS field office managers and staff we interviewed were skeptical about whether enough claims representatives and disability examiners would have the necessary skills to assume the additional responsibilities expected of DCMs. Claims representatives and disability examiners will need extensive training to learn each other’s job requirements. Further, disability examiners in California, Florida, North Carolina, and Wisconsin said they would prefer not to have direct contact with claimants because of the pressure of face-to-face interviews. Currently, disability examiners generally make disability decisions based on a review of documents, without face-to-face contact with the claimant. Some disability examiners also indicated that they were unwilling to become DCMs because they were not interested in performing the nonmedical tasks involved in processing a claim. According to the DPRT director, concerns about staff availability and the stress associated with the DCM position are valid. 
However, he stated, the potential for stress is not a reason for SSA to abandon the DCM position. In his opinion, SSA cannot focus solely on its staff and ignore its customers’ demands for improved service; further, the DCM test would consider the effect of stress and ways to alleviate it. However, during the first phase of the upcoming test, as proposed by the DCM work group, SSA would not test the face-to-face predecision interview, one of the major points of potential stress for staff filling the new position. SSA recognizes that DCMs will need the assistance of technical and clerical support staff to perform their duties. Although DCMs will be responsible for handling most aspects of disability claims, SSA’s redesign plan calls for DCMs to “work in a team environment with internal medical and nonmedical experts...as well as technical and other clerical personnel....” For example, DCMs may need clerical help to assist in performing labor-intensive tasks associated with the processing of disability claims, such as processing mail and screening telephone calls. DCMs may also need access to medical and technical support personnel. Although medical consultants’ involvement is no longer required on all cases, DCMs may need to obtain their opinions for certain cases. Similarly, DCMs may also need to call on technical support staff for assistance with claimant contacts, status reports, development of nondisability issues, and payment authorization. In November 1995, an initial report from the DPRT work group on the DCM position recommended that SSA create a new DCM assistant position to provide various types of support to DCMs. The work group recommended that SSA create one DCM assistant position for every two DCMs. Although SSA management did not agree to create this new position, management did agree to use existing personnel to staff DCM model test sites with appropriate technical and clerical support. 
However, this may be difficult for SSA because many of its field offices presently have few or no clerical staff. Even though the critical support features required for the DCM are unavailable, SSA’s decision to test the DCM position provides an opportunity to gather information about the position’s feasibility, efficiency, and effectiveness. Thorough data gathering and analysis will provide SSA with some of the key information it needs to determine whether the DCM position is the best way to serve the claimant population and protect the public trust. The DCM work group’s proposal—calling for evaluating the activity of the first group of DCMs 18 months into the test and using the evaluation results to make a decision on whether to proceed with additional testing, modify the DCM position, or cancel the position entirely—is sound. However, there are some limitations on what SSA can actually test relative to the DCM position at this time. Because the critical support features are not ready for testing, the test will not provide a complete picture of the DCM position’s feasibility, nor will it allow SSA to assess the relative costs and benefits of implementing the position. SSA will also not be able to assess the effects that improvements, such as technological enhancements and a simplified decision methodology, will bring to the overall disability claims process. The DCM work group’s consideration of delaying the predecision interview may also limit the value of the test. As SSA attempts to make a sound decision about further DCM testing or implementation of the DCM position, SSA would benefit from systematically assessing the results from all its DCM-related initiatives—the DCM tests, the model site tests, the Early Decision List, and sequential interviewing—and comparing their relative effects on SSA’s workforce, work flow, operating costs, and service to claimants. 
SSA may find that the results of some of these initiatives (1) increase decision-making efficiency and satisfy claimants more effectively than the DCM position or (2) may suggest better ways to satisfy claimant needs and reduce processing time. To facilitate the evaluation of all these initiatives, SSA needs to ensure that it has comparable test results for each of them. We recommend that the Commissioner of the Social Security Administration assess current efforts to test the DCM position, so as to ensure that SSA is provided with the best possible information for making future decisions about the position. Specifically, the Commissioner should include, in the test of the DCM position, a personal predecision interview that provides an opportunity for claimants to meet with the DCM in person, by video conference, or by telephone, and continue testing of sequential interviewing, Early Decision List, and model site initiatives throughout the DCM test. Testing and subsequent evaluations should document the extent to which the DCM position and the other initiatives increase service to the public and decrease processing time. At the end of the initial 18-month testing period and, if appropriate, at subsequent decision points, SSA should compare the evaluation results of the DCM and other initiatives with respect to their relative benefits and costs. SSA should consider these results before deciding to increase the number of DCM test positions and before approving the DCM position permanently. In its comments on this report, SSA generally agreed that we have identified the issues and concerns raised by the establishment of the new disability claims manager position. SSA also stated that it will make or has already made the changes we recommended to ensure the availability of the information necessary to assess the DCM position. 
Finally, SSA also stated that it plans to use results from other DCM-related initiatives to document the extent to which service to the public is improved and processing time is reduced. We believe SSA’s planned actions would be more effective if SSA included a predecision interview in its DCM test. We also believe that SSA should ensure that states’ evaluation of sequential interviewing initiatives can be compared with the results of the DCM and other related initiatives. SSA made a number of technical comments, which we incorporated as appropriate. The full text of SSA’s comments and our responses are included in appendix IV. We are providing copies of this report to the Director of the Office of Management and Budget and the SSA Commissioner. We will also make copies available to others upon request. Major contributors to this report are listed in appendix V. If you have any questions concerning this report or need additional information, please call me at (202) 512-7215. To determine how SSA planned to test and implement the DCM position, we interviewed and reviewed documents from key members of the Redesign Team at SSA’s headquarters in Baltimore, Maryland. We also conducted site visits in California, Florida, Georgia, North Carolina, and Wisconsin, where we (1) interviewed staff and managers of SSA field offices and state DDSs and (2) analyzed documents they provided. We judgmentally selected these locations because local SSA field offices and DDSs in these states have already experimented with teaming initiatives designed to facilitate closer interaction between SSA claims representatives and DDS disability examiners. Although these initiatives were not part of SSA’s redesign plan, we believe the results provide some insight into how SSA could implement the DCM position. 
To identify the concerns associated with the DCM position, we spoke with the following during our site visits: DPRT members, SSA regional and field office managers and staff, employee union representatives, and DDS managers and staff. We also reviewed documents they provided us, which summarized their views on the DCM position. To determine whether SSA had ensured that it had an adequate staff to implement the DCM position, we interviewed and analyzed information from DPRT members, SSA field office managers and staff, and state DDS officials and staff. To identify how organizations with employee classifications similar to the DCM process claims, we also interviewed representatives from four private insurers, two affiliated trade associations, and a public utility. The following are GAO’s comments on the Social Security Administration’s letter dated August 16, 1996. 1. We modified our recommendation to reflect the different ways that a DCM could conduct a predecision interview with a claimant: face-to-face, by video conferencing, or by telephone contact. 2. We continue to believe that SSA should incorporate the predecision interview into the DCM test, beginning with the initial 18-month phase, to make the test as comprehensive as possible. Incorporating the predecision interview into the DCM test would provide SSA with valuable information for making future decisions about the feasibility of the DCM position and whether testing should continue beyond the first phase. In particular, testing the predecision interview could provide information about the effect of face-to-face interviews on office security, a main area of concern raised about the DCM position. SSA should not wait for the predecision interview to be tested as part of the expanded model site test. Results from this test are not expected until late in 1998 and may not be available in time for SSA to consider when it makes its decision about further testing or implementation of the DCM position. 3. 
We support SSA’s decision to provide an opportunity for the claimant to readily and easily contact DCMs participating in the test. Since SSA had already decided that claimants would have this access to the DCM, we modified one of the recommendations in the report. 4. We continue to be concerned that SSA may not have all the test results it needs to decide whether the DCM position should be fully adopted. SSA needs to ensure that states’ evaluation of sequential interviewing initiatives can be compared with the results from the initiatives that SSA is conducting and analyzing itself. We believe SSA’s test of the DCM position, combined with results of other related tests, should provide the basis for its decision on whether or not to implement the position. In addition to those named above, David G. Artadi coauthored the report and contributed significantly to all data-gathering and analysis efforts. 
| Pursuant to a congressional request, GAO assessed the Social Security Administration's (SSA) establishment of the disability claim manager (DCM) position, focusing on: (1) SSA efforts to test and implement the position; (2) major concerns about the position; and (3) SSA efforts to staff the position. GAO found that: (1) as envisioned by SSA, DCMs would be solely responsible for processing and approving initial disability claims, assume functions currently performed by at least three federal and state workers, and serve as a single, personal point of contact for claimants; (2) SSA has several initiatives under way to team claims representatives and disability examiners so that they can coordinate claims processing functions and prepare for transition to the DCM position; (3) although it has not yet implemented other initiatives and support features that are critical to the DCM position, SSA has decided to proceed with plans to test the DCM position; (4) a three-phase testing plan proposed by an SSA work group of management representatives, claims representatives, disability examiners, and federal and state union members may leave some important DCM features untested and does not have the support of all work group members; (5) concerns raised about the DCM position include the complexity of DCM responsibilities, compromises to safety and controls, salary differential between federal and state workers, and impact on field operations; and (6) SSA expects to recruit DCMs from its current staff of federal claims examiners and state disability examiners, but some staff may be unwilling or lack the necessary skills, and SSA has not developed a plan for providing technical and clerical support for DCMs. |
We reviewed RTC’s 1992 resolutions process to determine if it provided for compliance with the FDICIA least-cost requirements. We reported that three of RTC’s corporate policies raised compliance issues. These policies did not (1) ensure that uninsured depositors would absorb their shares of thrift losses if necessary to achieve least costly resolutions; (2) require RTC to evaluate other available resolution methods prior to selling the assets of thrifts in conservatorship; or (3) require RTC to estimate the cost of liquidating thrifts in conservatorship as of the earliest of three dates specified by the act, which is usually the date when RTC passes the failed thrift through a receivership and is appointed conservator. We also found numerous documentation shortcomings from our review of a sample of 1992 resolutions. For instance, RTC did not always fully document the bases of the evaluations of the resolution alternatives considered, including the consideration given to all nonconforming bids received from potential acquirers, as its procedures required. Further, RTC generally did not document the rationale for the marketing strategy it selected. We recommended that RTC evaluate the resolution methods that are potentially available before selling the assets of a failed thrift and make liquidation cost estimates at the earliest of the three dates specified by FDICIA. We also recommended that RTC document the consideration given all nonconforming bids and the rationale for the agency’s preferred marketing strategy for resolving a failed thrift. We made no recommendation concerning uninsured depositors, because RTC changed its policy in September 1993 to better ensure that uninsured depositors would absorb their shares of thrift losses if necessary to achieve the least costly resolution. 
RTC agreed to initiate actions to improve its documentation, but it maintained that its policies on asset sales during conservatorship and on the timing of its liquidation cost estimates were consistent with FDICIA. We said that unless RTC changed its policies in these areas, neither we nor RTC could assure Congress that RTC was fully complying with FDICIA’s least-cost requirements. Although over 1,300 savings associations failed from 1980 through 1992, failures since the beginning of 1993 have declined dramatically. Also, the December 17, 1993, passage of the RTC Completion Act (Public Law 103-204, 107 Stat. 2369)—which provided RTC the funds needed to resolve failed thrifts—has enabled RTC to resolve all but one of its backlog of thrifts in conservatorship as of December 31, 1994. When a thrift fails, the Office of Thrift Supervision (OTS) or the thrift’s state chartering authority usually appoints RTC as conservator or receiver. As conservator, RTC operates a failed thrift pending its final resolution, and as receiver, it administers the closing of an insolvent thrift and liquidates all assets not disposed of in conservatorship or at resolution. However, some failing thrifts are resolved prior to being placed into conservatorship through the accelerated resolution program (ARP), which OTS operates jointly with RTC. This program enables OTS to place a thrift it considers to be in serious financial difficulty into ARP for the purpose of selling the troubled thrift’s assets, deposits, and other liabilities to a healthy institution before the thrift fails. During 1993, eight thrifts failed and were placed in RTC conservatorships, and one failing thrift was resolved through ARP. In 1994, no thrifts failed and two failing thrifts were resolved through ARP. 
Further, due primarily to funding provided by the RTC Completion Act, 80 of the 81 thrifts in RTC conservatorships as of December 31, 1992, as well as the 8 thrifts placed in conservatorships in 1993, were all resolved by the end of 1994. RTC officials told us they intend to resolve any further troubled thrifts via ARP or ARP-like transactions by selling the thrift’s assets, deposits, and other liabilities to a healthy institution prior to the thrift’s failure. They also said they expect few—if any—additional thrift failures through June 30, 1995, at which time RTC’s responsibility for resolving failed and failing thrifts ends. The Federal Deposit Insurance Corporation assumes responsibility for resolving troubled thrifts as of July 1, 1995. The primary objective of this, our second annual review, was to determine the extent to which RTC’s resolution process enabled it to comply with FDICIA requirements to select the least costly alternatives for resolving failed institutions. To address the objective, we judgmentally selected and reviewed three thrifts that were resolved between January 1, 1993, and June 30, 1994. In each of these resolutions, RTC applied at least one of the three new resolution policies it established since January 1, 1993. The new policies involve (1) pro rata sharing of resolution losses by uninsured depositors when necessary to achieve the least costly resolution, (2) discontinuing the sale of performing loans during conservatorship, and (3) extending preference to minority bidders in making resolution decisions. We reviewed the three resolutions to determine whether RTC’s resolution process, as modified by these policy changes, provided for compliance with FDICIA’s least-cost requirements. We selected one of the three resolutions we reviewed because it was the only failed thrift that was affected by the September 1993 uninsured depositor policy change. It was also 1 of 14 failed thrifts in which RTC discontinued the sale of performing loans. 
We chose the other two resolutions because they involved preferences extended to minority bidders. In addition, one of the two failed thrifts was a major resolution with assets in excess of $1 billion. To address our objective, we analyzed the three resolutions, reviewed pertinent policies and procedures, and interviewed RTC officials and staff. We modified and used the data collection instrument we developed in our first review to document and evaluate the information from our three resolution cases, paying particular attention to the effect the three new policies had on the least-cost determinations. As in our first review, we collected data from the inception of resolution activity through the final resolution decision. We then compared the results of the three case studies with the results of our first-year case studies to identify any improvements or additional shortcomings in RTC’s resolution process. During our assessment of the three resolutions, we reviewed the accuracy of the financial calculations RTC used to estimate the cost of available resolution alternatives. However, due to the subjectivity inherent in the valuation of assets and in the estimation of future asset recoveries, we assessed the adequacy of RTC’s resolution process to select the least costly alternative. We did not determine whether, in fact, the least costly resolution alternative was selected, because the ultimate cost of a resolution cannot be identified until all remaining assets are sold and liabilities are paid by RTC as receiver, which generally takes several years. Further, the results of our review of the three resolutions are not generalizable to all of the resolutions done by RTC since January 1, 1993. RTC provided written comments on a draft of this report. The comments are summarized on page 8 and reprinted in appendix I. We did our work between June and October 1994 at RTC headquarters in Washington, D.C. 
Our work was done in accordance with generally accepted government auditing standards. RTC changed its corporate policies to require that uninsured depositors share in thrift losses if necessary to achieve least costly resolutions and to curtail its practice of selling performing assets during conservatorship operations. These changes brought RTC into compliance with FDICIA’s uninsured depositor requirements and enabled RTC to better conform with the act’s requirement that it evaluate other resolution methods before selling assets. In addition, RTC’s implementation of a policy to extend a preference to minority bidders when making resolution decisions appeared consistent with FDICIA’s least-cost requirements. Also, for the three resolutions we reviewed, RTC continued to select the resolution method it determined to be the least costly and took several actions in response to recommendations resulting from our first review that have enhanced its resolution process. Specifically, it improved the documentation of its marketing strategies, the consideration given to bids that did not conform to its preferred marketing strategies, and the bases for its resolution decisions. It also changed the timing of its liquidation cost estimates. Our review of the one resolution that involved the uninsured depositor policy change showed that, consistent with FDICIA requirements, RTC paid uninsured depositors only that portion of their uninsured deposits equal to the expected pro rata share of the estimated proceeds from the resolution of the failed institution. RTC initially paid the uninsured depositors a 50-percent advance dividend, which was calculated by multiplying the book value of the assets by a percentage based on RTC’s historical asset recovery rates, and then RTC reduced that amount by an arbitrarily determined 18 percent to provide a conservative cushion. About 6 months later, RTC was able to pay an additional 24 percent on the basis of actual asset recoveries. 
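The advance-dividend arithmetic described above can be sketched in a few lines. The asset book value, recovery rate, and claim total below are invented for illustration; only the 18-percent cushion and the roughly 50-percent initial and 24-percent follow-up payouts come from the case discussed in this report.

```python
# Hypothetical illustration of the advance-dividend arithmetic described
# above. The 18-percent cushion and the ~50%/24% payout rates come from
# the report; the dollar figures and recovery rate are invented.

def advance_dividend_rate(asset_book_value, historical_recovery_rate,
                          total_claims, cushion=0.18):
    """Initial payout per dollar of claims: estimated recoveries on the
    assets (book value times historical recovery rate), reduced by a
    conservative cushion, spread pro rata over all claims."""
    estimated_recoveries = asset_book_value * historical_recovery_rate
    return estimated_recoveries * (1 - cushion) / total_claims

# Suppose $100 million of assets, a 61% historical recovery rate, and
# $100 million of claims: 0.61 * 0.82 = 0.5002, i.e. roughly the
# 50-percent advance dividend on uninsured deposits.
initial_rate = advance_dividend_rate(100e6, 0.61, 100e6)
print(f"initial advance dividend: {initial_rate:.0%}")

# Once actual recoveries come in, a further dividend can be paid; the
# report notes an additional 24 percent about 6 months later.
print(f"cumulative payout: {initial_rate + 0.24:.0%}")
```

Under these invented figures, trimming estimated recoveries of 61 percent of claims by the 18-percent cushion yields the 50-percent advance dividend described above.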
This resolution also involved the policy change concerning the timing of asset sales during conservatorships. Prior to the change, RTC generally sold high-quality assets, such as marketable securities, investments, and performing loans, from thrifts in conservatorship through a process called “downsizing.” RTC believed this approach maximized returns on asset disposition and, as a general proposition, resulted in least-cost resolutions. However, we were critical of RTC’s downsizing policy in our report on 1992 resolutions, because the policy was at variance with FDICIA’s requirement that RTC evaluate other available methods of thrift resolution prior to selling assets. RTC’s March 1993 policy change generally required that high-quality assets be retained in conservatorships, although—except for certain performing assets such as one-to-four family mortgages—they could be sold within 45 days of the announced resolution date of a thrift. Thus, performing assets could be sold at or around the time the thrift was to be marketed, providing RTC greater opportunity to assess available resolution methods prior to commencing asset sales. RTC changed the policy primarily because it found that retaining high-quality assets provided conservatorships a better return than selling the assets and investing the proceeds in lower yielding securities. In our view, this policy change made good economic sense and enabled RTC to better conform with the FDICIA requirement that it evaluate available resolution methods before selling high-quality assets. Our review of the resolution case file showed that, consistent with the revised policy, RTC retained high-quality assets in the conservatorship until close to the resolution date before selling them. We also found that RTC explored market interest in the thrift, selected the resolution alternative it determined to be the least costly, and adequately documented its marketing rationale and the bases for its resolution decision. 
RTC also made an initial liquidation cost estimate as of the date the thrift was placed in conservatorship to estimate the expected proceeds from resolution, which was necessary to determine the advance dividend to be paid to uninsured depositors. In addition, RTC made a second liquidation cost estimate, valuing assets based on its asset valuation review process, for purposes of determining the least costly resolution alternative. RTC officials told us they will follow this practice with future failed thrifts that are placed in conservatorship, but their intent is to resolve any further failing thrifts through ARP or ARP-like transactions. Either the new practice or ARP or ARP-like transactions will better provide for RTC’s conformance with FDICIA. We also noted improvements in RTC’s resolution process during our review of the case files of the two other thrift resolutions we selected. We found, for example, that RTC adequately documented the marketing rationale, the bases for its resolution decisions, and the consideration given to bids that did not conform to its preferred marketing strategy. It also selected the resolution alternative it determined to be the least costly. These two resolved thrifts were subject to RTC’s new policy, which gave a preference to offers from minority bidders for acquiring thrifts or their branches located in predominantly minority neighborhoods (PMN). This PMN policy, mandated by the RTC Completion Act, essentially required RTC to give a minority bidder the opportunity to match the high nonminority bid and thus become the winning bidder. The program’s premise was that the matching minority bid would result in the least possible resolution cost to RTC, since it is to be considered only after RTC has determined the least costly resolution alternative based on its review of all bids received. 
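The matching premise described above can be sketched as simple selection logic. This is a hypothetical illustration, not RTC's actual procedure; the bidder names and resolution costs are invented.

```python
# Toy sketch of the PMN matching rule described above. Not RTC's actual
# procedure; bidder names and costs to RTC are invented for illustration.

def select_winning_bid(bids):
    """bids: list of (bidder, cost_to_rtc, is_minority_bidder).
    The least costly bid wins, but a minority bidder matching the least
    costly bid takes precedence, so the resolution cost is unchanged."""
    least_cost = min(cost for _, cost, _ in bids)
    for bidder, cost, is_minority in bids:
        if is_minority and cost == least_cost:
            return bidder
    # No minority match: the least costly bid simply wins.
    return next(b for b, cost, _ in bids if cost == least_cost)

bids = [("Acquirer A", 40, False), ("Acquirer B", 45, True)]
print(select_winning_bid(bids))        # no minority match: Acquirer A wins

bids.append(("Acquirer C", 40, True))  # minority bidder matches the low bid
print(select_winning_bid(bids))        # Acquirer C becomes the winner
```

Because a minority bid prevails only when it equals the least costly bid already identified, the match never increases RTC's resolution cost, which is the premise described above.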
Our review of the two resolutions showed that RTC applied its PMN policy as designed, which was consistent with the least-cost provisions of FDICIA. In one of the resolutions, minority buyers were successful bidders for two of the five PMN offices of an entire thrift located in a PMN because their bids, considered with all other bids, produced the least costly resolution alternative. In the second resolution, the otherwise winning minority bidder for the thrift’s two PMN offices did not get the required regulatory approval, and thus its bids were disallowed. In both resolutions, RTC selected the bids that it determined to be the least costly alternative. RTC has made substantive improvements to its resolution process. It changed its treatment of uninsured depositors and now complies with related FDICIA requirements; it changed the timing of its sales of high-quality assets from thrifts in conservatorship and the timing of its liquidation cost estimates, thereby better providing for its conformance with the act’s requirements; and it improved various aspects of its resolution documentation, as we had recommended. RTC also continued to select the resolution alternatives it determined to be least costly for the three resolutions we reviewed, including the selection of alternatives for the two resolutions involving RTC’s new PMN program. As our review did not disclose significant noncompliance, and unless thrift failures accelerate by June 30, 1995, we do not plan to issue further reports on RTC’s compliance with the least-cost provisions of FDICIA. RTC, in its written comments on a draft of this report, agreed with the content and conclusions. RTC’s comments are reprinted in appendix I. We are sending copies of this report to RTC’s Deputy and Acting Chief Executive Officer; the Chairman of the Thrift Depositor Protection Oversight Board; the Chairman, Federal Deposit Insurance Corporation; and other interested parties. 
This report was prepared under the direction of Mark J. Gillen, Assistant Director, Financial Institutions and Markets Issues. Other major contributors are listed in appendix II. If there are any questions about this report, please contact me on (202) 512-8678. Jeanne Barger, Evaluator-in-Charge | Pursuant to a legislative requirement, GAO reviewed the Resolution Trust Corporation's (RTC) compliance with a statutory requirement to: (1) resolve failed thrifts in the least costly manner; and (2) calculate and document its evaluation of alternative resolutions of failed thrifts. 
GAO found that: (1) RTC has improved its resolution process and curtailed its practice of selling performing loans during conservatorship in order to comply with the least cost requirement; (2) RTC policy to extend a preference to minority bidders when making resolution decisions appears consistent with the least cost requirement; (3) for the three resolutions it reviewed, RTC chose the resolution alternative it determined to be the least costly and, in response to GAO recommendations, adequately documented its marketing strategies and the bases for its resolution decisions; (4) where relevant, RTC has implemented changes to its corporate policies regarding the treatment of uninsured depositors and the timing of asset sales during conservatorships, which brought RTC into compliance with other statutory requirements as well as the least cost requirement; (5) RTC has changed the timing of its liquidation cost estimates so that it makes its initial estimate when a failed thrift is placed in conservatorship; (6) among other things, RTC initial liquidation cost estimates determine the amount of estimated losses uninsured depositors must absorb; and (7) RTC efforts to resolve failing thrifts through its accelerated resolution program have brought RTC into better conformance with the least-cost statutory requirement. |
Agencies across the federal government, such as the National Oceanic and Atmospheric Administration and the National Aeronautics and Space Administration, collect and manage many types of climate information, including observational records from satellites and weather monitoring stations on temperature and precipitation; projections from complex climate models; and other tools to make this information more meaningful to decision makers. Such information includes the following: Information and analysis about observed climate conditions. This includes information on, for example, temperature, precipitation, drought, storms, and sea level rise and how they may be changing in the local area. This type of information can be most easily conveyed by graphs and maps with some statistics on trends, variability, and data reliability. Information about observed climate impacts and vulnerabilities. This includes site-specific and relevant baselines of environmental, social, and economic impacts and vulnerabilities, resulting from observed changes in the climate against which past and current decisions can be monitored, evaluated, and modified over time. Projections of what climate change may mean for the local area. This includes, for example, projections based on easily understandable best- and worst-case scenarios with confidence intervals and probability estimates and examples of potential climate impacts. The projections may need to be downscaled from complex global-scale climate models to provide climate information at a geographic scale relevant to decision makers. Then, the information would need to be translated into impacts at the local level, such as how increased streamflow for a particular river may increase flooding. Information on the economic and health impacts of climate change. Observed and projected local impacts must be translated into costs and benefits, as this information is needed for many decision-making processes. 
Entities within the Executive Office of the President, such as the Council on Environmental Quality and the Office of Science and Technology Policy, have led specific government-wide climate information efforts, such as USGCRP’s May 2014 Third National Climate Assessment, which summarizes the impacts of climate change on the United States, now and in the future. Methods used to estimate the potential economic effects of climate change in the United States are based on developing research from a small but growing number of researchers. These methods are complex because they link different types of complicated climate and economic models to assess how projected changes in the climate could affect different sectors and regions. They produce imprecise results because of information and modeling limitations associated with (1) climate modeling uncertainty; (2) limited information on which to base models for specific economic sectors; (3) incomplete coverage of sectors, interactions among sectors, and climate change impacts; and (4) challenges of modeling over long time frames. Nonetheless, according to several experts we interviewed, the methods can convey useful insight into broad themes about potential climate damages across sectors in the United States. Methods used to estimate the potential economic effects of climate change in the United States are based on developing research being undertaken by a small but growing number of researchers, according to the literature we reviewed and several experts we interviewed. Researchers began developing methods to understand the economics of climate change starting in the early 1990s. These original methods—primarily designed to analyze the economic benefits and costs of reducing greenhouse gas emissions—typically assess the economic effects of climate change at a global or multinational scale, with little detailed information about specific regions or sectors within a country. 
As a result, some experts said that these original methods produce limited information about the economic effects of climate change within different sectors in the United States. Since the early 2000s, researchers have developed new methods that provide more detailed information about the economic effects of climate change in the United States. Advances in knowledge about the historical relationships between changes in temperature, precipitation, and other climatic variables and the economy; access to data and information about the physical impacts of climate change; and a growth in computing power, among other things, have enabled the development of methods to assess economic effects in specific sectors and regions of the United States, according to literature we reviewed. To date, the new methods have been used primarily to quantify the economic effects of climate change on certain economic sectors, such as agriculture, health, and energy, but the research has been expanding to include additional sectors, such as infrastructure and water resources. Only recently have studies analyzed the economic effects of climate change using frameworks that can compare effects across different sectors and regions within the United States on a national scale. According to many experts we interviewed, the following are the only two such national-scale research studies: American Climate Prospectus: This study was published in October 2014 by the Rhodium Group and assessed the economic effects of potential changes in temperature, precipitation, sea level, and extreme weather events on six sectors of the U.S. economy—coastal property, health, agriculture, energy, labor productivity, and crime—within different regions of the country. 
According to the study, its intent was to provide information on the probability, timing, and scope of a set of economically important climate change impacts comparable across sectors, rather than a conclusive answer about how much climate change will cost the United States. The study’s authors noted that they designed a research framework that could expand and improve as the climate science and economics fields continue to develop. Climate Change Impacts and Risk Analysis: This is an ongoing research project coordinated by the Environmental Protection Agency (EPA), which published a summary study in 2015. The goal of the study was to assess the extent to which reducing global greenhouse gas emissions may help avoid or reduce climate change impacts and adverse economic effects on six U.S. sectors—health, infrastructure, electricity, water resources, agriculture and forestry, and ecosystems—and enabled the comparison of climate risks across these sectors. According to the authors of the Climate Change Impacts and Risk Analysis study, the study estimated the benefits to the United States of global action on climate change. As such, the analysis presented in the report did not inform on alternative actions and did not constitute a benefit-cost assessment of actions to address climate change. In addition, EPA officials stated that the report was meant to convey broad themes about climate damages across sectors of the United States based on peer-reviewed data and methods. Like the authors of the American Climate Prospectus study, the authors of the Climate Change Impacts and Risk Analysis study noted in the report that the breadth and depth of the project, including the number of sectors covered, will expand in future work as the fields of climate science and economics continue to develop. According to EPA officials, this expanded research will contribute physical and economic information to USGCRP’s next National Climate Assessment. 
Methods used to estimate the potential economic effects of climate change in the United States are complex because, according to literature we reviewed and many experts we interviewed, they use different types of complicated climate and economic models that are linked together in a sequential framework that uses the results of one model as input to another. The different types of climate and economic models include the following: Climate models: Climate models are mathematical representations of physical, chemical, and biological processes in Earth’s climate system, including the atmosphere, land surface, ocean, and sea ice. These models use scenarios of future greenhouse gas emissions as input, such as a scenario in which current trends in greenhouse gas emissions continue or a scenario in which future emissions are reduced. Based on these scenarios, the models simulate future changes in climate variables, such as changes in temperature and the amount of precipitation. In the United States, global-scale climate models are developed at federally funded institutions, such as the National Center for Atmospheric Research. The American Climate Prospectus and Climate Change Impacts and Risk Analysis studies both used climate models from the National Center for Atmospheric Research, including the Model for the Assessment of Greenhouse- gas Induced Climate Change and the Community Atmosphere Model. Economic models for individual sectors: These models estimate the direct economic effects in certain sectors from changes in climate variables, such as temperature, and related climate impacts, such as sea level rise. Some economic models for individual sectors are based on relatively new econometric research that uses historically observed relationships between climate variables and economic effects to assess the potential economic effects of climate change on certain segments of the economy. 
For example, the American Climate Prospectus study used analyses of the historical relationships among temperature and changes in mortality, labor productivity, and violent crime, among other things, to project the economic effects of climate change. Other types of sector-specific models use known or theoretical relationships among climate variables and economic effects to make projections. These types of process-based models include, for example, the Forest and Agricultural Sector Optimization Model, used in the Climate Change Impacts and Risk Analysis study, which estimates changes in market outcomes associated with projected impacts of climate change on U.S. crop and forest yields. The 2015 Climate Change Impacts and Risk Analysis report included 18 process-based models and 2 econometric models, according to EPA officials. Also, a version of the U.S. Energy Information Administration’s National Energy Modeling System, maintained by the Rhodium Group and used in the American Climate Prospectus study, models the impact of changes in temperature on energy demand, power generation, and electricity costs. Economy-wide models: These models—called Computable General Equilibrium (CGE) models—can help assess how the entire economy, including individual sectors or regions, might react to the impacts of climate change and how their reactions can have implications for other sectors and regions. For example, as a result of changes in climate (e.g., higher temperatures), increases in energy demand and costs can increase the price of a wide range of goods, and decreases in crop yields in Iowa can affect food prices nationwide. As they encompass multiple sectors in a model of the U.S. economy, CGE models can more fully account for interactions between sectors than individual sector models can, potentially affecting findings on the effects of climate change. 
The American Climate Prospectus used a CGE model to examine how these types of interactions among sectors affect the magnitude and regional variation of effects on the sectors analyzed in the study. According to EPA officials, although the Climate Change Impacts and Risk Analysis study did not use a CGE model to analyze interactions among sectors, some interaction between sectors was analyzed. For example, water supply and availability projections from the water balance model were used to inform irrigation supply in the agricultural sector. Figure 1 provides an example of how climate models, economic models for specific sectors, and economy-wide models can be linked together sequentially in a framework to estimate the economic effects of climate change. While the two national-scale studies of the economic effects of climate change across sectors in the United States use sequential modeling frameworks similar to the one shown in figure 1, other methods—referred to by several experts we interviewed as complex integrated assessment models—also incorporate feedback between the different climate and economic modeling components. Such models include the Integrated Global System Model, developed at the Massachusetts Institute of Technology, and the Global Change Assessment Model, developed at the Pacific Northwest National Laboratory. Some experts we interviewed noted that these complex integrated assessment models have traditionally been used to analyze the effects of different policies on the energy sector. The models currently have limited capability to quantify economic effects on individual sectors, according to some experts we interviewed. For example, some experts we interviewed said that the Integrated Global System Model can roughly quantify the economic effects of climate change in the health and agriculture sectors. 
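The sequential linking described above can be illustrated with a toy pipeline in which each model's output becomes the next model's input. Every function, scenario name, and coefficient below is invented for illustration and is not drawn from the actual models named in the studies.

```python
# Toy sketch of a sequential modeling framework like the one described
# above: climate model -> sector model -> economy-wide model. All
# functions and coefficients are invented for illustration only.

def climate_model(emissions_scenario):
    """Map an emissions scenario to projected warming (degrees C)."""
    return {"continued-growth": 4.0, "reduced-emissions": 2.0}[emissions_scenario]

def sector_model(warming_c):
    """Direct damages in one sector, as a fraction of sector output.
    Convex in warming, so damages grow faster at higher temperatures."""
    return 0.01 * warming_c ** 2

def economy_wide_model(direct_damage_fraction):
    """Scale direct damages for cross-sector interactions (e.g., higher
    energy costs raising the prices of other goods)."""
    return direct_damage_fraction * 1.25

for scenario in ("continued-growth", "reduced-emissions"):
    damages = economy_wide_model(sector_model(climate_model(scenario)))
    print(f"{scenario}: {damages:.1%} of sector output")
```

Real frameworks replace each toy function with a complex model, and complex integrated assessment models additionally feed results back upstream rather than running strictly in sequence.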
According to the literature we reviewed and many experts we interviewed, methods used to estimate the potential economic effects of climate change in the United States, and the national-scale studies that use them, produce imprecise estimates of economic effects because of data and modeling limitations associated with (1) climate modeling uncertainty; (2) limited information on which to base models for specific economic sectors; (3) incomplete coverage of sectors, interactions among sectors, and climate change impacts; and (4) challenges of modeling over long time frames. According to a 2012 National Academies report, climate models have advanced over the decades to provide much information that can be used for decision making today, but there are and will continue to be large uncertainties associated with climate modeling. According to literature we reviewed, future greenhouse gas emissions are one key source of uncertainty because they will depend on factors that are extremely challenging to predict decades into the future, such as rates of economic and population growth, technological developments, and policy decisions. Climate models use as input different scenarios that represent a range of potential future greenhouse gas emissions. These scenarios are based on various actions that could be taken to reduce future emissions, such as particular policies initiated by the international community. For example, the Climate Change Impacts and Risk Analysis study used a scenario based on significant global action being taken to reduce future emissions. The study does not specify what significant global action would cost the United States, or what it would entail, and such action may or may not occur. Another key source of uncertainty is climate sensitivity, the long-term warming expected from a doubling of atmospheric greenhouse gas concentration. In its 2013 Fifth Assessment Report, the IPCC estimated that the likely range for climate sensitivity is from 1.5 to 4.5 degrees Celsius. The report also indicated that a “best” estimate could not be determined. 
The American Climate Prospectus study incorporated a range of values for climate sensitivity in its analysis, and the Climate Change Impacts and Risk Analysis study generally used a single value to represent the sensitivity of the climate to rising greenhouse gas concentrations. The methods rely on limited information that can be used to model the relationships between climate and society, requiring assumptions about how society will respond to future changes in the climate. For example, some sector-specific models assume that historical observed relationships between weather events and economic output variables—such as between temperature and crop production—will represent the effects of long-term climate change. However, over the long time periods under which climate change is expected to occur, individuals, businesses, and government institutions may develop new approaches or technologies to adapt to climate change, lessening its economic effect. For example, one expert said that farmers may respond by making different crop choices. On the other hand, future climate change may have effects that are not revealed in historical events. According to one study, the likelihood that the climate will produce unprecedented effects—for example, heat so extreme that it can induce heat stroke in healthy individuals—will increase as temperatures rise outside the realm of past human experience. Similarly, data showing how populations will adapt to climate change are limited, so the methods use different assumptions about the extent to which society will adapt to climate change in different sectors. For instance, the Climate Change Impacts and Risk Analysis study assumed that for some sectors, such as agriculture, cost-effective adaptation actions will be taken, such as adjusting the type of crops grown in a region. 
For the coastal sector, the study considered four adaptation strategies: beach nourishment (adding sand), property elevation, shoreline armoring (using physical structures to protect from erosion), and property abandonment. However, for other sectors, such as the labor sector, the study did not take into account potential adaptation measures—such as using potential technological advances to reduce exposure—that could reduce future economic effects. The American Climate Prospectus study generally assumed that no adaptation would occur in response to climate change. Also, the methods might not incorporate potential market inefficiencies. For example, in the Climate Change Impacts and Risk Analysis, the coastal sector analysis does not consider how subsidized insurance might affect adaptation actions. If insurance prices do not reflect actual risks—such as in the presence of insurance subsidies—insurance availability might disincentivize adaptation actions. The methods have not included all sectors because the U.S. economy is complex and the information available for different sectors and climate impacts varies. Typically, studies using the methods include sectors for which the most information about climate impacts and economic effects is available. For example, both the American Climate Prospectus and Climate Change Impacts and Risk Analysis studies selected sectors based on whether sufficient information and modeling methods were available for the sector and the potential for impacts in the sector to affect the country as a whole, among other things. In addition, the methods do not fully cover some of the sectors that are included. For example, the American Climate Prospectus study’s analysis of the agricultural sector included the impacts of temperature and precipitation changes on the largest commodity crops—maize, wheat, soy, and cotton—but not on fruits, vegetables, nuts, or livestock, which dominate the agricultural sectors in some states. 
Furthermore, the methods do not always capture interactions between sectors that may influence economic effects. Such interactions include the ability of capital and labor to move between sectors in the economy, potentially lessening the economic effects of climate change; the impact of changes in water supply on the cost of electric power generation; or the effects of an extreme event cascading throughout a region over time by redistributing the workforce or raising the cost of capital. Finally, the methods do not include potential impacts that fall outside of the market economy—such as the loss of species from ecosystem disruptions and threats to endangered historical or cultural monuments from rising sea levels or more intense storms—because many of these impacts are difficult to quantify in monetary terms. Modeling the effects of climate change is challenging because, among other things, it often involves projections over long periods into the future, and these projections become more uncertain over time. For example, the American Climate Prospectus and Climate Change Impacts and Risk Analysis studies both included projections of economic effects through the end of this century, but how the economy will evolve and how society may respond to climate changes over such time frames is inherently uncertain. As a result of this high degree of uncertainty, the methods require that modelers make assumptions about these factors. For example, the American Climate Prospectus study assumed that the structure of the U.S. economy would remain as it is today—an assumption the study notes is almost guaranteed to be wrong—and therefore provided a projection of the effect of potential climate changes through the end of this century on today’s economy, as opposed to projecting these effects on the economy of the future. 
The Climate Change Impacts and Risk Analysis study made assumptions about future economic growth and labor productivity growth but did not report the sensitivity results associated with these and other key economic assumptions. Challenges also arise with discounting future benefits and costs, particularly when modeling over long time frames. According to OMB, benefits or costs that occur sooner are generally more valuable than those that occur later. However, according to the literature we reviewed and some experts we interviewed, the appropriate discount rate to apply when considering benefits and costs across generations, such as those associated with climate change, is subject to much debate. According to one of its authors, this debate was one reason why the American Climate Prospectus study did not present its estimates in discounted terms. For several sectors, the Climate Change Impacts and Risk Analysis study presented some estimates in discounted present value terms consistent with OMB and EPA guidance but presented undiscounted estimates of economic effects for all sectors for 2050 and 2100. Nevertheless, climate change could have both positive and negative potential economic effects at different points in time in the future. Discounting is a way to account for differences in the timing of these effects. As a result of the challenges of modeling over long time frames, economic analyses may assess the uncertainty in assumptions and data used in making long-term projections. For example, according to one author, the American Climate Prospectus study provided ranges of estimated economic effects for each sector to help account for uncertainty associated with the underlying climate and economic models, such as uncertainty in climate sensitivity. The Climate Change Impacts and Risk Analysis study primarily reported results as point estimates, not providing a range of estimated effects, and reported on only a limited assessment of uncertainty. 
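The discounting discussed above follows the standard present-value formula used in OMB guidance on benefit-cost analysis. A minimal sketch, where the $100 billion damage figure and 50-year horizon are hypothetical placeholders (the 3 percent rate matches the rate the Climate Change Impacts and Risk Analysis study reports using, but the calculation below is illustrative only):

```python
# Minimal sketch of present-value discounting of a future cost.
# The damage amount and horizon are hypothetical, not study estimates.

def present_value(amount, rate, years):
    """Discount a future amount back to today at a constant annual rate."""
    return amount / (1 + rate) ** years

# A hypothetical $100 billion in climate damages incurred 50 years from
# now is worth far less in today's terms when discounted at 3 percent:
pv = present_value(100e9, 0.03, 50)
print(round(pv / 1e9, 1))  # about 22.8 (billion dollars)
```

Because the effect compounds over decades, the choice of rate dominates long-horizon comparisons: the same hypothetical damage discounted at 7 percent would be worth only about $3.4 billion today, which is one reason the intergenerational discount rate is so heavily debated.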
The authors of the study further acknowledged that exploration of the uncertainties and limitations throughout the study, including the development of ranges for all impact projections, would strengthen the Climate Change Impacts and Risk Analysis study’s results. Several experts we interviewed noted that even though the methods produce imprecise results, they can convey useful insight into broad themes about potential climate damages across sectors in the United States. For example, according to several experts we interviewed, these methods can provide valuable research information about the potential magnitude of economic effects and potential areas of greatest concern, including where assets may be at greatest risk. Some other experts told us that using the methods can help identify areas where additional research would be most useful. Finally, another expert said that exploring differences among the results from various models and scenarios can help researchers explore and better understand some of the factors that drive the potential economic effects of climate change. Recent and emerging research could produce additional insight and begin to address some of the limitations of the methods, including those related to incomplete coverage of sectors and climate impacts, according to some experts we interviewed. For example, a new study published in June 2017, written largely by the same authors as the American Climate Prospectus study together with others, expands on that research and provides additional insight into the potential economic effects of climate change in particular sectors and regions of the United States by examining county-level effects. 
In addition, since the 2015 Climate Change Impacts and Risk Analysis summary study was published, EPA has expanded the research project to enhance the analysis of sectors covered in the 2015 report; expand analyses of adaptation for some of these sectors; and include additional sectors such as winter recreation, Alaskan infrastructure, and rail. According to EPA officials involved in the study, they plan to publish a study summarizing these new modeling analyses, estimating impacts across 24 sectors, in conjunction with the Fourth National Climate Assessment. The two national-scale studies—the American Climate Prospectus and the Climate Change Impacts and Risk Analysis—and many of the experts we interviewed suggested that although the methods are developing and produce imprecise results, the potential economic effects of climate change could be significant in many sectors across the U.S. economy and unevenly distributed across U.S. sectors and regions. The national-scale studies and many experts we interviewed suggested that climate change could result in significant economic effects in the United States, and the studies indicated that these effects will likely increase over time for most of the sectors analyzed. As shown in table 1, the American Climate Prospectus study estimated net costs in the near term for most of the six sectors analyzed and net costs by the end of the century for almost all of the six sectors analyzed. For example, the study projected potential economic costs from climate change impacts such as damage to coastal property from storms, decreases in labor supply from higher temperatures, and increases in energy expenditures for air conditioning. The study estimated that the likely combined direct economic effects of the six sectors could reach 0.7 to 2.4 percent of the U.S. gross domestic product per year by the end of this century. 
In all sectors analyzed, estimated net economic costs increased over time, becoming greater by late in the century. Specifically, for all sectors with net economic costs at the lower and upper bounds of the likely ranges of economic effects, the study projected that those costs would increase by roughly two to four times from mid-century to late century. For example, the study estimated that coastal property losses from sea level rise and increases in the frequency and intensity of storms could range from $4 billion to $6 billion per year in the near term (i.e., 2020 through 2039), increasing to a range of $51 billion to $74 billion per year by late century. According to several experts we interviewed, the estimates presented in the study are not precise and may be underestimated because the study did not quantify all known climate impacts. While the results of the Climate Change Impacts and Risk Analysis study cannot be directly compared with those of the American Climate Prospectus study, the Climate Change Impacts and Risk Analysis study also suggested that climate change could have significant economic effects on several of the economic sectors analyzed, and that those effects would increase by the end of the century. The results of this study, shown in table 2, were primarily presented in terms of the benefits associated with significant global action to reduce greenhouse gas emissions. According to EPA officials involved in the study, the results highlighted sectors with potentially higher economic effects of climate change. For some sectors, the study estimated the costs of climate change without any emissions reductions. For example, the study reported $5.0 trillion in economic costs to coastal property from climate change through 2100 (discounted at 3 percent). However, the study did not explain how these estimated costs were obtained, and these estimated costs did not match those reported in the underlying journal papers. 
EPA officials told us that the scenario that led to this estimate was added as a result of reviewer comments. According to the two national-scale studies and several experts we interviewed, potential economic effects could be unevenly distributed across sectors and regions. First, the studies and some experts suggested that climate change will affect certain sectors more than others. The results of the American Climate Prospectus study suggested that nationwide economic effects on sectors, including human health, labor, coastal infrastructure, and energy, could exceed the economic effects on the agriculture and crime sectors. The factors driving the economic effects on the health, labor, coastal infrastructure, and energy sectors included costs associated with, respectively, (1) an increase in premature mortality from higher temperatures, (2) a reduced number of hours worked because of high temperatures, (3) infrastructure damage from increased flooding and storm surge, and (4) increased energy demand. In the near term, the annual sector-specific economic effects reported in this study for 2020 through 2039 varied from a range of $8.5 billion in benefits to $9.2 billion in costs for the agriculture sector up to a range of $0.1 billion to $22 billion in costs from changes in labor productivity. In the long term, for 2080 through 2099, the annual sector-specific economic effects reported in this study varied from a range of $12 billion in benefits to $53 billion in costs for the agriculture sector up to a range of $90 billion to $506 billion in mortality costs for the health sector. The Climate Change Impacts and Risk Analysis study suggested that the benefits from emissions reductions would affect some sectors more than others. For example, among the sectors analyzed, the study reported that emissions reductions would generate relatively larger effects in 2050 for sectors relating to human health, water resources, and electric power. 
The factors driving the estimated economic effects in this study included lost labor hours and premature mortality from poor air quality and extreme heat in the health sector, costs to water users—such as domestic and industrial water users—when sufficient water is not available, and costs to expand power system capacity in the energy sector. Another difference in the economic effects across different sectors identified in the studies is that adaptation actions can reduce the negative economic effects of climate change in particular sectors, according to the national-scale studies and several experts we interviewed. For example, the Climate Change Impacts and Risk Analysis study reported that protective adaptation measures—such as beach nourishment, property elevation, shoreline armoring, and property abandonment—can reduce projected coastal property damage in the contiguous United States. In addition, some experts we interviewed said that adaptation actions in coastal areas can be cost effective. However, according to the studies and some experts, information on the cost-effectiveness of adaptation actions in many other sectors remains limited. With regard to variation across regions, the studies suggested that the economic effects of climate change could be more significant in some geographic areas than others. For example, the American Climate Prospectus study reported that depending on the specific climate impacts evaluated, the combined direct net economic effects for each state could range from annual benefits of 0.8 to 4.5 percent of economic output in Vermont to annual costs of 10.1 to 24 percent of current economic output in Florida by the end of the century. In the Tampa Bay, Florida, area alone, the Climate Change Impacts and Risk Analysis study estimated that damage to coastal property from sea level rise and storm surge could reach $2.8 billion per year by 2100. Figure 2 shows examples of potential economic effects in different U.S. geographic areas. 
According to the American Climate Prospectus study, the Southeast, Midwest, and Great Plains regions will likely experience greater combined economic effects than other regions, largely because of coastal property damage in the Southeast and changes in crop yields in the Midwest and Great Plains. The Climate Change Impacts and Risk Analysis study also reported economic effects in particular regions. For instance, according to the study, ocean acidification in the Pacific Northwest is already affecting shellfish harvests, which the study projected could decline by 32 to 48 percent by the end of the century in a scenario without emissions reductions. In addition, under the same scenario, the study estimated that wildfires could burn an additional 1.9 million acres annually in the Rocky Mountains by the end of the century, compared to today, which would significantly increase wildfire response costs. Some experts noted the importance of considering the economic effects of climate change in specific sectors and regions because nationwide estimates can average out some important differences. For example, in the agricultural sector, climate change could cause economic benefits in northern regions of the country from moderate warming, which could offset some agricultural economic losses from more extreme heat in southern regions. Information on the potential economic effects of climate change could help federal decision makers better manage climate risks, according to leading practices for climate risk management and economic analysis we reviewed and the views of several experts we interviewed. Several experts we interviewed said that existing information on the potential economic effects of climate change could help federal decision makers identify significant climate risks to the federal government. Further, additional economic information could help federal, state, local, and private sector decision makers manage climate risks that drive federal fiscal exposure. 
Even though existing information on the potential economic effects of climate change, such as that from the two national-scale studies, is imprecise, it is a first step toward effective climate risk management at the federal level. Several experts we interviewed said federal decision makers could use the insight this information provides about economic damages in various sectors or regions for different scenarios. Along with other available information about current and future climate risks, this information could begin to inform federal decision makers about climate risks in different sectors and help identify areas of high fiscal exposure. For example, several experts we interviewed said that existing research indicates that infrastructure in coastal areas faces high financial risks relative to the risks posed to many other sectors or geographic regions. In addition, according to some experts we interviewed, projections about adverse economic effects in coastal areas, when considered with other information—for example, disaster costs already incurred such as the approximately $50 billion appropriated for recovery from Hurricane Sandy—could help decision makers better understand the potential magnitude of risks to coastal areas and identify vulnerable coastal infrastructure as a source of potentially high fiscal exposure. Such a first step in risk assessment is consistent with leading practices for climate risk management and federal standards for internal control. The National Academies’ 2010 leading practices state that managing risk in the context of climate change involves using the best information, including economic information, to assess risks and determine priorities for managing them. 
Further, in its 2010 report, the National Academies concluded that an iterative process—in which decisions are based on an evolving understanding of the underlying natural and social science—can improve decisions related to climate change risk management because of the opportunities it offers for considering uncertainty. This is consistent with what we reported in December 2016—that the first steps in developing enterprise risk management involve identifying and assessing risks to understand the likelihood of impacts and their associated consequences. As we found in that report, federal managers often handle complex and risky missions, such as preparing for and responding to natural disasters and building and managing safe transportation systems. While it is not possible to eliminate all uncertainties associated with these missions, risk management strategies exist to help federal managers anticipate and manage risks. In addition, under federal standards for internal control, management—in this case, the federal government—should identify, analyze, and respond to risks related to achieving the defined objectives. For example, management estimates the significance of a risk by considering the magnitude of impact, likelihood of occurrence, and nature of the risk—which provides a basis for responding to the risks—and management may need to conduct periodic risk assessments. Our past work and the work of others have reported that climate change impacts and their economic effects have already cost the federal government money and pose future risks that could lead to increased federal fiscal exposure. As we concluded in our October 2009 report, given the potential magnitude of climate change and the lead time needed to adapt, preparing for these impacts now may reduce the need for far more costly steps in the decades to come. 
For example, we reported in our February 2013 High-Risk update that federal disaster aid functions as the insurance of last resort in certain circumstances, increasing the federal government’s fiscal exposure to a changing climate. We also reported in December 2014 that from fiscal years 2004 through 2013, the Federal Emergency Management Agency obligated about $95 billion in federal disaster assistance for 650 major disasters declared during this time frame. Then, in July 2015, we reported that the federal government does not adequately plan for disaster resilience and that most federal funding for hazard mitigation is available after a disaster. Even with the magnitude of these disaster recovery costs, the federal government does not have government-wide strategic planning efforts in place to help set clear priorities for managing significant climate risks before they become federal fiscal exposures. The federal government has not undertaken strategic, government-wide planning to manage climate risks, using the best available information, including information on the potential economic effects of climate change, to identify and assess significant risks. In May 2011, we found that a government-wide strategic planning process could enhance how priorities for an overall federal response to climate change are set and recommended that the Executive Office of the President establish federal strategic climate change priorities. The Executive Office of the President has not implemented this recommendation. Later, in July 2015, we found that the federal government had no comprehensive, strategic approach to identifying, prioritizing, and implementing investments for disaster resilience. This report concluded that a strategy to guide federal investments in disaster resilience could result in more effective returns on these investments. Building disaster resilience can include taking actions to adapt to the effects of climate change, as we found in May 2016. 
In addition, in our February 2015 High-Risk update, we reported that federal officials do not have a shared understanding of strategic government-wide priorities related to climate change, which, along with other issues, limits the federal government’s ability to manage climate risks. In February 2017, we found that federal agencies had undertaken various strategic planning efforts, but it was unclear how they related to each other or whether they amounted to a government-wide approach for reducing federal fiscal exposures. Subsequently, a March 2017 Executive Order rescinded some of these planning efforts and created uncertainty about whether other planning efforts would continue or take their place. The National Academies’ 2010 leading practices state that climate change risk management efforts need to be focused where immediate attention is needed and that, by prioritizing federal climate risk management activities well, the federal government can help to minimize negative impacts and maximize opportunities associated with climate change. In addition, most experts we interviewed told us that federal decision makers should prioritize risk management efforts on significant climate risks that create the greatest fiscal exposure. By using information on the potential economic effects of climate change to assess and identify significant climate risks and craft appropriate federal responses, the federal government could take an initial step in establishing government-wide priorities to manage significant climate risks, which we recommended in May 2011 to reduce federal fiscal exposure and continue to believe is important. This initial step could include establishing a strategy to identify, prioritize, and guide federal investments to enhance resilience against future disasters, as we recommended in July 2015. 
To achieve the ultimate objective of establishing government-wide priorities, decision makers need information on policy alternatives that are representative of all available alternatives and their economic effects, such as benefits and costs. The authors of the American Climate Prospectus study highlighted, for instance, that national decision makers must weigh the potential economic and social impacts of climate change against the costs of policies to reduce emissions or make our economy more resilient. Further, EPA officials stated that using information from national-scale economics reports to make policy choices would involve a number of intermediate analytical steps, including (1) estimating the federal risk exposure from the national or regional estimates, (2) identifying policy options, and (3) analyzing the costs and benefits of those options. The relevant point for decision makers, according to these EPA officials, is that multisector, national estimates of climate damages can be made available for use, though additional analysis may be needed for specific policy actions. A strategy to identify, prioritize, and guide federal investments to enhance resilience against future disasters could include additional information on the economic effects of climate change. Such economic information could help inform future efforts by federal, state, local, and private sector decision makers to manage climate risks, according to a 2010 National Academies report, our prior work, literature we reviewed, and several experts we interviewed. The 2010 National Academies report, literature we reviewed, and several experts we interviewed noted that to make informed adaptation choices, decision makers need more comprehensive information on economic effects to better understand the potential costs of climate change to society and begin to develop an understanding of the benefits and costs of different adaptation options. 
In addition, economic guidance generally states that investment decisions—which would include decisions about adaptation investments—should be informed by a consideration of both the benefits and costs of relevant alternatives. For example, OMB has issued guidance on using benefit-cost analyses to help federal agencies efficiently allocate resources through well-informed decision making. This guidance includes OMB Circular A-94, which, in certain circumstances, directs agencies to follow specific economic guidelines for benefit-cost and cost-effectiveness analyses of federal programs or policies. The American Climate Prospectus study also recognized the importance of balancing benefits and costs, stating that national policy makers must weigh the potential economic and social impacts of climate change against the cost of the policies to manage climate risks. When it comes to managing climate risks through adaptation, the literature we reviewed and several experts we interviewed noted that a full understanding of the adaptation alternatives would require information on the economic effects of climate change impacts, how adaptation may lessen some of these effects, and the costs of adaptation. In our 2013 High-Risk update, we reported that the federal government has a role to play in providing information to decision makers so they can make better choices about adapting to climate change, since their decisions can drive federal fiscal exposure. Moreover, we found in our 2015 High-Risk update that state, local, and private sector decision makers drive federal climate-related fiscal exposures because they are responsible for planning, constructing, and maintaining certain types of vulnerable infrastructure paid for with federal funds, insured by federal programs, or eligible for federal disaster assistance. 
Therefore, federal efforts to provide information to these decision makers could help them make more informed choices about how to manage climate risks, ultimately helping to reduce federal fiscal exposure. In November 2016, we reported that these decision makers need climate information—including economic information—that represents the best available information and is updated over time. Some experts we interviewed noted that emerging research—which includes updates to the national-scale studies of the economic effects of climate change—will help fill information gaps. Recognizing that decision makers need more comprehensive economic information to manage climate risks, the National Academies recommended in 2016 that USGCRP integrate social, behavioral, and economic science into the National Climate Assessment to support decision-making processes. EPA officials told us that, as a step toward this integration, the agency’s updates to the Climate Change Impacts and Risk Analysis project advance the understanding of economic effects of climate change. The officials said that this information is documented in new analyses serving as input to the next National Climate Assessment. While several experts we interviewed noted that information on the economic effects of climate change is currently relatively sparse, they also said that new information is still emerging. Climate change impacts are already costing the federal government money, and these costs will likely increase over time as the climate continues to change. Even though existing information on the potential economic effects of climate change, such as that from the two national-scale studies, is imprecise, it could help identify significant potential damages for federal decision makers—an initial step in the process for managing climate risks. 
Under the National Academies’ 2010 leading practices, climate change risk management efforts need to be focused on where immediate attention is needed, and by prioritizing federal climate risk management activities well, the federal government can help to minimize negative impacts and maximize opportunities associated with climate change. The 2010 National Academies report, literature we reviewed, and several experts we interviewed noted that to make informed adaptation choices, decision makers need more comprehensive information on economic effects to better understand the potential costs of climate change to society and begin to develop an understanding of the benefits and costs of different adaptation options. By using information on the potential economic effects of climate change to help identify significant climate risks and craft appropriate federal responses—such as establishing a strategy to guide federal investment to enhance resilience against future disasters—the federal government could take an initial step in establishing government-wide priorities to manage significant climate risks. To help prioritize and guide federal investments, such a strategy could include developing more comprehensive information on the potential benefits and costs of different adaptation options. We are making the following recommendation to the Executive Office of the President: The appropriate entities within the Executive Office of the President, including the Council on Environmental Quality, Office of Management and Budget, and Office of Science and Technology Policy, should use information on the potential economic effects of climate change to help identify significant climate risks facing the federal government and craft appropriate federal responses. Such responses could include establishing a strategy to identify, prioritize, and guide federal investments to enhance resilience against future disasters.
(Recommendation 1) We provided a draft of this report for review and comment to the Council on Environmental Quality, the Office of Science and Technology Policy, and EPA. The Council on Environmental Quality and the Office of Science and Technology Policy did not provide comments. EPA did not provide written comments on our findings and recommendation but instead provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Director of the Office of Science and Technology Policy, the Director of the Council on Environmental Quality, and the Administrator of the Environmental Protection Agency. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact J. Alfredo Gómez at (202) 512-3841 or gomezj@gao.gov or Oliver Richard at (202) 512-2700 or richardo@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. In this report, we examine (1) what is known about methods used to estimate the potential economic effects of climate change in the United States; (2) what is known about the potential economic effects of climate change in the United States; and (3) the extent to which, according to leading practices and experts, information about the potential economic effects of climate change could inform efforts to manage climate risks across the federal government.
To address our audit objectives, we conducted a literature search for studies that (1) described the methods used to develop estimates of the economic effects of climate change in the United States and (2) produced estimates of such effects at a national scale, across different sectors and regions. We targeted the literature search to studies that were published in 2005 or later to encompass the 10 years of research preceding the start of our work. We identified relevant studies through three efforts: (1) searching literature databases, including Scopus, Web of Science, EBSCO, ProQuest, PolicyFile, and OCLC databases; (2) referrals from experts we interviewed during semistructured interviews (a discussion of these interviews is included below); and (3) reviewing citations in literature we reviewed. In total, we identified 30 studies that were relevant to our objectives and scope. We reviewed these studies to identify common themes related to the types of methods used to estimate the economic effects of climate change in the United States, the limitations of these methods, and what is known about the economic effects of climate change in the United States. Of the 30 studies identified that described methods to estimate economic effects, 2 included estimates of the potential economic effects of climate change in the United States at a national scale, across different sectors and regions—the American Climate Prospectus study by the Rhodium Group and the Climate Change Impacts and Risk Analysis study by the Environmental Protection Agency. Many experts we interviewed confirmed that these two studies represented the best available estimates to date. To review the two national-scale studies, we used standard economic principles, similar to those embodied in federal and agency guidance, including a review of the statement of objective and scope, methodology, analysis of effects, sensitivity analysis, and documentation.
Through this assessment, we identified several limitations that affect the precision of the studies’ results and are common to the methods used to estimate the economic effects of climate change that were identified in literature we reviewed and by experts we interviewed. We discuss these limitations in the report. Finally, we interviewed the authors of these studies to discuss the studies’ methodologies and limitations. In addition, to address our audit objectives we conducted 26 semistructured interviews with economists and other experts we identified through snowball sampling based on expert referrals. Specifically, we interviewed experts who (1) were recommended by at least one other expert, (2) authored at least one study identified through our literature review, (3) were available and agreed to meet with us, and (4) had a range of views and expertise needed to address our objectives. For example, we interviewed experts who were knowledgeable enough about methods to estimate the economic effects of climate change impacts that they could discuss strengths and limitations of these methods. Repeated recommendations of the same experts indicated that we reached saturation of the field and were identifying the appropriate experts. We reviewed experts’ curricula vitae—to the extent they were available—to ensure that their areas of expertise and research were relevant to the engagement’s objectives and that we were gathering the range of expertise that we needed, including expertise on the strengths and limitations of the methods discussed in this report. 
During these interviews, we asked experts about (1) methods used to develop estimates of the economic effects of climate change impacts in the United States; (2) strengths and limitations these methods may have; (3) what is known about the economic effects of climate change in the United States; (4) potential federal fiscal exposures that could result from these effects; and (5) how, if at all, information about potential economic effects of climate change could inform climate risk management across the federal government. We interviewed 23 out of the 26 experts in person in select geographic areas: Berkeley, California; Stanford, California; Boulder, Colorado; Boston, Massachusetts; Cambridge, Massachusetts; and Washington, D.C. Because this is a nonprobability sample, our findings cannot be generalized to other experts we did not interview. Rather, these interviews provided us with illustrative examples of methods used to estimate economic effects of climate change, what is known about economic effects of climate change in the United States, and ways information about the potential economic effects of climate change could inform efforts to manage climate risks across the federal government. In addition, the specific areas of expertise varied among the experts we interviewed, so not all of the experts commented on all of the interview questions we asked. Finally, to address our third audit objective, we reviewed leading practices and principles of risk management to identify key elements. We reviewed these practices and principles to identify how, if at all, economic information could be considered in risk management frameworks. National Academies’ leading practices on climate risk management characterize climate change adaptation as a risk management strategy, so we then identified how information about the economic costs and benefits of climate change could be considered to manage climate risks. 
We also reviewed our reports related to risk management and climate change to determine what federal actions could reduce fiscal exposure because of climate risks. We then determined how, if at all, what is known about economic effects of climate change could help implement or enhance these actions. We conducted this performance audit from December 2015 to September 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual contacts named above, Joseph Dean Thompson (Assistant Director), Colleen Candrl, Lilia Chaidez, Ellen Fried, Cindy Gilbert, Tim Guinane, Anne Hobson, Jeanette Soares, Sara Sullivan, Kiki Theodoropoulos, and Michelle R. Wong made key contributions to this report.

Over the last decade, extreme weather and fire events have cost the federal government over $350 billion, according to the Office of Management and Budget. These costs will likely rise as the climate changes, according to the U.S. Global Change Research Program. In February 2013, GAO included Limiting the Federal Government’s Fiscal Exposure by Better Managing Climate Change Risks on its High-Risk List. GAO was asked to review the potential economic effects of climate change and risks to the federal government. This report examines (1) methods used to estimate the potential economic effects of climate change in the United States, (2) what is known about these effects, and (3) the extent to which information about these effects could inform efforts to manage climate risks across the federal government.
GAO reviewed the 2 available national-scale studies and 28 other studies; interviewed 26 experts knowledgeable about the strengths and limitations of the studies; compared federal efforts to manage climate risks with leading practices for risk management and economic analysis; and obtained expert views. Methods used to estimate the potential economic effects of climate change in the United States—using linked climate science and economics models—are based on developing research. The methods and the studies that use them produce imprecise results because of modeling and other limitations but can convey insight into potential climate damages across sectors in the United States. The two available national-scale studies that examine the economic effects of climate change across U.S. sectors suggested that potential economic effects could be significant and unevenly distributed across sectors and regions. For example, for 2020 through 2039, one study estimated between $4 billion and $6 billion in annual coastal property damages from sea level rise and more frequent and intense storms. Also, under this study, the Southeast likely faces greater effects than other regions because of coastal property damages (see figure). Information about the potential economic effects of climate change could inform decision makers about significant potential damages in different U.S. sectors or regions. According to several experts and prior GAO work, this information could help federal decision makers identify significant climate risks as an initial step toward managing such risks. This is consistent with, for example, National Academies leading practices, which call for climate change risk management efforts that focus on where immediate attention is needed.
The federal government has not undertaken strategic government-wide planning to manage climate risks by using information on the potential economic effects of climate change to identify significant risks and craft appropriate federal responses. By using such information, the federal government could take an initial step in establishing government-wide priorities to manage such risks. GAO recommends that the appropriate entities within the Executive Office of the President (EOP), including the Office of Science and Technology Policy, use information on potential economic effects to help identify significant climate risks and craft appropriate federal responses. EOP entities and the Environmental Protection Agency did not provide official comments on the report.
To function properly, the U.S. securities industry and capital markets require timely and accurate flows of electronic information. This information is transmitted through and processed within a vast network of computerized systems managed by stock, options, and futures exchanges; broker-dealers; banks; mutual funds; and various other organizations. These systems handle such tasks as displaying price quotations, routing orders to buy or sell, executing trades, and transferring securities and payments (clearance and settlement). In addition, SEC has internal systems that help it perform its regulatory responsibilities. All of these systems are potentially vulnerable to errors or malfunction as a result of the impending date changeover. The Year 2000 problem is rooted in the way dates are recorded and computed in many computer systems. For the past several decades, systems have typically used two digits to represent the year, such as “97” to represent 1997, in order to conserve on electronic data storage and reduce operating costs. With this two-digit format, however, the year 2000 is indistinguishable from 1900, 2001 from 1901, and so on. As a result of this ambiguity, system or application programs that use dates to perform calculations, comparisons, or sorting may generate incorrect results when working with years after 1999. For example, a broker-dealer with a system that is not compliant may be unable to receive payment information in January 2000 for securities that it sold in December 1999 if its computer systems fail to accept incoming data with a Year 2000 date. In a speech to international bankers, the president of the New York Federal Reserve Bank indicated that the Year 2000 software date change poses a major risk for world financial markets and that the world economy could be damaged if efforts to address the Year 2000 problem are not carried out correctly. 
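The two-digit ambiguity described above can be sketched in a few lines. This is an illustrative example only, not code from any SEC or market system; the function names and the settlement scenario are hypothetical.

```python
# Hypothetical sketch of the Year 2000 two-digit date problem;
# invented for illustration, not drawn from any actual market system.

def settled_on_time(trade_yy: int, settle_yy: int) -> bool:
    """Legacy-style check using only the last two digits of the year."""
    # Year "00" (meaning 2000) compares as less than "99" (meaning 1999).
    return settle_yy >= trade_yy

def settled_on_time_fixed(trade_year: int, settle_year: int) -> bool:
    """The same check with full four-digit years removes the ambiguity."""
    return settle_year >= trade_year

# A sale executed in December 1999 that settles in January 2000:
print(settled_on_time(99, 0))            # the legacy check wrongly rejects it
print(settled_on_time_fixed(1999, 2000)) # the four-digit check accepts it
```

The comparison illustrates the broker-dealer example in the text: a system that stores only "99" and "00" concludes that the settlement precedes the trade by 99 years, so incoming payment data dated in 2000 can be rejected.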
SEC is the primary federal agency responsible for overseeing the securities markets in the United States. It promulgates regulations, reviews market operations, conducts inspections of market participants, and takes enforcement actions in response to violations of the securities laws and accompanying regulations. The securities laws allow SEC to delegate some of its responsibilities to the entities that operate the various stock and options markets as SROs. SROs develop and enforce rules for their members. They include the New York Stock Exchange, the National Association of Securities Dealers, and other regional securities exchanges that maintain the physical securities or their electronic equivalent. The SROs directly oversee their member broker-dealers, which buy and sell securities on behalf of customers. SEC oversees the SROs as well as investment companies that sell mutual funds, investment advisers who dispense investment advice or manage customer funds, and transfer agents who maintain records on behalf of companies that issue securities. Consequently, SEC and the SROs have primary responsibility for ensuring that Year 2000 problems in the securities industry do not adversely affect individual investors or the securities markets. Various organizations provide guidance for assessing, planning, and managing Year 2000 readiness programs. For example, we and other organizations, such as information technology consulting firms, have issued guidance for agencies and firms seeking assistance in formulating their Year 2000 remediation efforts. Our guidance on addressing the Year 2000 problem, contained in Year 2000 Computing Crisis: An Assessment Guide (GAO/AIMD-10.1.14, Sept. 1997), incorporates guidance and practices identified by leading organizations in the information technology industry. Our assessment guide recommends that organizations proceed through a five-phased approach to resolving their Year 2000 computing issues. 
These phases are awareness, assessment, renovation, validation, and implementation. SEC appears to be using a similar approach but has organized its program into six phases by dividing the validation phase into internal testing and integrated testing. In May 1997, OMB issued a format for federal agencies to report on the progress of their Year 2000 efforts. Specifically, OMB has asked that each agency report its total number of mission-critical systems; the number that are currently Year 2000 compliant; and the progress made in replacing, repairing, or retiring those systems that are not yet compliant. Although the guidance applies to a large number of federal agencies, SEC was not one of the agencies required to report. The Securities Industry Association, an organization that represents a large segment of the securities industry, is playing an important role in coordinating the industry’s Year 2000 efforts. This association has established a steering committee—made up of representatives from various SROs, broker-dealers, investment companies, third party software vendors, and others—to develop a strategy for industry remediation and coordinated testing schedules. To evaluate how SEC’s report discussed the agency’s efforts to address the Year 2000 problem for its internal systems and identify any ways that future reports could be improved, we interviewed officials in SEC’s Office of Information Technology. We also reviewed internal reports, plans, and timetables concerning the agency’s efforts to repair its own systems. To evaluate how SEC’s report discussed the efforts of market participants to address the Year 2000 problem, we interviewed SEC officials in the various divisions and offices within the agency responsible for overseeing SROs, broker-dealers, investment companies, investment advisers, and other market participants. 
We also reviewed documents SEC had collected from market participants to assess what type of information the agency had analyzed and thus could summarize in future reports. In addition, we assessed the extent to which SEC’s report contained information that related to the various criteria set out in our own guidance for addressing Year 2000 issues and in the OMB guidance for selected federal agencies reporting on their Year 2000 efforts. We requested comments on a draft of this report from the Chairman, SEC. SEC provided written comments, which are discussed at the end of this report and reprinted in appendix II. SEC also suggested technical changes, which we incorporated where appropriate. We conducted our review from August 1997 through January 1998 in accordance with generally accepted government auditing standards. SEC’s June 1997 report provided an overview of its own and industry participants’ efforts to prepare for the year 2000. To assemble the report, SEC formed a task force that included representatives of each of its major operating divisions. These divisional representatives contacted various market participants under the representatives’ jurisdiction by letter or telephone, requested and reviewed documents provided by these participants, and discussed Year 2000 issues as part of on-site examinations of some participants. They compiled the report from the information provided and structured it to address the specific questions you raised in your December 6, 1996, letter that requested annual SEC progress reports. SEC’s report provided a high-level description of the status of Year 2000 remediation efforts for SEC internal systems, including detailed information on the status of SEC mission-critical systems. For mission-critical systems, the report discussed the total number of systems, how many are currently Year 2000 compliant, and how many are not compliant and will be either replaced or renovated. 
The report also provided SEC’s schedule for completing some of the phases of the remediation process for mission-critical systems. SEC did not report the status of its critical internal systems in relation to its six-phased approach for achieving Year 2000 readiness. Indicating the status of its critical systems in relation to the six phases would provide a more structured means to assess the progress SEC has made in addressing the Year 2000 problem for its internal systems. The report also described SEC’s efforts to promote awareness of the Year 2000 problem throughout the securities industry. It included a listing of the major organizations that SEC contacted within the securities industry and a description of how it coordinated its efforts with these organizations to ensure that systems throughout the securities industry are being readied for the year 2000. The organizations contacted included associations that represented SROs, broker-dealers, transfer agents, investment companies, and investment advisers. The report also provided a discussion of issues relating to public company financial statements, including auditing, auditor independence, and other accounting considerations. Finally, the report discussed SEC’s guidance to public companies regarding the extent to which these companies should include information in their public disclosure filings if the costs or consequences of the Year 2000 problem would have a material effect on reported financial information. Although it provided an overview of the status of its own and securities industry participants’ efforts to address the Year 2000 problem, the report did not identify those systems that might be critical to the continued functioning of the U.S. securities markets. Furthermore, it did not provide sufficient information about the timing and status of efforts by SROs, broker-dealers, investment companies, and other market participants to address their systems. 
In addition, it did not discuss what efforts will be made to address systems or organizations that have fallen behind schedule or what contingency planning is occurring to address systems that will not be ready in time. Such information is being required by OMB from other federal agencies and provides a more complete picture of Year 2000 readiness. According to our assessment guide, identifying and assessing mission-critical systems are important because an enterprisewide inventory of information systems and their components provides the necessary foundation for Year 2000 program planning. Identifying and addressing Year 2000 problems in critical systems are essential to ensuring that securities market operations continue without disruption and could also help market participants focus on their most critical systems as part of their overall efforts. Since May 1997, OMB has required selected federal agencies to report on the total number of mission-critical systems each has; the number of such systems that are currently Year 2000 compliant; and whether remaining systems are being replaced, repaired, or retired. SEC’s report identified the number of internal systems SEC considered critical to its operations, but did not provide similar information on market participants’ systems considered critical to the continued functioning of the U.S. securities markets. SEC officials said that they had determined whether SROs had conducted detailed inventories and identified critical systems because of the importance of these entities to the securities markets. The officials said that they generally had not collected similar information from market participants such as broker-dealers or investment companies because they had concentrated on ensuring that these participants were aware of and beginning to focus on Year 2000 problems. In addition, the officials said they also had begun identifying the steps these participants had taken to address the problems. 
However, they did not report the extent to which market participants’ systems had progressed through SEC’s six-phased process. An SEC official also told us that SEC did not include more detailed information on market participants’ systems in its report because the participants considered the information to be sensitive and SEC had promised to maintain its confidentiality. However, it may be possible to report more detailed information without compromising the confidentiality of data from specific market participants. One way to do so would be to report summary data by type of securities market participant, with separate breakouts grouping the numbers of systems managed by industry segments, such as SROs, broker-dealers, investment companies, investment advisers, or transfer agents. This would provide more detail without identifying specific data or market participants. To indicate the status of systems most likely to have a significant impact on the continued functioning of the U.S. securities markets, SEC could group the summary data by some measure of their size or importance to the market, such as the percentage of total market trading volume or market capitalization that each grouping represented. Appendix I shows examples of ways to report this information for the securities industry based on OMB’s suggested reporting format. SEC’s June 1997 report also did not indicate time frames that market participants are following for completing the various phases necessary to address the Year 2000 problem. For example, our assessment guide indicates that organizations should have been finished with the first two phases of the process—awareness and assessment—by around mid-1997 and should already have initiated activities to renovate systems with date-related deficiencies. 
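The summary reporting described above amounts to a simple aggregation by industry segment, so that no individual participant is identifiable. The segments, firm names, and compliance flags below are invented for this sketch; it illustrates the grouping idea, not SEC's actual reporting format.

```python
# Hypothetical illustration of reporting Year 2000 status by industry
# segment without identifying specific market participants.
from collections import defaultdict

# Invented example data: (industry segment, firm, system is compliant?).
# Firm names never appear in the reported summary.
systems = [
    ("SRO", "Exchange A", True),
    ("SRO", "Exchange B", False),
    ("Broker-dealer", "Firm C", False),
    ("Broker-dealer", "Firm D", True),
    ("Broker-dealer", "Firm E", True),
    ("Investment company", "Fund F", False),
]

summary = defaultdict(lambda: {"total": 0, "compliant": 0})
for segment, _firm, compliant in systems:
    summary[segment]["total"] += 1
    summary[segment]["compliant"] += int(compliant)

for segment in sorted(summary):
    s = summary[segment]
    print(f"{segment}: {s['compliant']} of {s['total']} systems compliant")
```

A real summary could add a weighting column, such as each grouping's share of total market trading volume, to convey the importance of the systems involved, as the text suggests.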
According to SEC officials, they have generally asked market participants to describe the expected time frames associated with each organization’s Year 2000 readiness program, and SEC intends to track these organizations’ progress against these time frames as part of its oversight. For example, SEC intends to track most organizations against the time frames established by the Securities Industry Association, which it considered to be more aggressive than the time frames established by other organizations, such as OMB. However, this information was not included in the June 1997 report. Such information would provide an essential measure of progress for critical systems. SEC’s report also did not provide information concerning the steps to be taken to address systems or organizations that have fallen behind schedule in addressing the Year 2000 problem. OMB requires selected federal agencies to include exception reports in their annual and quarterly reports for mission-critical systems that are being replaced or repaired and are at least 2 months behind schedule. OMB expects these exception reports to include an explanation of why the systems are behind schedule, a description of what is being done to accelerate the effort, a new schedule for replacement or completion of the remaining phases, and a description of the funding and other resources necessary to achieve compliance. The reporting of such information allows OMB to make an assessment of whether the steps being taken to correct such systems are adequate for getting them back on schedule. SEC’s report also did not contain sufficient information to assess the level of contingency planning that it and market participants are conducting as part of preparing for the year 2000. SEC officials said that securities market participants were generally not far enough along in the overall Year 2000 process to be involved in detailed contingency planning yet, but recognized its importance. 
Because the year 2000 is less than 2 years away, contingency planning for systems that will not be ready is an important part of any organization’s preparations. As noted in our assessment guide, correcting the Year 2000 problem is difficult because systems frequently consist of multiple programs, operating systems, computer languages, and hardware platforms. Resolving date coding problems for computer systems is a labor-intensive and time-consuming process, and some systems, portions of systems, or instances of date dependencies may be overlooked during the remediation process. Therefore, having sound contingency plans, which involves identifying or designing alternative means for processing information, will be important for ensuring the continued functioning of the securities markets. Developing and reporting on such plans soon might help reveal certain alternatives or contingencies to be unworkable, too expensive, or otherwise impractical. Monitoring an organization’s efforts to ensure that its computer systems are ready will become even more critical as the year 2000 draws nearer. In this regard, annual reports from SEC may not provide sufficiently timely information. Recognizing the time-critical nature of the Year 2000 problem, OMB’s reporting guidance for selected federal agencies requests that these organizations provide quarterly reports on the status of their Year 2000 efforts. Other organizations are requiring even more frequent reporting. For example, the Treasury Department is requiring its bureaus to report their status monthly. More frequent reporting by SEC would help to identify any problems sooner and thus provide Congress and SEC additional time to take action should the need arise. 
Because SEC was primarily concerned with promoting and assessing awareness of the Year 2000 problem, its June 1997 report focused on the early stages of the industry’s preparations for the year 2000 and did not provide specific information on the status of particular systems. However, as the year 2000 approaches, information similar to that required by OMB, but reported more frequently, would provide a better indication of the progress being made to ensure the readiness of systems critical to the continued functioning of the U.S. securities markets. We recommend that the Chairman, SEC, include in SEC’s Year 2000 status reports to Congress information similar to that required of other federal agencies by OMB. Specifically, SEC reports should include information on the systems critical to the continued functioning of the U.S. securities markets; the progress made in moving critical systems through the various phases of achieving Year 2000 compliance; the time frames required to complete each phase of the process; the efforts necessary to address systems that are behind schedule; and the contingency plans for systems that may not be ready in time. SEC should also report such information more frequently, such as quarterly update briefings, to keep Congress informed as the year 2000 approaches. SEC provided us with written comments on a draft of this report. (See app. II.) SEC generally agreed with our recommendation that it report more specific, detailed information to Congress on the industry’s Year 2000 progress. SEC also agreed with our suggestions to focus particularly on the industry’s overall progress in moving its operations through the various phases of achieving Year 2000 compliance and on providing contingency planning information for the 1998 report. SEC also agreed that an annual report to Congress may not provide sufficiently timely information. 
It said that it is currently providing briefings to certain congressional staff and would be willing to include the staff of any member of Congress in such briefings. If made available to all interested Members and staff and conducted as frequently as needed, such briefings could meet the intent of our recommendation. SEC stated, however, that OMB reporting requirements are not a workable model for reporting on the systems of entities that SEC regulates. Specifically, SEC stated that it is not feasible to provide all the information required by OMB for the mission critical and non-mission-critical systems of every regulated entity in the securities industry because of the size of the industry, limited SEC resources, and the SEC’s sharing of oversight authority. We used the OMB reporting requirements as an example of how SEC could improve its reporting on the progress being made to ensure the readiness of systems critical to ongoing market operations. We did not intend that SEC report detailed information for the mission-critical and non-mission-critical systems of each regulated entity, although each entity should be tracking the progress of its own systems. We believe that, for Congress to have the information necessary to assess industry readiness, SEC needs to identify and provide detailed information on those systems that are critical to the functioning of the industry as a whole. Such systems likely include those related to trading, clearing, and other functions important to market operations, as well as those used by major market participants. We revised the text and recommendation to clarify our intent and discuss alternative ways to consolidate information about these critical systems in appendix I. For example, rather than reporting the status of systems for every member of an exchange or every broker-dealer, SEC could, at a minimum, report on the combined status of the systems for the major exchanges and largest broker-dealers. 
As arranged with your office, unless you publicly announce this report's contents earlier, we plan no further distribution of it until 15 days after the date of the letter. We will then send copies to other interested members of Congress, SEC, the New York Stock Exchange, the National Association of Securities Dealers, and other relevant organizations. Copies will be made available to others on request. Please contact me on (202) 512-8678 if you or your staff have any questions. Major contributors to this report are listed in appendix III.

The following table represents a possible format for reporting information on the readiness of securities market participants' electronic systems. Other equally acceptable reporting formats or means of presenting this information likely exist. The format presented here seeks to capture several key aspects of the information, including some measure of importance for the entities (such as percentage of market trading volume); the extent to which systems are already compliant; and, for those that are not, how far along in the six phases of the Year 2000 readiness process they are. The names of individual organizations would not have to be identified; instead, information could be combined and presented for groups of organizations, as shown. Further, the percentage of systems that have completed each Year 2000 phase may not accurately reflect the amount of work remaining to be done if the larger systems with more lines of code remain unfinished. In such cases, market participants could disclose more information to better describe the actual work remaining. [Table columns: six "Milestone: (date)" headings, one for each Year 2000 phase.]

Major contributors to this report: John Stephenson, Assistant Director; Gary Mountjoy, Assistant Director; Karen Bell, Senior Auditor.
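The caveat above about percentage-complete measures can be illustrated with a short calculation. This is a hypothetical sketch: the system names and line counts below are invented. Counting systems treats a small utility and a large trading platform equally, while weighting by lines of code shows how much remediation work actually remains.

```python
# Hypothetical illustration of the caveat above: the percent of *systems*
# that are compliant can overstate progress when the largest systems lag.
# All system names and line counts are invented.
systems = [
    {"name": "small utility A", "lines_of_code": 10_000, "compliant": True},
    {"name": "small utility B", "lines_of_code": 15_000, "compliant": True},
    {"name": "reporting system", "lines_of_code": 25_000, "compliant": True},
    {"name": "trading platform", "lines_of_code": 950_000, "compliant": False},
]

pct_systems = 100 * sum(s["compliant"] for s in systems) / len(systems)
total_loc = sum(s["lines_of_code"] for s in systems)
compliant_loc = sum(s["lines_of_code"] for s in systems if s["compliant"])
pct_code = 100 * compliant_loc / total_loc

print(f"{pct_systems:.0f}% of systems are compliant")  # 75%
print(f"{pct_code:.0f}% of code is compliant")         # 5%
```

Three of four systems are done, yet only a twentieth of the code base is, which is why the table format invites participants to disclose more than a simple systems count.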
Pursuant to a congressional request, GAO reviewed the Securities and Exchange Commission's (SEC) report on the status of its efforts to ensure that its computer systems, as well as those used by participants in the securities industry, are ready for the date changeover in the year 2000, focusing on: (1) SEC's June 1997 report on the status of year 2000 compliance by SEC, the securities industry, and public companies to identify any ways that future reports might be improved; (2) the adequacy of SEC's oversight of the year 2000 remediation efforts directed at its internal systems, self-regulatory organizations (SRO), broker-dealers, and other regulated entities; and (3) the guidance SEC has provided to public companies for disclosing year 2000 remediation efforts.
GAO noted that: (1) SEC's first report in June 1997 provided an overview of the efforts that SEC and various industry participants had made to address year 2000 issues, but did not contain the specific, detailed information that Congress will need to assess progress as the year 2000 approaches; (2) according to an agency official, SEC had collected more detailed information from some market participants, such as SROs; (3) the official said that SEC did not include this information in the report because SEC had been focused on assessing the extent to which market participants were aware of the year 2000 problem and had begun taking steps to address it; (4) the Office of Management and Budget's (OMB) reporting format offers guidance on the type of detailed information SEC might provide Congress in future reports; (5) such information includes: (a) the systems considered critical to the continued functioning of the U.S. securities markets; (b) the progress made in moving these systems through the various phases of achieving year 2000 compliance; (c) the timeframes required to complete each phase; (d) the efforts necessary to address systems that are behind schedule; and (e) the contingency plans for systems that may not be ready in time; and (6) also, as the year 2000 approaches and less time to make adjustments is available, SEC's yearly progress updates may be too infrequent for congressional needs.
For decades, the Bureau has been considering how it could use administrative records to help reduce the decennial's costs. We, the Bureau, and others have observed that some of the information needed for the census has already been collected by other government agencies in the course of administering their programs. Thus, accessing that information and using it to help administer the census and, in some cases, to provide the data for completing census forms has the potential to reduce the cost of the decennial census, such as its expensive acquisition of temporary workspace and equipment to support fieldwork. Moreover, some of the information collected through administrative records could be more accurate than information collected through traditional methods, such as when respondents provide the Bureau with incomplete information (or no information at all), and when the Bureau's enumerators need to interview neighbors or other "proxy" respondents to collect needed information. Depending on the source of the information, administrative records can help the Bureau with such things as determining whether a housing unit is occupied or vacant, improving the accuracy of the Bureau's address list, and providing demographic information on household members. As far back as 1970, the Bureau has made limited use of administrative records to help enumerate group quarters, such as college dormitories and prisons. More recently, the Bureau conducted limited experiments during the 2000 Census and found there was potential to use administrative records to assist with follow-up and other operations, but that further research would be needed. Additionally, since 2000, the Bureau has used addresses provided by the U.S. Postal Service (USPS) Delivery Sequence File (DSF) as a starting point to update its Master Address File, a data file that contains a list of all known living quarters in the United States and Puerto Rico.
For the 2010 Census, the Bureau used administrative records to help enumerate some group quarters and select cases for an operation that followed up on potentially inaccurate census responses. In 2009, the Bureau's earliest planning for the 2020 Census considered a range of scenarios for using administrative records, from the most expensive option—a traditional census with extensive field follow-up with nonrespondents, and without increased use of administrative records—to the least expensive option—a census conducted entirely by administrative records. Considering concerns about cost and quality, the Bureau ruled out the extreme scenarios and began exploring a "hybrid" scenario that included a number of possible uses for administrative records. In planning for the 2020 Census, Bureau research and testing teams have been determining the feasibility and the cost and quality implications of various uses of administrative records, as well as of additional information, such as telephone numbers and addresses, obtained from commercial vendors. Earlier this year in Maricopa County, Arizona, the Bureau conducted its 2015 Census Test to see how well it can use administrative records to reduce fieldwork and increase productivity for nonresponse follow-up (NRFU). The test also included a new field management structure and an enhanced Operations Control System supporting daily reassignments of cases.
As part of this effort, the Bureau tested how well administrative records substituted for additional visits to collect information from nonresponding households and from proxies, such as neighbors. It compared the cost and productivity of traditional follow-up methods to those relying on an enhanced operational control system, demonstrating the potential benefits of automating the assignment of work; of scheduling the time of day for enumerators to conduct follow-up, based on administrative records and information from other surveys, to determine when residents were most likely to be home; and of efficiently sequencing and routing enumerators' daily visits. The test also provided ground experience with prototypes of systems, including leased smartphone devices on which to collect data, which are not necessarily reflective of the systems the Bureau may acquire or develop for 2020. A key benefit of a test like the 2015 Census Test is being able to identify potential problems with design alternatives. Untested systems and unpracticed procedures inevitably encounter implementation issues, and these issues provide much of the basis for the lessons typically drawn from such tests. The test results are one source of input to the Bureau's preliminary design decisions for the 2020 Census. The Bureau included a description of its preliminary decisions in the 2020 Census Operational Plan it released on October 6, 2015. These decisions included using administrative records to identify vacant addresses in advance of follow-up fieldwork and to enumerate nonresponding households when possible to reduce the need for repeated contact attempts during NRFU. The plan also described an updated lifecycle cost estimate for the 2020 Census, which we plan to review. The Bureau's estimate of the total cost of the 2020 Census with the innovations it describes in its operational plan is $12.3 billion.
The Bureau has more tests planned, including a 2016 Census Test in selected areas within Harris County, Texas, and Los Angeles County, California; a large test of address canvassing, also in 2016; an additional site test in 2017 at an as yet undetermined location; and a 2018 end-to-end test—the equivalent of prior decennial cycles' "dress rehearsal." In key planning documents, the Bureau describes a goal of using administrative records to reduce the fieldwork involved in its NRFU operation. To that end, the Bureau plans to use data from internal and external sources, such as the 2010 Census, the United States Postal Service (USPS), and the Internal Revenue Service (IRS), in a number of ways, such as by identifying vacant housing units or enumerating households in cases of nonresponse. The Bureau has reported that the following three uses are key to potentially saving up to $1.4 billion compared with traditional census methods. The Bureau tested each of these uses during its 2015 Census Test and has decided to use them. Identify vacant housing units. The Bureau incurs a large part of the census' cost while following up at residences that did not return a census questionnaire. To ensure a complete count, Bureau guidance in 2010 had enumerators visit some places up to six times to try to obtain a response. During the 2010 Census, enumerators visited 48 million housing units for follow-up at least once. This number included 14 million vacant housing units. One of the largest potential efficiency gains to the census may come from using administrative records to remove these vacant units from the follow-up workload. Preliminary findings from the Bureau's 2015 site test found that administrative records identified 11.6 percent of the NRFU workload as vacant. Identify and enumerate nonresponding housing units that are occupied.
Another way the Bureau can reduce the NRFU workload is to use administrative records to count households that did not return census questionnaires. As part of its 2015 Census Test, the Bureau successfully enumerated households using various administrative records. The Bureau tested three approaches to counting nonresponding households during the 2015 site test. In the approach that most extensively used administrative records, the Bureau did not attempt any NRFU visits and enumerated all occupied households that had administrative records meeting a certain quality threshold. The enumerators who used this approach had an initial workload of approximately 29,000 households, and administrative records were used to enumerate more than 5,800 of these households—which reduced the workload for this approach by about 20 percent. Predict best times to complete NRFU. One of the challenges the Bureau faces in NRFU is reaching a household at a time when someone is home. Catching respondents at home on an enumerator’s first visit reduces the need for more follow-up fieldwork. In the 2015 Census Test, the Bureau used administrative records in addition to information about how households had responded to other Bureau surveys to help determine the contact strategy for deciding if and when to interview a housing unit. For example, the Bureau used demographic information, such as age, from administrative records sources to determine the time of day to contact households. The Bureau has identified nine additional uses of administrative records that may help control cost or improve the quality of decennial census data or operations (see figure 1). The Bureau has not estimated cost savings for these nine uses, but has begun researching the feasibility of most of them. As shown in the figure, these uses would occur during various points relative to data collection. Before data collection. 
The first use listed in the figure—validate and update the address list—is one on which the Bureau is already working. The Bureau is drawing on address lists and map information from state, local, and tribal governments to update its own address list continuously throughout the decade, reducing the need for a more costly door-to-door canvassing during the 2 years prior to the census, as was done for the 2010 Census. According to Bureau officials, the Bureau is about to begin research on how better to use records to identify group quarters, such as dormitories, prisons, nursing homes, or homeless shelters, and to target its outreach (that is, to encourage cooperation with the census). The Bureau uses special procedures to enumerate group quarters, and administrative records could potentially reduce the time and effort spent getting ready for them. During data collection. In addition to reducing the NRFU fieldwork, the Bureau is considering using administrative records to help ensure quality control of fieldwork, such as by providing a near-real-time check on interviews of households more at risk of being missed in the census. This could reduce fieldwork and respondent burden or enable quality control reinterviews of respondents to target other types of quality concerns. The Bureau is also researching how administrative records can be used to help process responses it receives either on paper or over the Internet that do not have a census ID number on them (this activity is called non-ID processing). The Bureau may receive such responses from households that may have lost or never received mailings or other advance communication from the Bureau. The Bureau has done some testing on this use—in 2015 in the Savannah, Georgia, media market area, the Bureau invited test participants to respond over the Internet.
The Bureau demonstrated that a large compilation of administrative records from many sources was effective in helping the Bureau correct or fill in missing address information, which enabled the Bureau to better locate where those responses should be counted. The Bureau has other ongoing research into how other records may help the Bureau validate responses or the identities of those who submit responses as part of this processing. After data collection. If the Bureau still does not have information on a housing unit after collecting data during field operations, it will attempt to impute the data—it has done this since 1970. According to Bureau officials, the Bureau plans to use administrative records to help improve imputation of three related types of data the Bureau fills in for these housing units. These data fields are (1) whether or not a unit is occupied, (2) how many people live in the unit, and (3) the demographic characteristics of the residents. Finally, the Bureau is considering how administrative records might help the Bureau evaluate the accuracy of the census. The Bureau's research and testing on administrative records to help with the address list, the two uses within non-ID processing, and the three related imputation methods are well underway. The Bureau reports having achieved some early success already by demonstrating records' use in updating the address list and in locating respondents' addresses as part of non-ID processing. Bureau officials said that they had not started research and testing on the three remaining potential uses because these uses are less likely to generate significant cost savings or they fall much later in the decennial cycle, and the Bureau considered them a lower priority for the limited funding available for research thus far in the decade. According to Bureau officials, the Bureau will begin research on the remaining uses during fiscal year 2016.
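To make the three imputation fields concrete, here is a minimal sketch of the fallback logic described above: use the household's own response when one exists, fall back to an administrative record when one is available, and otherwise flag every field for model-based imputation. The field names, record structure, and flagging scheme are invented for illustration; the Bureau's actual imputation models are statistical and far more sophisticated.

```python
# Hypothetical sketch: filling in a housing unit's missing census data
# from an administrative record, falling back to imputation flags when
# no record is available. Field names are invented for illustration.
def impute_unit(census_response, admin_record):
    """Return (occupied, count, demographics, imputed_fields)."""
    if census_response is not None:
        # A field response exists; nothing needs to be imputed.
        return (census_response["occupied"], census_response["count"],
                census_response.get("demographics"), [])
    if admin_record is not None:
        # No response, but a linked administrative record can stand in.
        return (admin_record["occupied"], admin_record["count"],
                admin_record.get("demographics"), ["from_admin_record"])
    # No information at all: flag all three fields for model-based imputation.
    return None, None, None, ["occupied", "count", "demographics"]

# Usage: a nonresponding unit with an IRS-style record attached.
record = {"occupied": True, "count": 3, "demographics": {"ages": [34, 33, 2]}}
occupied, count, demo, flags = impute_unit(None, record)
print(occupied, count, flags)  # True 3 ['from_admin_record']
```

The point of the sketch is the ordering of the three data sources, which mirrors the report's description: field data first, administrative records second, imputation last.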
According to the Bureau, nearly every opportunity to use administrative records for the 2020 Census would involve more than one source of records, and most opportunities would involve many sources. The Bureau has identified and obtained access to nearly all of the sources it believes it needs to leverage all of the opportunities it has identified, including the three uses (identify vacant housing units; identify and enumerate nonresponding housing units that are occupied; and predict best times to complete NRFU) the Bureau believes will generate a large portion of its estimated $1.4 billion savings from the cost of traditional methods. These sources are summarized in figure 2. The Bureau reported it has tested all of these sources already. As of July 2015, the Bureau has memorandums of understanding in place with seven federal agencies governing the use of data from 15 different programs and activities. The Bureau is leveraging data from state governments involved in three federal grant programs for low-income individuals that tend to serve sociodemographic groups (i.e., children and infants) that have historically been undercounted in the census: the Temporary Assistance for Needy Families (TANF) program, the Supplemental Nutrition Assistance Program (SNAP), and the smaller Special Supplemental Nutrition Program for Women, Infants, and Children (WIC). Bureau officials believe that data maintained by states that administer the programs funded by these grants can reliably identify beneficiaries and their addresses, and can potentially add support to each of the potential uses the Bureau is considering. According to the Bureau, while it has agreements in place with nine states thus far to obtain their TANF, SNAP, or WIC program data, working individually with states can be time consuming. Several states have declined the Bureau's requests to share data, citing various information technology limitations and resource constraints.
The Bureau has invited all states to share data with it and officials said they are prepared to proceed with those that choose to participate. As part of its program to validate and update its address list throughout the decade (rather than only during the 2 years prior to the census, as was done for the 2010 Census), the Bureau is seeking participation from state, local, and tribal governments. To participate, the governments must reliably maintain address lists, such as for the purposes of emergency response or property assessment, and be willing to share information with the Bureau. Thus far, to validate and update its address list, the Bureau has drawn on address lists and map information from more than 1,000 state, local, and tribal governments. Bureau officials have told us that they expect to receive reliable data covering about two-thirds of the more than 3,200 counties in the country through this program. The Bureau is working to gain access to additional sources of records to better ensure the quality of the data it already has access to and to improve its ability to find “hard to count” groups. The additional sources include the following: National Directory of New Hires (NDNH): NDNH is a national database of wage and employment information used for child support enforcement. Bureau officials believe that name and wage information from NDNH could help corroborate the tax data from IRS that the Bureau already has access to, improving the collective accuracy of the records. The President’s 2016 budget submission included a request for legislation that would authorize the Department of Health and Human Services to share NDNH data with the Bureau for statistical purposes such as the decennial census. KidLink: KidLink is a database from the Social Security Administration (SSA) that links parent and child Social Security numbers for children born after 1998 in U.S. hospitals. 
It is valuable to the Bureau because children, and babies less than 1 year old in particular, have been historically undercounted. Bureau officials have said that access to this database could help identify another 1 million people. According to the Bureau, SSA raised issues about Bureau access to these data. The Census Bureau Director says that he will work with departmental staff and the Office of Management and Budget to explore an administrative solution that may provide the Bureau with access. Obtaining access is challenging because federal, state, local, and tribal agencies have different authorities and policies governing what, whether, and how they share their administrative data. For example, the Bureau has access authority to IRS tax data. Yet for other data, such as NDNH, the Bureau is not authorized by statute to have access. Bureau officials stated that they are examining ways to quantify the potential effect that their access to these additional sources could have on the 2020 Census. The Bureau estimated that the value of acquiring the NDNH and using it to corroborate data from IRS, in conjunction with other administrative and third-party data sources, would be approximately $157.5 million (using 2010 figures and dollars). This estimate assumes no nonresponse follow-up visits for cases with administrative and/or third-party data, so the actual savings would likely be less, since the Bureau recently decided to make at least one follow-up visit before enumerating a household with administrative records. Bureau officials state that there is value in accessing these records for the Bureau's other statistical surveys as well, and that even if they are unable to obtain the additional records in time for the 2020 Census, they would continue pursuing them for these other purposes, as well as for use in future censuses.
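The caveat about the $157.5 million estimate being an upper bound can be illustrated with simple arithmetic. The case count, visits avoided, and per-visit cost below are all invented (chosen only so that the no-visit scenario lands on the reported $157.5 million); the point is just that requiring at least one field visit per record-enumerated case shrinks the number of avoided visits and thus the savings.

```python
# Hypothetical arithmetic: why mandating one follow-up visit per
# record-enumerated case lowers the estimated savings. All inputs invented.
cases_with_records = 5_000_000    # assumed record-eligible NRFU cases
visits_avoided_if_none = 3.0      # assumed visits saved when no visit is made
cost_per_visit = 10.50            # assumed fully loaded cost of one visit

savings_no_visits = cases_with_records * visits_avoided_if_none * cost_per_visit
# Bureau policy: at least one visit before enumerating from records,
# so only (visits_avoided_if_none - 1) visits per case are actually avoided.
savings_one_visit = cases_with_records * (visits_avoided_if_none - 1) * cost_per_visit

print(f"no visits:  ${savings_no_visits / 1e6:.1f} million")  # $157.5 million
print(f"one visit:  ${savings_one_visit / 1e6:.1f} million")  # $105.0 million
```

Under these invented inputs, a one-visit policy forgoes a third of the upper-bound savings, which is the direction (though not the magnitude) the report is signaling.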
As of August 2015, the Bureau had not set deadlines for making final decisions on which of its 12 identified uses of administrative records it will implement for the 2020 Census, nor had it set deadlines for determining exactly which records from which sources it will tap in support of each use it implements. Moreover, the Bureau has no deadlines against which to measure progress in obtaining access to its additional sources, nor scheduled milestones for when key steps may need to be taken to integrate them within 2020 preparations. For example, time will be needed to review files to ensure their fitness for use before the Bureau can integrate them into the census design. According to our scheduling guide, assurance of program success can be increased when management relies on credible schedules containing the complete scope of activities necessary to achieve established program objectives. Bureau officials have stated that final decisions on the use of administrative records are needed by the end of fiscal year 2017 to be included in the Bureau's 2018 end-to-end test, but these deadlines do not appear in schedule documents. Deadlines for deciding on the remaining potential uses—either committing to move forward with them or abandoning them as possibilities for 2020—and for deciding how all other records will be used would help to ensure the Bureau is using its resources cost-effectively. Although the Bureau has no control over the accuracy of data provided to it by other agencies, it is responsible for ensuring that the data it uses for the 2020 Census are of sufficient quality for their planned uses. Data quality can involve the accuracy, relevance, and timeliness of the data. Steps taken: The Bureau has taken many steps to ensure the quality of the records it is considering using for 2020.
The Bureau's Center for Administrative Records Research and Application screens all administrative records the Bureau has obtained to ensure their fitness for use by assigning unique person and address identifiers to facilitate record linkage, evaluating biases associated with the linkages, and evaluating the quality and coverage of data. Similarly, the Bureau's Geography Division routinely screens address and map files provided by state, local, and tribal governments to determine if they satisfy preset minimum quality standards for completeness of address information. This helps to improve the master list of addresses. The Bureau's Administrative Records Modeling project team has researched several predictive models for identifying thresholds of sufficient quality for administrative records used in identifying occupied and vacant housing units during NRFU. The Bureau has researched which combinations of records more fully cover the population. Among other findings, this research has helped the Bureau refine which combinations of administrative records work better to determine whether a housing unit is occupied or unoccupied, or to determine the number of people living in it. Relatedly, in August 2015 the Bureau was measuring how well administrative records cover hard-to-count groups such as children and people who were born in other countries. The Bureau planned to finish this work by October 2015. The Bureau reported that 2014 Census Site Test results found that administrative records matched some households better than others. For example, 65 percent of households with one adult and no children matched to administrative records, but only 33 percent of households with three adults and one or more children matched.
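The quality-threshold idea described above can be caricatured in a few lines: enumerate a nonresponding unit from records only when the linked records agree and each linkage is confident enough. Everything here — the 0.9 cutoff, the linkage scores, and the agreement rule — is an assumption invented for illustration; the Bureau's actual predictive models are statistical and operate on far richer inputs.

```python
# Hypothetical sketch of a quality threshold for using administrative
# records in NRFU: enumerate from records only when the linked records
# for a housing unit agree on household count and each linkage score
# meets a confidence cutoff. All values and field names are invented.
THRESHOLD = 0.9  # assumed cutoff; the Bureau models this empirically

def usable_for_enumeration(linked_records, threshold=THRESHOLD):
    """Records are usable if they agree on household count and every
    linkage score (0-1) meets the threshold."""
    if not linked_records:
        return False
    counts = {r["count"] for r in linked_records}
    scores_ok = all(r["link_score"] >= threshold for r in linked_records)
    return len(counts) == 1 and scores_ok

irs = {"source": "IRS", "count": 2, "link_score": 0.97}
usps = {"source": "USPS", "count": 2, "link_score": 0.93}
print(usable_for_enumeration([irs, usps]))  # True: sources agree, scores high
print(usable_for_enumeration(
    [irs, {"source": "SNAP", "count": 4, "link_score": 0.95}]))  # False: disagree
```

A unit that fails the check would stay in the field workload, which is consistent with the report's description of records substituting for visits only when quality thresholds are met.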
Since records need to be timely to be most useful, according to the Bureau, it negotiated to obtain monthly files of IRS tax return data beginning in February 2015 for use in the 2015 Census Test, which it conducted around the Census Day of April 1, 2015. As a result, the Bureau obtained the records several months earlier in the calendar year than it had in the past. Future plans: The Bureau plans comprehensive testing of all records during an end-to-end test of its 2020 Census design (to be conducted in 2018). The Bureau plans additional testing of administrative records in the 2016 Census Test in the Los Angeles and Houston metro areas, in a large test of address canvassing also in 2016, and in an additional site test in 2017 at an as yet undetermined location. The Bureau reported that in fiscal year 2016 it will review the imputation models it used during prior censuses to determine how it can integrate information from administrative records into them; related tests will be included in the 2016 Census Test. We have previously reported that until the Bureau implements a complete and comprehensive security program, it will have limited assurance that its information and systems are being adequately protected against unauthorized access, use, disclosure, modification, disruption, or loss. In January 2013, we made 115 recommendations aimed at addressing weaknesses in that program. The Bureau expressed broad agreement and said it would work to find the best ways to address our recommendations. In July 2015, the Bureau reported that it had experienced an information technology attack that sought to gain access to the Federal Audit Clearinghouse, which contains nonconfidential information on audit reporting packages from state and local governments, nonprofit organizations, and Indian tribes expending federal awards. Federal agencies use the single audit reports to ensure program compliance. The Bureau is making additional clearinghouse information available via the Internet next year.
According to Bureau officials, the breach was limited to this database on a segmented portion of the Bureau's network that does not touch administrative records or sensitive respondent data protected under Title 13, and the hackers did not obtain the personally identifiable information of census and survey respondents. Steps taken: The Bureau cited examples of its long-standing experience in collecting data from other agencies and reporting on it as evidence of its ability to prevent disclosure of information from such sources. Since 1972, the Bureau's Survey of Business Owners has collected data on businesses from administrative records, including data from the Social Security Administration. The Bureau's Longitudinal Employer-Household Dynamics program produces information combining federal, state, and Bureau data on employers and employees. This program collects and secures administrative records information from all 50 U.S. states, including unemployment insurance earnings data. During negotiations for access, the Bureau and the agency providing the data agree to data safeguards. For example, the Bureau's agreement with IRS states that the Bureau will advise its employees of their responsibility for handling federal tax information, and will annually certify that all employees who access federal tax information have been advised of their obligation to protect the information. Further, the Bureau is required to provide annual reports to IRS that include the type of computer system and type of medium on which the data are contained. Once the Bureau obtains access to an administrative data source, it transfers the information that it needs to Bureau servers and maintains the information within the Bureau's firewalls and information security infrastructure. Our open recommendations underscore the importance of the Bureau safeguarding its systems.
Bureau officials state that the Bureau has taken action on all 115 of our recommendations to improve its security program. In assessing the Bureau’s reported actions, we have reviewed documentation pertaining to 75 of the recommendations—58 of which we have confirmed have been addressed, while 17 require additional actions and/or documentation from the Bureau. We are currently analyzing the extent to which the remaining 40 recommendations have been addressed by the Bureau and expect to complete that review by the end of 2015. A third challenge is the extent to which the public will accept the sharing of personal data across government agencies for the purposes of the census. We have previously reported on the need within the federal statistical system for broader public discussion on balancing trade-offs among competing values, such as quality, cost, timeliness, privacy, and confidentiality. Related concerns involve trust in the government and perceptions about burden on respondents, as well as the social benefits of agencies sharing data. We recommended in 2012 that the Bureau develop and implement an effective congressional outreach strategy, particularly on new design elements the Bureau is researching and considering as well as on cost-quality trade-offs of potential design decisions. The Bureau concurred with the recommendation and has taken a number of steps since that are likely to help inform congressional decision making, which we describe below. Steps taken: In 2013, the Bureau contracted for regular polling of nationally representative individuals on the extent to which they prefer data to come from information already provided to federal and state governments or from a survey they fill out. Findings included that respondents were evenly divided when asked whether they prefer the Bureau to obtain someone’s name and age directly from the Social Security Administration rather than asking for this information on a questionnaire.
In 2013, the Bureau began hosting quarterly program management reviews to encourage dialogue with oversight bodies on selected technical aspects of the Bureau’s ongoing research and testing. These reviews are open to the public and viewable online. They supplement the Bureau’s monthly status reports on ongoing research projects that the Bureau provides to the Office of Management and Budget and, later, Congress. Future steps: The Bureau is developing a communications campaign for 2020, which it will formally launch in 2016. The campaign will include information about how the Bureau intends to use administrative records in the 2020 Census. Given the many potential uses of administrative records the Bureau has identified, it will be important for the Bureau’s messaging to consider the range of uses. For example, some people may feel differently about the Bureau using administrative records for enumerating as opposed to targeting the time of day they will be contacted by the Bureau. Moving forward, to help support broader public discussion of trade-offs the Bureau may need to make on the role of administrative records in the 2020 Census, the Bureau should address our prior recommendation to develop and implement an effective congressional outreach strategy, particularly on new design elements the Bureau is researching and considering, as well as on cost-quality trade-offs of potential design decisions. In response to our 2012 recommendation, in November 2014 the Bureau provided us with a congressional engagement plan. The four-page plan brings together in one place a summary of the Bureau’s ongoing activity in this area; yet, by itself, it lacks goals, strategies for attaining them, and accountability for who will work to implement them or when. We will continue monitoring the Bureau’s efforts to address this recommendation, particularly as they may depend on deadlines the Bureau may yet set for making final decisions about administrative records.
The Bureau had several objectives for its 2015 Census Test. For example, the Bureau wanted to begin developing a field operations control system that combined administrative records, technology, and available real-time data to improve the efficiency of field data collection. In addition, it planned to collect data on whether using administrative records could reduce the NRFU workload and increase NRFU productivity. Senior Bureau officials told us that the test was also to provide data to help inform future cost estimates and design decisions. The Bureau designed the 2015 Census Test to compare three approaches to NRFU, one in each of three parts of the test sample: one that followed procedures similar to the 2010 Census and two that used experimental approaches. Each part of the sample had a workload of around 23,000 housing units.
- Control panel. This panel followed NRFU procedures similar to those used in the 2010 Census. As in 2010, the operation was managed from a field office, enumerators compiled their timesheets by hand daily, and supervisory staff had regular face-to-face meetings with their enumerators.
- Hybrid Administrative Records Removal Panel. For this panel, the Bureau used administrative records to identify vacant housing units and remove them from the NRFU workload before enumerators attempted contact. Enumerators then made one attempt to contact each of the remaining housing units; if the attempt was unsuccessful, the Bureau used administrative records to attempt to enumerate the housing unit.
- Full Administrative Records Removal Panel. This panel followed the same approach as the hybrid panel to remove vacant units from the NRFU workload, but the Bureau also attempted to use administrative records to count occupied households before enumerators attempted any contacts.
Bureau officials said they consider the 2015 Census Test a large success because it allowed them to employ a variety of new methods and advanced technologies that are under consideration for the 2020 Census. Our review of key tracking documents showed that the Bureau executed key milestones for the test early or on schedule. Additionally, Bureau officials stated that operational costs tracked closely to planned costs and that actual field workload was within a few percentage points of that planned for each test panel. Another success of the 2015 Census Test is that during our observations the Bureau maintained control over test panels so that they did not appear to influence each other. For example, field test managers with whom we met appeared largely unaware that other test panels existed or how their procedures may have differed. According to the Bureau, the test also demonstrated the usefulness of continuing work on the enhanced operational control system. Some of the test’s biggest achievements were in demonstrating the feasibility of using administrative records:
- A total of 8,370 vacant units were identified for removal from the NRFU workload of 72,072 units (11.6 percent).
- A total of 14,312 cases (19.8 percent of workload) were identified as occupied through the use of administrative records. Depending on the test panel, this information was used instead of knocking on the doors of neighbors or others when respondents were not home and enumeration attempts were exhausted.
- The race/ethnicity of persons who did not provide it in their responses was identified.
These test results are linked to assumptions that need to be met to attain the Bureau’s cost savings estimate, and they assisted the Bureau in making its preliminary design decisions and updating its cost estimates.
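The arithmetic behind the reported vacant-unit removal share can be checked directly from the figures above; the snippet below is a hypothetical recalculation, not Bureau code, and the one-decimal rounding convention is an assumption:

```python
# Recompute the vacant-unit removal share reported for the 2015 Census Test.
# The two figures come from the report; the rounding to one decimal is assumed.
total_nrfu_workload = 72_072  # housing units in the NRFU workload
vacant_removed = 8_370        # units identified as vacant via administrative records

removal_share = vacant_removed / total_nrfu_workload * 100
print(f"Vacant units removed: {removal_share:.1f} percent of workload")
# → Vacant units removed: 11.6 percent of workload
```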
For example, the Bureau’s October 2015 operational plan reports that the Bureau will use an approach during NRFU like that of the hybrid administrative records removal panel during the 2015 test, making only one visit to nonresponding households where the Bureau has determined administrative records are good enough to complete the roster and enumerate the household. As part of its testing related to administrative records and proving the capabilities of prototype systems implementing selected design features in the 2015 Census Test, the Bureau collected data on the extent to which administrative records reduced field data collection and improved productivity, two key cost drivers in prior decennials. During the site test, the Bureau experienced specific implementation issues, and these in turn affected the measurement of the key cost drivers. The enumerators we interviewed were not a generalizable sample of the over 400 hired during the test. Yet our observations were consistent across the 30 enumerators we met and were echoed during the four debriefings the Bureau held with field staff near the conclusion of the site test. We observed enumerators spending extra time dealing with enumeration devices that were not working properly, lacked connectivity, or that they may not have been using properly. Enumerators also told us about experiences similar to our observations. These issues affected measures related to total hours spent on the NRFU operation. We observed several instances of enumerators being assigned to enumerate the same nonresponding or nonexistent households repeatedly, which the design alternative would not have permitted had it been implemented correctly. This affected measures related to reducing contact attempts.
We observed inefficiencies in automated route management; poor cellular service leading to limited or no access to smartphone maps while enumerators tried to locate addresses; and, to a lesser extent, enumerators having difficulties closing out or completing their cases, leading to additional visits. These issues are likely to have decreased various productivity measures. Additionally, we observed situations where multiple enumerators were visiting multiunit structures and gated communities without coordinating or communicating with one another. This affected measures related to reducing contact attempts. Based on our discussions with enumerators about these issues, it appears that their incidence was underreported to the Bureau, which had intended to track such issues. For example, all but 4 of the 30 temporary enumerators and field supervisors we spoke with said they tried their own temporary solutions to problems they faced, such as powering down and restarting their Bureau-issued phones when they froze, rather than notifying supervisors or the Bureau’s call-in number every time. We heard this repeated during the Bureau’s debriefings of enumerators near the end of the test. Additionally, enumerators we spoke with reported using the capability within their phones to record notes about specific cases with which they experienced implementation problems. According to the senior managers responsible for the test, no systematic review of these notes was planned other than when an enumerator had separately flagged a case as involving a dangerous situation, or was being let go for performance reasons and the enumerator’s work was being reviewed. After we discussed the notes with senior Bureau officials, they told us that they would explore the notes for information that would help the Bureau better understand the implementation issues.
They also told us that during enumerators’ debriefings, they learned enumerators had not always known to whom to report which types of issues. Bureau training materials provided a toll-free number for enumerators to call regarding any technical issue if a first call to their supervisor could not resolve it. Bureau officials acknowledged that problems with the implementation of the test likely affected productivity measures to be used in calculating future cost estimates. According to leading practices reported in our cost estimation guide, information about the extent to which implementation issues affect test data should be collected, as it can be useful in future cost estimation. However, the effect on the measures is difficult to determine because the Bureau did not systematically keep data about the extent of the problems. Bureau officials said they are analyzing the results of the test to determine the extent to which identified implementation issues affected the productivity measure, so that they can better control for and understand the separate effects of implementation issues. Moreover, in future tests, systematically collecting better information about the effect of implementation issues may help inform how the Bureau prioritizes the specific design features underlying those issues. For example, if in future tests the Bureau systematically tracks the cases where, say, its automated routing may create problems for enumerators, the Bureau could know the extent to which that implementation issue affected its test measures, such as average miles driven per case, and the resulting effect on cost. With such information, the Bureau could make a more informed decision about prioritizing its efforts and resources on ensuring that routing works in the systems it ultimately develops or acquires, or about whether to eliminate automated routing altogether.
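The kind of case-level tracking described above could work by flagging each case that experienced an implementation issue and computing a productivity measure with and without the flagged cases. The sketch below is purely illustrative; the case data, field names, and the miles-per-case measure are hypothetical stand-ins, not the Bureau's systems:

```python
# Hypothetical NRFU case records, each with miles driven and an optional issue flag.
cases = [
    {"case_id": 1, "miles": 3.0, "issue": None},
    {"case_id": 2, "miles": 9.5, "issue": "routing"},       # automated routing problem
    {"case_id": 3, "miles": 2.5, "issue": None},
    {"case_id": 4, "miles": 8.0, "issue": "connectivity"},  # no map access in the field
]

def avg_miles(records):
    """Average miles driven per case for the given records."""
    return sum(r["miles"] for r in records) / len(records)

overall = avg_miles(cases)
clean = avg_miles([c for c in cases if c["issue"] is None])
print(f"Average miles per case: {overall:.2f} overall, "
      f"{clean:.2f} excluding flagged cases")
```

Comparing the two figures would show how much a specific issue, such as faulty routing, inflates the measure, and therefore how much fixing that feature matters for the cost estimate.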
Such information can also help inform cost and risk analyses the Bureau may undertake based on the possibility that such specific implementation issues may also occur during the 2020 Census. Accordingly, steps the Bureau can take to better capture information from enumerators about implementation issues, should they arise in future tests, will also help future cost estimation. Such steps might include additional or revised training elements for enumerators on what details to report and the logistics of reporting them: to whom (such as the help desk or their supervisors), where (such as within the notes capability on their phones or elsewhere), and how. The Bureau had five key assumptions that support its estimate that using administrative records for the 2020 enumeration will cost less than relying on a traditional design and methods (see table 1). Bureau officials responsible for cost estimation described each of the assumptions to us and showed us where the assumptions were represented in key planning documents. The first three assumptions will materialize only if certain conditions are met. For example, to reduce the number of field offices, the Bureau will need to demonstrate that it can reduce the NRFU workload, as the workload is a key driver of the number of field offices needed. The Bureau articulated the last two assumptions as business decisions about its operations. We reviewed whether the assumptions were logically represented in the calculations used to produce the Bureau’s available cost estimates, and how the Bureau had supported the assumptions through prior Bureau experience, research, or testing. As shown in table 2, overall, the Bureau’s assumptions are logical and have support. The Bureau plans further testing of some of these assumptions to validate them further or to identify needed revisions. The Bureau’s planned tests in 2016 and beyond will inform several of these assumptions: Identify and remove vacant units from the NRFU workload.
Now that the Bureau has successfully demonstrated it can identify and remove vacant units from the NRFU workload using administrative records, it plans to continue testing this activity to gather additional data on how much the NRFU workload can be reduced. Reducing the number of field offices. Although the Bureau announced in October that it would open no more than 250 offices in 2020, the Bureau plans to conduct future large tests at other sites and to involve other operations in the tests to gather additional information about the number of field offices needed for 2020. Limiting the number of NRFU visits. Future tests will provide the Bureau opportunities to try to control the maximum number of visits enumerators make, and determine if it can reduce implementation issues that caused repeat visits during the 2015 test. Bureau officials told us that research is underway that will help validate that the new uses of administrative records will provide the benefits that the VDC and CFU operations formerly provided. They said that reliance on administrative records can provide an alternative to field visits intended simply to verify whether a housing unit exists or is vacant. Further, they said that the use of administrative records before and during the enumeration will remove the need for a CFU operation checking against the records after the enumeration, as was done in the 2010 Census. While we were reviewing these cost assumptions, documentation was not always readily available, and Bureau reporting on one of the assumptions needed to be corrected. We were able to identify the needed support. Bureau staff said that, moving forward, they decided to change the methodology for future reporting on the cost estimate to involve more factors and variables, such as the ratio of field workers to supervisors they would need in 2020 in addition to the NRFU workload assumption. 
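The way these assumptions combine can be illustrated with a toy calculation: workload reductions from administrative records flow directly into the number of field cases and hence into cost. In the sketch below, the two reduction rates are taken from the 2015 test results reported above, while the baseline workload and per-case cost are hypothetical placeholders, not Bureau figures:

```python
# Toy model of how workload-reduction assumptions feed an enumeration cost estimate.
# The baseline workload and per-case cost are illustrative assumptions only.
baseline_workload = 50_000_000  # nonresponding housing units (hypothetical)
cost_per_case = 25.0            # fully loaded field cost per case (hypothetical)

vacant_removal_rate = 0.116  # share removed as vacant via records (2015 test result)
records_enum_rate = 0.198    # share enumerated from records (2015 test result)

field_cases = baseline_workload * (1 - vacant_removal_rate) * (1 - records_enum_rate)
savings = (baseline_workload - field_cases) * cost_per_case
print(f"Remaining field cases: {field_cases:,.0f}")
print(f"Illustrative savings: ${savings:,.0f}")
```

A real estimate would layer in further variables, such as the ratio of field workers to supervisors mentioned above, but the structure is the same: each assumption enters as a multiplier on workload or cost.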
This change will help demonstrate the reliability of the estimates as well as ensure effective communication with others about them. Bureau officials told us that the revised total life-cycle cost estimate the Bureau released on October 6, 2015, was developed with leading practices from our cost estimating and assessment guide. After the Bureau releases the underlying model, methodology, and supporting documents for the estimate, we anticipate reviewing them to assess their reliability. Although administrative records have been discussed and used for the decennial census since the 1970s, the Bureau plans a more significant role for them in 2020, reducing data collection fieldwork in order to reduce the cost of the decennial census. The Bureau appears to have demonstrated the feasibility and potential effectiveness of administrative records for several uses during NRFU, which the Bureau estimates could save up to $1.4 billion compared to traditional census methods. The Bureau has also identified several additional opportunities to leverage administrative records to help improve the cost and quality of the 2020 Census. Yet the Bureau will need to consider when to end research pursuits that show less promise for substantially reducing the cost of the census or meeting other 2020 goals, so that it can focus resources on successfully refining and implementing activities that have greater potential. Knowing the deadlines by which final go/no-go decisions need to be made about which records the Bureau will use, how it will use them, and for which purposes will help ensure necessary activities are completed on time. Deadlines regarding still-uncertain purposes or those involving records the Bureau is still pursuing, such as NDNH and KidLink, as well as those from some states, will also help the Bureau prioritize which activities, or records, to continue pursuing or to abandon if time becomes a constraint.
The Bureau’s 2015 Census Test was generally an operational success in that it provided much useful information to inform cost estimates and decisions about how to design future operations. Test results and the information about implementation issues we and the Bureau documented should prove useful as the Bureau moves forward to refine its business requirements for further testing. As the Bureau plans for its 2016 tests, getting better information from them about which fieldwork cases are not being implemented as planned could help the Bureau link future estimates of cost savings to, and prioritize, design features it wants or needs to work well in final systems it develops, or to eliminate problematic features it deems not worth the trouble. The Bureau could capture this information more systematically through the help desk, within the notes capability on the phones used for interviewing or elsewhere, and by ensuring enumerators received training on where to record the issues, who to contact, what details to include, and the importance of doing so. The bottom line is that the amount of time the Bureau has to research and test the range of options for 2020 and ensure their readiness is limited. The Bureau has a lot of activity to follow through on in the time remaining, so it will be important to reduce the number of open options to ones the Bureau can manage well when turning the corner from research to a focus on development and implementation. The Bureau’s early cost savings assumptions related to its use of administrative records are logical, and the Bureau is taking steps to develop further support for them. We plan to review the Bureau’s October 2015 cost estimate and latest estimates of savings from using administrative records after the Bureau makes supporting documentation available. We recommend that the Secretary of Commerce direct the Under Secretary of the Economics and Statistics Administration and the Director of the U.S. 
Census Bureau to take the following two actions to help ensure the Bureau focuses its resources on those activities that show promise for substantially reducing enumeration cost. Establish clearly documented deadlines for making final decisions about which records to use for what purposes, particularly for purposes not yet demonstrated as feasible or involving records it does not already have access to, such as NDNH and KidLink. In advance of the 2016 Census Test and later tests, ensure systematic capture of information about fieldwork cases that experience problems by including information in enumerator training about where to record the issues, who to contact, what details to include, and the importance of doing so. We provided a draft of this report to the Secretary of the Department of Commerce for comment. In its written comments, reproduced in appendix II, the Department of Commerce concurred with our recommendations. The Department of Commerce also provided minor technical comments that were incorporated, as appropriate. We are sending copies of the report to the Secretary of Commerce, the Under Secretary of Economic Affairs, the Director of the U.S. Census Bureau, and interested congressional committees. The report also is available at no charge on GAO’s website at http://www.gao.gov. If you have any questions about this report please contact me at (202) 512-2757 or goldenkoffr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. The GAO staff that made major contributions to this report are listed in appendix III. The purpose of our review was to examine the Census Bureau’s (Bureau) plans for using administrative records for the 2020 Census. 
Specifically, our objectives were to review (1) the Bureau’s plans for using administrative records for 2020 and the opportunities and challenges the Bureau faces in using them; (2) the extent to which the Bureau’s key 2015 test of administrative records was implemented in accordance with its testing objectives and the Bureau’s experience implementing selected aspects of the test; and (3) the key assumptions supporting the cost savings estimates to be achieved from administrative records. For all objectives, we reviewed documentation from the Bureau on 2020 research and testing of administrative records, and reviewed documentary and testimonial evidence from Bureau officials responsible for researching and testing the use of administrative records. To address the first objective, we identified the administrative records the Bureau is considering and linked them to the possible uses the Bureau is considering for 2020. We identified which records the Bureau has access to and examined the Bureau’s authority under Title XIII and other statutes to use information from other agencies for the decennial census. We identified what decisions remain for the Bureau regarding administrative records as well as the Bureau’s timelines for making those decisions. We relied on our Schedule Assessment Guide as a source of criteria for assessing the activity the Bureau plans for administrative records. We took steps to verify that the related schedule data we examined reliably represented the Bureau’s schedule, such as by comparing the respective inclusion of major projects as well as checking that activities occurred in both. To address the second objective, we examined the Bureau’s 2015 Census Test, which took place in Maricopa County, Arizona. Using the Bureau’s test objectives as criteria, we conducted direct observations and interviews to assess the Bureau’s implementation of the test, and collected performance metrics on the test from the Bureau.
We also documented where we observed implementation deviating from either what Bureau temporary enumerators had been trained to expect or what we expected based on our prior experience with census field operations. We also identified problems that appeared to arise during implementation. We conducted 30 in-field observations of Bureau enumerators conducting Nonresponse Follow-up during the test and interviewed Bureau employees managing the test. Our sample of field observations was stratified for geographic balance across the test site area, but was not designed to be generalizable. When examining the extent to which implementation issues may have affected measurement of key cost drivers in the 2015 Census Test, we relied on our Cost Estimating and Assessment Guide for criteria. We communicated implementation issues we observed in near-real time to Bureau officials for their consideration as they conducted their own evaluation of the test and ongoing related research. To address the third objective, we inventoried the working cost assumptions provided by the Bureau and isolated those related to administrative records. To determine whether the assumptions were logical, we traced their incorporation into the respective calculations within the cost model the Bureau used to produce the earlier cost estimates available to us at the time of the audit. We examined the support and justifications the Bureau had documented for each assumption, the related results of the 2014 and 2015 Census Tests, and related materials the Bureau made available to us during the audit.
We did not review the Bureau’s cost estimation methodology or the reliability of either its preliminary cost savings estimates or any cost estimation information that was part of the Bureau’s October 2015 release. We conducted this performance audit from January 2015 through October 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Ty Mitchell, Assistant Director; Jeffrey DeMarco; Robert Gebhart; Richard Hung; Andrea Levine; Donna Miller; Shannin O’Neill; and Timothy Wexler made key contributions to this report.

The cost of the decennial census has steadily increased during the past 40 years, prompting the Bureau to reengineer key census-taking methods for the 2020 Census, including making greater use of information from administrative records. Given the potential cost savings associated with the use of administrative records, GAO reviewed (1) the Bureau's plans for using them and what opportunities and challenges the Bureau faces going forward; (2) the extent to which the Bureau's key 2015 test of them was implemented in accordance with objectives; and (3) the key assumptions supporting estimates of expected cost savings. To meet these objectives, GAO reviewed Bureau planning documents and test plans, interviewed Bureau officials, and observed implementation of the 2015 Census Test in Arizona. GAO also relied on its Schedule Assessment Guide.
Increased reliance on administrative records—information already provided to the government as it administers other programs—has been discussed since the 1970s as a possible way to improve the quality or reduce the cost of the decennial census, and it may finally play a significant role in the decennial census in 2020. The U.S. Census Bureau (Bureau) estimates that it can save $1.4 billion using administrative records, compared to relying solely on traditional methods. The Bureau recently completed its 2015 Census Test in Maricopa County, Arizona—a major test involving administrative records. The Bureau used this census test to demonstrate the feasibility of using administrative records to reduce the cost of its largest decennial field operation, following up door to door to enumerate households that do not respond to the census. Yet turning this estimated savings—and the potential savings from other uses of the records, such as using administrative records to help validate and update the address list rather than having to send temporary workers to every housing unit in the country—into a real cost reduction for the taxpayer will require detailed planning that includes milestones for ensuring outstanding challenges are addressed. This would include preventing disclosure of records and addressing concerns the public may have over their use, and obtaining access to remaining records. The Bureau has not set deadlines for deciding which records it will use and for which purposes, but doing so will help the Bureau complete needed activities on time and prioritize which activities—or records—to abandon if time and resources become a constraint. Bureau officials said they consider the test a large success because it demonstrated a variety of new methods and advanced technologies that are under consideration for the 2020 Census. 
The test also demonstrated the feasibility of a prototype system for managing the field operation, yet implementation issues with some of the prototype technology were not systematically reported or tracked, and may have affected the usefulness of test data. Systematic problems arising during test interviews can affect key test measures, such as the number of hours spent going door to door. Knowing which cases experienced such problems can help link cost estimates to specific design features and prioritize future research, development, and acquisition efforts. Key assumptions the Bureau used in estimating potential cost savings from administrative records are logical, and the Bureau plans to provide additional support for them. For example, the Bureau's assumption that it could reduce its follow-up workload follows clearly from the Bureau's use of administrative records to remove vacant units from among those housing units needing follow-up because people did not respond to the census, reducing that workload by 11.6 percent. This assumption was also validated by the Bureau's experience in its recent test, and the Bureau plans further testing of this assumption during future tests in 2016 and beyond. The Bureau released an updated life cycle cost estimate in October 2015, and GAO anticipates reviewing its reliability after the Bureau makes available support for the estimate. GAO recommends that the Census Director ensure that resources focus on activities with promise to reduce cost by documenting milestones related to deciding which records to use for which purposes and by systematically recording better information about implementation issues affecting specific cases in future tests. The Department of Commerce concurred with GAO's findings and recommendations, and provided minor technical comments, which were included in the final report. |